Multi-Agent Content Pipeline with Claude Code

Claude Code skills turn multi-step content creation into autonomous pipelines. Instead of manually prompting each stage—research, write, edit, optimize—you define the workflow once and let agents hand off to each other.

We built a discover-pioneers skill that generates articles about people building personal AI systems. It has produced 94 articles, each following the same quality process: research, photo acquisition, writing, humanization, SEO. Zero manual prompting after the initial trigger.

Skill structure

Skills live in .claude/skills/{name}/:

.claude/
└── skills/
    └── discover-pioneers/
        ├── index.md          # Main orchestrator
        └── prompts/
            ├── research.md   # Research agent instructions
            └── write.md      # Write agent instructions

The index.md file coordinates the pipeline. Each prompt file contains specialized instructions for subagents.

# index.md frontmatter
---
name: discover-pioneers
description: Find and profile people who build personal AI operating systems
---

Invoke with /skill discover-pioneers or just describe what you want—Claude matches the description.

The pipeline

Five stages, each handled by a specialized agent:

Research → Photo → Write → Humanizer → SEO
   ↓         ↓        ↓         ↓        ↓
 JSON     Image   Markdown   Clean    Optimized
 data     file     draft     prose    article
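Conceptually, the hand-off is a left fold over stage functions. A minimal sketch of the data flow (the real skill delegates each stage to a subagent, not a Python function; `run_pipeline` and the lambda stand-ins are hypothetical):

```python
def run_pipeline(seed, stages):
    """Feed each stage the previous stage's artifact, in order."""
    artifact = seed
    for stage in stages:
        artifact = stage(artifact)  # JSON -> image -> draft -> prose -> article
    return artifact

# Toy stand-ins for the five agents, just to show the hand-off shape:
article = run_pipeline(
    "personal AI systems",
    [
        lambda topic: {"topic": topic},                    # research -> JSON data
        lambda data: {**data, "photo": None},              # photo -> image (or None)
        lambda data: f"Draft about {data['topic']}",       # write -> markdown draft
        lambda draft: draft.replace("Draft", "Article"),   # humanizer -> clean prose
        lambda prose: prose + " (optimized)",              # SEO -> optimized article
    ],
)
```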

Stage 1: Research agent

Searches for candidates, scores them, returns structured data:

# Research Agent Prompt

Find a person who:
1. Documents how they personally use AI (not just builds tools)
2. Has a systematic approach to AI-augmented work
3. Shares their methodology publicly

## Candidate Scoring

| Criteria | Points |
|----------|--------|
| Documents personal daily AI workflow | +6 |
| Has systematic methodology (named system) | +5 |
| Builds tools for personal use | +4 |
| Writes about AI philosophy | +3 |
| Active content about personal AI | +3 |
| Open-sources configs (CLAUDE.md, prompts) | +2 |

Output is JSON with name, bio, links, personal_system details, quotes, key insights.
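A hypothetical example of that output shape (field names follow the list above; the person and values are illustrative, not a real candidate):

```python
import json

# Illustrative research-agent output; the exact schema is whatever
# your research prompt specifies.
candidate = {
    "name": "Jane Example",
    "bio": "Writes about running her life on a personal AI stack.",
    "links": {"github": "https://github.com/janeexample"},
    "personal_system": {
        "daily_workflow": "Morning planning via a custom Claude prompt",
        "core_tools": ["Claude Code", "Obsidian"],
    },
    "quotes": ["I treat my prompts like source code."],
    "key_insights": ["Version-control your prompts."],
    "score": 14,  # sum of matched criteria from the scoring table
}

print(json.dumps(candidate, indent=2))
```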

Stage 2: Photo agent

Finds a real human photo:

1. Check GitHub avatar, Twitter, personal site
2. Download to static/images/people/{slug}.jpg
3. Verify it shows a real human face (use image analysis)
4. If cartoon/logo, delete and try next source
5. If no photo found, continue without it

The verification step matters. Early runs pulled logos and avatars until we added face detection.
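The steps above amount to a fallback loop. A sketch under stated assumptions: `acquire_photo` and both callables are hypothetical, and a real `looks_like_face` would call a face detector (OpenCV, an image-analysis prompt) rather than a byte check:

```python
from pathlib import Path
from typing import Callable, Iterable, Optional

def acquire_photo(
    sources: Iterable[str],
    download: Callable[[str], bytes],
    looks_like_face: Callable[[bytes], bool],
    dest: Path,
) -> Optional[Path]:
    """Try each source in order; keep the first image that passes verification."""
    for url in sources:
        try:
            data = download(url)
        except Exception:
            continue  # source unreachable: try the next one
        if looks_like_face(data):
            dest.write_bytes(data)
            return dest
        # cartoon/logo: discard and fall through to the next source
    return None  # stage rule: continue without a photo
```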

Stage 3: Write agent

Takes research JSON, produces article markdown:

## Article Structure

{Opening: Who is this person and what's unique? No header.}

## Background
- Previous work
- Current role
- What led them to build their system

## The System
**Daily workflow:** How they use AI day-to-day
**Core tools:** Table of what they use and how
**Key automations:** Specific things they've built

## Why This Matters
2-3 paragraphs of analysis. NOT bullet points.

## What You Can Steal
| Technique | How to Apply |
Actionable takeaways readers can copy.

The “What You Can Steal” section is mandatory. Every article must answer: what can the reader use today?

Stage 4: Humanizer

Removes AI writing patterns:

| Pattern | Fix |
|---------|-----|
| “In this article, we will explore” | Cut entirely |
| “It’s important to note that” | Just state the thing |
| “Leveraging”, “utilizing” | “using” |
| Three adjectives in a row | Pick one |
| “Let’s dive in” | Delete |

Run humanizer BEFORE SEO. SEO optimization can reintroduce patterns if you reverse the order.
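Some of these fixes are mechanical enough to sketch as regex rewrites (the patterns here are illustrative; judgment calls like "three adjectives in a row" stay with the humanizer agent):

```python
import re

# Hypothetical pattern table distilled from the fixes above.
PATTERNS = [
    (r"(?i)\bin this article,? we will explore\b[^.]*\.?\s*", ""),  # cut entirely
    (r"(?i)\bit'?s important to note that\s+", ""),                 # just state the thing
    (r"(?i)\b(leveraging|utilizing)\b", "using"),
    (r"(?i)\blet'?s dive in[.!]?\s*", ""),                          # delete
]

def humanize(text: str) -> str:
    for pattern, replacement in PATTERNS:
        text = re.sub(pattern, replacement, text)
    return text
```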

Stage 5: SEO optimization

Two passes:

1. Meta optimizer
2. Structure architect

Duplicate filtering

The research prompt receives a list of existing people:

## Already Written (EXCLUDE these)

{{existing_people}}

The orchestrator reads content/people/*.md, extracts titles, injects them into the research prompt. Simple but effective.

# Pseudocode for what the orchestrator does
from pathlib import Path

existing = []
for f in Path('content/people/').glob('*.md'):
    frontmatter = parse_frontmatter(f)  # any frontmatter reader works here
    existing.append(frontmatter['title'])

research_prompt = research_template.replace(
    '{{existing_people}}',
    '\n'.join(existing)
)
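The `parse_frontmatter` call above can be any YAML frontmatter reader. A minimal stdlib-only sketch that handles flat `key: value` pairs (real pipelines typically reach for python-frontmatter or PyYAML):

```python
from pathlib import Path

def parse_frontmatter(path: Path) -> dict:
    """Read key: value pairs between the opening and closing --- fences."""
    lines = path.read_text(encoding="utf-8").splitlines()
    if not lines or lines[0].strip() != "---":
        return {}  # no frontmatter block
    fields = {}
    for line in lines[1:]:
        if line.strip() == "---":
            break  # closing fence: stop before the article body
        if ":" in line:
            key, _, value = line.partition(":")
            fields[key.strip()] = value.strip()
    return fields
```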

Semantic search connection

The pipeline generates content. Semantic search makes it discoverable.

Each article gets indexed with vector embeddings:

from sentence_transformers import SentenceTransformer

model = SentenceTransformer('all-MiniLM-L6-v2')

def index_article(path: str):
    content = read_markdown(path)      # load the article's markdown text
    embedding = model.encode(content)  # float32 numpy array
    db.execute(
        "INSERT INTO documents (path, content, embedding) VALUES (?, ?, ?)",
        [path, content, embedding.tobytes()]
    )
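The `db` handle here is assumed to be a plain sqlite3 connection with a table like the following (a sketch; the filename `search.db` is hypothetical, and the embedding column stores the raw float32 bytes from `tobytes()`):

```python
import sqlite3

db = sqlite3.connect("search.db")  # hypothetical filename
db.row_factory = sqlite3.Row       # lets queries use row['embedding']
db.execute(
    """CREATE TABLE IF NOT EXISTS documents (
        path      TEXT PRIMARY KEY,
        content   TEXT,
        embedding BLOB  -- raw float32 bytes from embedding.tobytes()
    )"""
)
db.commit()
```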

Queries find related content by meaning:

import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def search(query: str, limit: int = 5):
    query_vec = model.encode(query)
    # SQLite doesn't do vector similarity natively;
    # brute-force with numpy (fine at this scale) or use a vector extension
    results = []
    for row in db.execute("SELECT path, content, embedding FROM documents"):
        doc_vec = np.frombuffer(row['embedding'], dtype=np.float32)  # match encode() dtype
        similarity = cosine_similarity(query_vec, doc_vec)
        results.append((similarity, row))
    results.sort(key=lambda r: r[0], reverse=True)  # sort on score only; rows aren't comparable
    return results[:limit]

See Personal Search for the full implementation.

Batch processing with cron

Run the skill on a schedule:

# In Clawdbot config
cron:
  - schedule: "0 9 * * *"  # 9am daily
    task: "/skill discover-pioneers"

Or trigger manually when you want a batch:

# Generate 5 articles
for i in {1..5}; do
  claude "/skill discover-pioneers"
  sleep 60  # Rate limiting
done

Results

94 people articles generated from the same skill definition:

| Metric | Value |
|--------|-------|
| Articles generated | 94 |
| Avg research time | ~2 min |
| Avg write time | ~3 min |
| Human review time | ~1 min |
| Manual prompting | 0 |

Quality stays consistent because the instructions don’t drift. The 94th article follows the same structure as the first.

What You Can Steal

| Technique | How to Apply |
|-----------|--------------|
| Prompt files per agent | Split complex skills into prompts/ folder |
| Scoring in research | Add point system to prioritize candidates |
| Duplicate injection | Read existing content, pass to research prompt |
| Humanizer-then-SEO order | Always clean prose before optimizing |
| Mandatory sections | Require “What You Can Steal” or equivalent |
| Photo verification | Check downloaded images are what you expect |

Building your own

Start with a simpler pipeline:

.claude/skills/
└── my-content-pipeline/
    ├── index.md
    └── prompts/
        └── research.md

index.md:

---
name: my-content-pipeline
description: Generate content about [your topic]
---

## Process

1. Research: Launch agent with prompts/research.md
2. Write: Use research output to draft article
3. Review: Check for patterns to remove
4. Save: Write to content/[section]/

## Research prompt injection

{{existing_articles}}

Add stages as you find quality gaps. The discover-pioneers skill grew from 2 stages to 5 over weeks of iteration.


Next: The Skills System

Topics: claude-code automation workflow agents