Task Decomposition for AI Documentation
AI can write documentation. But asking for a complete document in one prompt produces inconsistent quality, invented facts, and content that misses your audience. The fix: decompose documentation tasks into discrete steps.
Tom Johnson, a technical writer at Google who has doubled his output with AI, frames it this way: the more you break complex processes into discrete steps, the better the outcome.
The Five-Step Iterative Workflow
1. Gather Source Material
AI output quality depends on input quality. Before prompting, collect:
- Engineer interviews (recordings or transcripts)
- Internal design docs and specs
- Code samples and API responses
- Existing documentation to update
- Bug threads and support tickets
Load 100+ pages into large-context models (Gemini handles 700K+ tokens). The AI can’t document what it doesn’t know.
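A minimal Python sketch of this step, assuming your gathered material lives in a single `sources/` folder (a hypothetical layout, not part of Johnson's workflow): it bundles every file into one labeled blob and gives a rough size check before you paste it into a large-context model.

```python
from pathlib import Path

# Hypothetical layout: one folder holding the gathered material
# (transcripts, design docs, API samples, support threads).
SOURCE_DIR = Path("sources")

def bundle_sources(source_dir: Path) -> str:
    """Concatenate every text file under a labeled header so the
    model can later cite which file a fact came from."""
    parts = []
    for path in sorted(source_dir.glob("*.md")) + sorted(source_dir.glob("*.txt")):
        parts.append(f"=== SOURCE: {path.name} ===\n{path.read_text(encoding='utf-8')}")
    return "\n\n".join(parts)

if __name__ == "__main__":
    bundle = bundle_sources(SOURCE_DIR)
    # Very rough token estimate (~4 characters per token) to check the
    # bundle fits your model's context window before pasting it in.
    print(f"~{len(bundle) // 4:,} tokens across bundled sources")
```

Labeling each file now pays off later, when you ask the model to cite which source a fact came from.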
2. Distill with AI
Use AI to process raw material before writing. Ask for:
- Summarize the key concepts from these engineering docs.
- What are the main user-facing features?
- What prerequisites does the reader need?
- What gaps exist in this documentation?
You end up with extracted facts instead of hallucinations.
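If you prefer to script this instead of pasting prompts by hand, here's one way the distillation questions might be templated against the bundled sources from step 1. The "answer only from the sources" guardrail is my own wording, not part of the workflow above.

```python
DISTILL_QUESTIONS = [
    "Summarize the key concepts from these engineering docs.",
    "What are the main user-facing features?",
    "What prerequisites does the reader need?",
    "What gaps exist in this documentation?",
]

def build_distill_prompt(source_bundle: str) -> str:
    """Wrap the gathered sources with the distillation questions.
    The result is a single prompt you can paste into your model
    or send through whatever API you use."""
    questions = "\n".join(f"- {q}" for q in DISTILL_QUESTIONS)
    return (
        "Answer each question using only the sources below. "
        "If the sources don't cover something, say so.\n\n"
        f"Questions:\n{questions}\n\n"
        f"Sources:\n{source_bundle}"
    )

if __name__ == "__main__":
    print(build_distill_prompt("=== SOURCE: design-doc.md ===\n..."))
```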
3. Generate Section by Section
Don’t ask for complete documents. Work through your outline one topic at a time.
| Request Type | AI Effectiveness |
|---|---|
| “Write the full API reference” | Low - misses context, invents endpoints |
| “Write the authentication section” | Medium - may still drift |
| “Write a 3-paragraph explanation of OAuth flow for this API” | High - focused, verifiable |
Each section stays grounded in your source material.
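A sketch of what section-by-section generation can look like in practice, reusing the prompt-template format shown later in this guide. The outline entries and facts are hypothetical placeholders.

```python
# Hypothetical outline distilled from the source material.
OUTLINE = [
    ("Authentication", "Write a 3-paragraph explanation of the OAuth flow for this API."),
    ("Rate limits", "Write a short section on rate limits, including retry behavior."),
    ("Errors", "Write a reference table of error codes and what callers should do."),
]

def section_prompt(topic: str, request: str, source_facts: str) -> str:
    """One narrow, verifiable request per outline entry, always grounded
    in the distilled facts rather than the model's memory."""
    return (
        f"<task>Write documentation section</task>\n"
        f"<topic>{topic}</topic>\n"
        f"<request>{request}</request>\n"
        f"<source-facts>\n{source_facts}\n</source-facts>"
    )

if __name__ == "__main__":
    facts = "- OAuth 2.0 with PKCE\n- Tokens expire in 1 hour"
    for topic, request in OUTLINE:
        print(section_prompt(topic, request, facts))
        print("---")
```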
4. Apply Chain of Thought
Separate evaluation from action. This prevents the AI from glossing over problems.
Pass 1: Identify issues
Review this section for:
- Technical accuracy against the source docs
- Missing prerequisites
- Unclear explanations
- Assumed knowledge that should be explicit
Pass 2: Fix issues
Based on the issues identified, rewrite the section.
Keep the same structure but address each problem.
Two passes catch errors one pass buries.
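Here's a rough sketch of the two passes as separate prompts. The example draft and the numbered issue are invented for illustration; in practice the issues list comes back from the model in pass 1.

```python
def review_prompt(draft: str) -> str:
    """Pass 1: ask only for a numbered list of problems, no rewriting."""
    return (
        "Review this section for technical accuracy against the source docs, "
        "missing prerequisites, unclear explanations, and assumed knowledge "
        "that should be explicit. Number each issue. Do not rewrite anything yet.\n\n"
        f"Section:\n{draft}"
    )

def fix_prompt(draft: str, issues: str) -> str:
    """Pass 2: rewrite against the numbered issues from pass 1,
    keeping the structure intact."""
    return (
        "Rewrite the section below, addressing each numbered issue. "
        "Keep the same structure.\n\n"
        f"Issues:\n{issues}\n\nSection:\n{draft}"
    )

if __name__ == "__main__":
    draft = "OAuth tokens last forever and never need refreshing."
    print(review_prompt(draft))
    # In practice: send review_prompt() to the model, capture its numbered
    # list, then feed that list into fix_prompt() as a second call.
    print(fix_prompt(draft, "1. Token lifetime is wrong: source docs say 1 hour."))
```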
5. Review and Iterate
Multiple cycles for different concerns:
| Review Pass | Focus |
|---|---|
| Technical | Facts match source material |
| Code | Code examples run correctly |
| Editorial | Style guide compliance |
| Audience | Appropriate for skill level |
Each pass adds a layer of quality. Skipping passes shows in the output.
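One way to keep the passes from blurring together is to run each as its own focused prompt. The checklist wording below is my paraphrase of the table above, not a canonical list.

```python
# Each review pass gets one concern and one prompt.
REVIEW_PASSES = {
    "Technical": "Check every claim against the source material; flag anything unsupported.",
    "Code": "Confirm each code example runs as written and matches the described behavior.",
    "Editorial": "Check style-guide compliance: terminology, headings, voice.",
    "Audience": "Flag assumed knowledge the stated audience wouldn't have.",
}

def pass_prompt(pass_name: str, instructions: str, section: str) -> str:
    """One prompt per pass, so each cycle stays focused on a single concern."""
    return f"Review pass: {pass_name}\n{instructions}\n\nSection:\n{section}"

if __name__ == "__main__":
    section = "[Paste draft section]"
    for name, instructions in REVIEW_PASSES.items():
        print(pass_prompt(name, instructions, section))
        print("---")
```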
Task Size and AI Effectiveness
The relationship between task scope and output quality:
| Task Scope | Quality | Why |
|---|---|---|
| Full document | Low | Too many decisions, too much context drift |
| Chapter/section | Medium | Manageable but still accumulates errors |
| Single concept | High | Focused and verifiable |
| Code example | Highest | Concrete, testable, minimal interpretation |
Start small. Assemble larger pieces from verified components.
Prompt Templates
Source Distillation
<task>Extract documentation requirements</task>
<sources>
[Paste engineer interview transcript]
[Paste design doc excerpts]
</sources>
<output>
- Key concepts to document
- Required prerequisites
- Code examples needed
- Questions to clarify with engineers
</output>
Section Generation
<task>Write documentation section</task>
<topic>User authentication flow</topic>
<audience>Backend developers familiar with REST APIs</audience>
<source-facts>
- OAuth 2.0 with PKCE
- Tokens expire in 1 hour
- Refresh tokens last 30 days
</source-facts>
<constraints>
- 3-4 paragraphs max
- Include one code example
- Link to full OAuth spec for details
</constraints>
Chain-of-Thought Review
<task>Review then improve</task>
<section>
[Paste draft section]
</section>
<step-1>
List specific problems with technical accuracy,
clarity, and completeness. Number each issue.
</step-1>
<step-2>
Rewrite the section addressing each numbered issue.
</step-2>
Common Mistakes
| Mistake | What Happens | Fix |
|---|---|---|
| One-shot full document | Hallucinated facts, inconsistent tone | Decompose into sections |
| No source material | AI invents plausible-sounding details | Load sources first |
| Skip evaluation pass | Problems get buried | Separate identify from fix |
| Accept first draft | Raw AI output needs editing | Multiple review cycles |
| Wrong task size | Too broad or too narrow | Match scope to what you can verify |
The Review Bottleneck
Decomposition creates more pieces to review. This is the trade-off: better quality requires more human attention per section, but each review is simpler.
Mitigation strategies:
- Batch similar sections. Review all code examples together, all concept explanations together.
- Template verification. Create checklists for each section type.
- Source linking. Require AI to cite which source doc each fact comes from (one way to set this up is sketched below).
The review bottleneck is real, but it beats publishing hallucinated documentation.
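A sketch of the source-linking idea: register each source under a short ID and require inline citations so reviewers can spot-check claims instead of re-reading everything. The `[S#]` convention and the file names are assumptions, not an established standard.

```python
# Hypothetical source registry: short IDs the model can cite inline.
SOURCES = {
    "S1": "auth-design-doc.md",
    "S2": "engineer-interview-transcript.txt",
    "S3": "support-ticket-4821.txt",
}

def cited_section_prompt(section_request: str) -> str:
    """Append a citation requirement so every factual claim points back
    to a registered source, making the review pass a spot-check."""
    source_list = "\n".join(f"[{sid}] {name}" for sid, name in SOURCES.items())
    return (
        f"{section_request}\n\n"
        "After every factual claim, add the ID of the source it comes from, "
        "e.g. [S1]. If no source supports a claim, write [UNSOURCED] instead.\n\n"
        f"Sources:\n{source_list}"
    )

if __name__ == "__main__":
    print(cited_section_prompt("Write the authentication section."))
```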
Tool Matching
Different AI tools excel at different documentation tasks:
| Task | Recommended Tool | Why |
|---|---|---|
| Distilling large source sets | Gemini | 700K+ token context |
| Creative structuring | Claude | Strong at organization |
| Grammar and style | ChatGPT | Good at polish |
| Code example generation | Claude Code | Can verify code runs |
Match the tool to the task. See Three-Layer Workflow for the broader principle.
Practical Workflow
A realistic documentation session:
- Load source docs into Gemini
- Ask for key concepts and gaps
- Create outline from distilled material
- Generate each section in Claude with structured prompts
- Chain-of-thought review each section
- Assemble and do editorial pass
- Technical review with engineers
Total time: longer than asking for a full doc in one prompt. Quality: noticeably better.
When This Doesn’t Apply
Some documentation benefits from full-document generation:
- Glossaries. Lists of definitions are self-contained.
- Changelogs. Structured, repetitive, low interpretation.
- Boilerplate. License text, standard disclaimers.
For anything requiring technical accuracy or audience awareness, decompose.
Next: Tom Johnson’s AI Technical Writing