The Three-Layer Workflow
Most people use the same AI tool for everything. Big prompts, let it figure things out, hope for the best. This produces inconsistent results and wastes tokens on simple tasks.
The solution: match your tool to the task. Andrej Karpathy breaks his coding time into three distinct layers. Harper Reed structures work into discrete phases. Both approaches share the same insight: different tools for different jobs.
Karpathy’s three layers
| Layer | Usage | Tool | Purpose |
|---|---|---|---|
| Tab completion | 75% | Cursor, Copilot | Line-by-line coding, boilerplate |
| Agentic tools | 20% | Claude Code, Codex | Multi-file changes, refactors |
| Reasoning models | 5% | o3, Claude with extended thinking | Architecture, algorithms, debugging |
The percentages are rough guides, not rules. But they shift how you think about which tool to reach for.
Layer 1: Tab completion (75%)
This is where most coding happens. You write code and comments. The AI completes lines and suggests obvious patterns. You stay in control.
Best for:
- Writing new code in familiar patterns
- Boilerplate and repetitive structures
- Quick edits where you know exactly what you want
Why it dominates: Higher bandwidth than prompting. Writing `// fetch user from database and return 404 if not found` and letting tab completion fill in the implementation is faster than explaining it in a chat window.
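In practice, that comment plus a function signature is usually enough context. A sketch of the kind of implementation tab completion typically fills in — the dict-backed store and the `NotFoundError` class are illustrative stand-ins, not a real framework:

```python
# Hypothetical in-memory "database" standing in for a real one.
USERS = {1: {"id": 1, "name": "Ada"}}

class NotFoundError(Exception):
    """Stand-in for a framework's HTTP 404 response."""
    status_code = 404

# fetch user from database and return 404 if not found
def fetch_user(user_id: int) -> dict:
    user = USERS.get(user_id)
    if user is None:
        raise NotFoundError(f"user {user_id} not found")
    return user
```

You write the comment and the `def` line; the model supplies the body. You stay in control because every accepted line passes through you.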
Layer 2: Agentic tools (20%)
Claude Code, Cursor Composer, Aider. Describe what you want, the AI reads your codebase, proposes changes across multiple files. You review and iterate.
Best for:
- Refactoring across many files
- Adding features that touch multiple modules
- Working in unfamiliar languages or frameworks
The trade-off: Agents tend to overcomplicate abstractions, over-use try/catch, and lack stylistic consistency. Review everything.
Layer 3: Reasoning models (5%)
Reserved for when you’re genuinely stuck. Architecture decisions, algorithms you don’t understand, bugs that make no sense.
Best for:
- Evaluating architectural approaches
- Understanding complex algorithms
- Finding subtle bugs after other methods fail
- Surfacing documentation or papers you didn’t know existed
How to use it: Load all context. Ask for approaches, not code. Evaluate options. Then implement with Layer 1 or 2.
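A Layer-3 prompt following that pattern might look like this sketch (the scenario is invented for illustration):

```text
Here is my schema, my main query patterns, and my traffic numbers:
[full context pasted]

I need a caching strategy. Don't write code. Give me three approaches,
the trade-offs of each, and the failure modes I should watch for.
```

Note the explicit "don't write code" — the output you want is a decision, which you then hand to Layer 1 or 2 for implementation.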
Harper Reed’s discrete phases
Reed structures AI-assisted development into three sequential phases:
| Phase | Tool | Output | Time |
|---|---|---|---|
| Brainstorm | GPT-4o | Refined spec | ~15 min |
| Plan | Reasoning model | Implementation steps | ~15 min |
| Execute | Claude Code / Aider | Working code | Varies |
His summary: “Brainstorm spec, then plan a plan, then execute using LLM codegen. Discrete loops.”
Phase 1: Brainstorm
Chat with a conversational model to develop requirements. The magic prompt:
“Ask me one question at a time so we can develop a thorough, step-by-step spec for this idea.”
Output: `spec.md` containing clear requirements.
Phase 2: Plan
Feed the spec to a reasoning model. Ask it to break the work into small, testable steps. Each step builds on previous steps. No orphaned code.
Output: `prompt_plan.md` containing sequential prompts for execution.
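A `prompt_plan.md` produced by this phase might look like the following sketch (the step contents are invented for illustration):

```markdown
# prompt_plan.md

## Step 1
Create the data model and a failing test for it. Output: models.py, test_models.py.

## Step 2
Building on Step 1, implement the CRUD functions until the tests pass. No new files.

## Step 3
Wire the CRUD functions into the existing CLI entry point. Everything from
Steps 1-2 must now be reachable — no orphaned code.
```

Each step names its output and builds on the previous one, which is what makes the execution phase mechanical.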
Phase 3: Execute
Follow the plan sequentially. Paste each prompt, run the code, verify, move to the next step.
Why phases shouldn’t bleed together
| Mixing phases | Result |
|---|---|
| Brainstorm while executing | Scope creep, lost direction |
| Skip planning | “Over your skis” — losing control mid-implementation |
| Plan without brainstorming | Building the wrong thing |
| Execute without context | LLM generates “ridiculous things” unrelated to requirements |
Poor planning makes execution chaotic and expensive. The upfront time is always recovered in avoided rework.
Tool selection by layer
| Task | Layer | Tool |
|---|---|---|
| Write a function | 1 | Tab completion |
| Add logging across codebase | 2 | Claude Code |
| Design database schema | 3 | Reasoning model |
| Fix a typo | 1 | Tab completion |
| Implement new API endpoint | 2 | Claude Code |
| Debug race condition | 3 | Reasoning model |
| Rename variable across files | 2 | Claude Code |
| Choose between Redis and Memcached | 3 | Reasoning model |
When in doubt: start with the lower layer. Escalate only when stuck.
The hero’s journey for beginners
Reed’s progression for newcomers:
| Stage | Tool | What you learn |
|---|---|---|
| 1 | Copilot | How AI completions work |
| 2 | Claude web (copy-paste) | Manual but educational |
| 3 | Cursor/Continue | Editor integration |
| 4 | Full agents | Claude Code, Aider |
Don’t skip stages. Jumping straight to agents is “annoying and weird.” Each stage builds intuition for the next.
Critical tips:
- Use paid models. Free versions produce results that make you give up prematurely.
- Develop writing skills. Clear prompting matters as much as coding ability.
- Accept the learning curve. This takes time.
Common mistakes mixing layers
| Mistake | Why it fails | Fix |
|---|---|---|
| Using agents for simple edits | Slower, expensive, over-engineered results | Tab complete instead |
| Tab completing architecture decisions | No exploration of alternatives | Use reasoning model |
| Reasoning model for implementation | Overkill, worse at actual code | Agent or tab complete |
| One tool for everything | Wrong fit for most tasks | Match tool to task |
| Vibe coding production code | Can’t debug what you don’t understand | Use layers appropriately |
The pattern: tools optimized for one thing do that thing well. General-purpose usage produces mediocre results.
Getting started progression
Week 1-2: Tab completion only
- Install Cursor or enable Copilot
- Write code normally, accept good completions
- Learn when AI helps vs. when it gets in the way
Week 3-4: Add planning
- Before coding, write specs in a separate chat
- Practice the “one question at a time” brainstorming
- Create `prompt_plan.md` before implementation
Month 2: Introduce agents
- Try Claude Code or Aider for a refactoring task
- Learn to review AI-generated diffs
- Understand when multi-file changes make sense
Ongoing: Reasoning when stuck
- Load full context before asking
- Request approaches, not implementations
- Evaluate trade-offs, then implement with lower layers
The mental model
Simple task? → Tab completion (Layer 1)
Multi-file change? → Agent (Layer 2)
Genuinely stuck? → Reasoning model (Layer 3)
New project? → Brainstorm → Plan → Execute
Know which layer you’re in. Know which phase you’re in. Mixing them is where things go sideways.
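The routing above is simple enough to write down as a lookup — a sketch, with the task categories invented for illustration:

```python
# Map task categories to Karpathy's layers; categories are illustrative.
LAYERS = {
    "simple_edit": (1, "tab completion"),
    "multi_file_change": (2, "agent"),
    "genuinely_stuck": (3, "reasoning model"),
}

def pick_layer(task: str) -> tuple[int, str]:
    """When in doubt, default to the lowest layer and escalate only when stuck."""
    return LAYERS.get(task, (1, "tab completion"))
```

The default case encodes the earlier rule: start with the lower layer, escalate only when stuck.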