Prompt Engineering Patterns That Work
Raw prompts get raw results. Patterns add structure that LLMs can follow.
This isn’t theory. These are patterns that ship in production systems — from Anthropic’s Claude to OpenAI’s function calling. Learn five, use them forever.
Chain-of-Thought (CoT)
Force the model to show its work. Accuracy jumps on math, logic, and multi-step problems.
Zero-shot CoT — just add “Let’s think step by step”:
Prompt:
I bought 10 apples. Gave 2 to my neighbor, 2 to the repairman.
Bought 5 more. Ate 1. How many left?
Let's think step by step.
Response:
Started with 10.
Gave away 2 + 2 = 4. Left with 6.
Bought 5 more. Now have 11.
Ate 1. Left with 10.
Without CoT, the model often says “11” — it forgets the apple you ate.
Few-shot CoT — provide worked examples:
Q: Do the odd numbers in 4, 8, 9, 15, 12, 2, 1 add up to an even number?
A: Odd numbers are 9, 15, 1. Sum is 25. Answer: False.
Q: Do the odd numbers in 15, 32, 5, 13, 82, 7, 1 add up to an even number?
A:
Model learns the format and reasoning pattern from your example.
When to use: Math, logic puzzles, code debugging, any multi-step task.
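A minimal zero-shot CoT helper, as a sketch. The llm function here is a stand-in for whatever completion call you actually use, and with_cot is just an illustrative name:

def llm(prompt: str) -> str:
    # Placeholder: wire this to your real model call (Anthropic, OpenAI, a local model, etc.)
    raise NotImplementedError

def with_cot(question: str) -> str:
    # Zero-shot CoT: append the trigger phrase so the model reasons before answering
    return llm(f"{question}\n\nLet's think step by step.")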
Few-Shot Prompting
Show the model what you want. Examples beat instructions.
Prompt:
Text: "This movie was terrible" → Negative
Text: "Best meal I've ever had" → Positive
Text: "The service was slow but food was decent" →
Response:
Mixed
Two labeled examples taught the classification better than a paragraph of rules would.
Tips from the research:
- Format matters more than correct labels. Even random labels work if the structure is consistent.
- Pick diverse examples that cover edge cases.
- 3-5 examples is usually enough. More helps complex tasks.
When to use: Classification, extraction, format conversion, anything where “show don’t tell” applies.
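You can assemble that prompt programmatically. A sketch, reusing the llm placeholder from the CoT example; the few_shot_classify helper and its formatting are illustrative, not a library API:

def few_shot_classify(text: str, examples: list[tuple[str, str]]) -> str:
    # Each example is a (text, label) pair; keep the formatting identical across examples
    lines = [f'Text: "{ex}" → {label}' for ex, label in examples]
    lines.append(f'Text: "{text}" →')
    return llm("\n".join(lines))

examples = [
    ("This movie was terrible", "Negative"),
    ("Best meal I've ever had", "Positive"),
]
label = few_shot_classify("The service was slow but food was decent", examples)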
ReAct (Reason + Act)
Interleave thinking with actions. The model reasons, acts, observes, repeats.
Question: What's the population of the city where SpaceX is headquartered?
Thought 1: I need to find where SpaceX is headquartered.
Action 1: Search[SpaceX headquarters]
Observation 1: SpaceX headquarters is in Hawthorne, California.
Thought 2: Now I need the population of Hawthorne, California.
Action 2: Search[Hawthorne California population]
Observation 2: Population is approximately 88,000.
Thought 3: I have the answer.
Action 3: Finish[88,000]
ReAct combines chain-of-thought with tool use. The model explains why it takes each action, making it debuggable.
When to use: Agent systems, tool-calling, research tasks, anything that needs external data.
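A stripped-down ReAct loop as a sketch, again assuming the llm placeholder from above. The regex, the Finish convention, and the toy Search tool are illustrative; real agent frameworks add stricter parsing, retries, and step limits:

import re

def react(question: str, tools: dict, max_turns: int = 5) -> str:
    # Keep a running Thought / Action / Observation transcript and re-prompt with it each turn
    transcript = f"Question: {question}\n"
    for _ in range(max_turns):
        step = llm(transcript)  # expect a Thought line followed by an Action line
        transcript += step + "\n"
        match = re.search(r"Action(?: \d+)?: (\w+)\[(.*?)\]", step)
        if not match:
            break
        tool_name, arg = match.groups()
        if tool_name == "Finish":
            return arg  # the model has its answer
        observation = tools[tool_name](arg)  # run the tool, feed the result back in
        transcript += f"Observation: {observation}\n"
    return transcript

# Toy tool registry; swap in real search, databases, or code execution
tools = {"Search": lambda query: f"(results for {query!r} would go here)"}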
Prompt Chaining
Break complex tasks into discrete steps. Output of step N feeds step N+1.
┌──────────────┐ ┌──────────────┐ ┌──────────────┐
│ Extract data │ ──▶ │ Analyze │ ──▶ │ Format │
│ from docs │ │ patterns │ │ report │
└──────────────┘ └──────────────┘ └──────────────┘
Example — code review chain:
# Step 1: Extract issues
issues = llm(f"List bugs and issues in this code: {code}")
# Step 2: Prioritize
prioritized = llm(f"Rank these issues by severity: {issues}")
# Step 3: Generate fixes
fixes = llm(f"Write fixes for the top 3 issues: {prioritized}")
Each step has clear input/output. Easier to debug, test, and iterate.
When to use: Long-form content, multi-stage analysis, any task you’d naturally break into subtasks.
Role Prompting (System Prompts)
Give the model a persona. Changes tone, expertise level, and response style.
System: You are a senior backend engineer who writes Go.
You prefer simple solutions over clever ones. You always
consider error handling and edge cases.
User: How should I handle database connection retries?
The model now responds as that engineer — opinionated, Go-focused, practical.
Effective roles:
- “You are a skeptical code reviewer” — catches more bugs
- “You are a teacher explaining to a beginner” — simpler explanations
- “You are a security researcher” — finds vulnerabilities
- “You are the user’s pair programmer” — collaborative tone
When to use: Every prompt. System prompts set baseline behavior.
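In chat-style APIs the role typically goes into a system message. A minimal sketch; chat is a placeholder for your chat-completion client, not a specific SDK call:

SYSTEM_PROMPT = (
    "You are a senior backend engineer who writes Go. "
    "You prefer simple solutions over clever ones. "
    "You always consider error handling and edge cases."
)

def chat(messages: list[dict]) -> str:
    # Placeholder: wire this to whatever chat-completion client you use
    raise NotImplementedError

def ask(question: str) -> str:
    # The system message sets baseline behavior; the user message carries the actual question
    return chat([
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": question},
    ])

answer = ask("How should I handle database connection retries?")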
Pattern Combinations
Real systems combine patterns:
| Pattern | + | Pattern | Use Case |
|---|---|---|---|
| Few-shot | + | CoT | Complex reasoning with examples |
| ReAct | + | Prompt chain | Agent workflows |
| Role | + | Few-shot | Domain-specific tasks |
Claude’s extended thinking is CoT on steroids. OpenAI’s function calling is ReAct without the explicit reasoning. GPT-4’s code interpreter chains prompts automatically.
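For example, few-shot plus CoT is just worked, reasoned examples prepended to a new question. A rough sketch, reusing the llm placeholder:

COT_EXAMPLE = (
    "Q: Do the odd numbers in 4, 8, 9, 15, 12, 2, 1 add up to an even number?\n"
    "A: Odd numbers are 9, 15, 1. Sum is 25. Answer: False.\n"
)

def few_shot_cot(question: str) -> str:
    # Few-shot + CoT: the worked example demonstrates both the format and the reasoning style
    return llm(f"{COT_EXAMPLE}\nQ: {question}\nA:")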
Anti-Patterns
Vague instructions: “Make it better” fails. “Reduce to 100 words, keep technical accuracy” works.
No examples: Instructions alone miss edge cases. One example clarifies more than ten rules.
Single giant prompt: Break it up. Chains beat monoliths.
Ignoring the system prompt: You’re leaving the strongest lever untouched.
What You Can Steal
Add “Let’s think step by step” to any reasoning task. Free accuracy boost.
Start prompts with examples. Even one example beats pure instructions.
Chain for complex tasks. One step’s output is the next step’s input. Each step testable.
Use system prompts always. “You are a senior engineer” beats “please write good code.”
Try ReAct for tool use. Thought/Action/Observation loops make agents debuggable.
Implementation
Quick Python pattern for chaining:
def chain(*steps):
    def run(value):
        result = value
        for step in steps:
            result = step(result)
        return result
    return run
# Usage
review = chain(
    lambda code: llm(f"Find bugs in: {code}"),
    lambda bugs: llm(f"Prioritize: {bugs}"),
    lambda priority: llm(f"Fix top issue: {priority}"),
)
fixes = review(my_code)
For agent systems, see Agentic Design Patterns.
Next: Context Management — how to feed the right information to your prompts