# Prompt Engineering for Agent Coding
Good prompts turn mediocre AI output into production-ready code. A well-structured prompt can yield 30-40% better code quality than a bare “write a function” request.
## Core Principles

**Specificity wins.** Vague prompts produce vague code. Describe:

- What you want built
- Why it exists (context shapes decisions)
- Constraints (language, framework, style)

**Structure over prose.** AI models parse structured input better than paragraphs. Use XML tags, lists, and clear sections.

**Iterate, don’t restart.** Refine outputs instead of rewriting prompts from scratch. Each iteration adds context.
## Techniques That Work

### 1. XML Structuring
Wrap distinct parts of your prompt in tags. Models parse this cleanly.
```
<task>Build a rate limiter middleware</task>
<context>Express.js API, Redis backend</context>
<constraints>
- Max 100 requests per minute per IP
- Return 429 with retry-after header
- Log blocked requests
</constraints>
```
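A prompt like this gives the agent enough to produce something close to shippable. As a rough sketch of the kind of output to expect, assuming Express with the `ioredis` client (names and setup here are illustrative, not prescribed):

```typescript
import type { Request, Response, NextFunction } from "express";
import Redis from "ioredis";

const redis = new Redis(); // assumes Redis on localhost:6379

const WINDOW_SECONDS = 60;
const MAX_REQUESTS = 100;

export async function rateLimiter(req: Request, res: Response, next: NextFunction) {
  const key = `ratelimit:${req.ip ?? "unknown"}`;

  // Count this request; start the 60s window on the first hit.
  // (INCR + EXPIRE is not atomic; fine for a sketch, but use a
  // Lua script or MULTI in production.)
  const count = await redis.incr(key);
  if (count === 1) {
    await redis.expire(key, WINDOW_SECONDS);
  }

  if (count > MAX_REQUESTS) {
    const ttl = await redis.ttl(key); // seconds until the window resets
    console.warn(`Rate limit exceeded for ${req.ip}: ${count} requests`);
    res
      .status(429)
      .set("Retry-After", String(Math.max(ttl, 1)))
      .json({ error: "Too many requests" });
    return;
  }

  next();
}
```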
### 2. Role Assignment
Tell the agent what expert it should be. This activates relevant knowledge patterns.
```
You are a senior backend engineer reviewing code for a fintech startup.
Security and error handling are critical.
```
### 3. Break Down Complex Tasks
Don’t ask for an entire system. Ask for components in sequence.
```
Step 1: Design the data model for user sessions
Step 2: Implement the session store interface
Step 3: Add Redis adapter for the interface
Step 4: Write middleware that uses the session store
```
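Each step produces a reviewable artifact before the next begins. Step 2, for instance, might yield an interface like this (a minimal sketch; `SessionStore` and `SessionData` are illustrative names, not from any library):

```typescript
// Shape of the data stored per session (illustrative fields).
interface SessionData {
  userId: string;
  createdAt: Date;
  expiresAt: Date;
}

// The contract every backend must satisfy. Step 3 implements it
// against Redis; Step 4's middleware consumes it.
interface SessionStore {
  get(sessionId: string): Promise<SessionData | null>;
  set(sessionId: string, data: SessionData, ttlSeconds: number): Promise<void>;
  destroy(sessionId: string): Promise<void>;
}
```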
### 4. Provide Examples
Show the pattern you want. One example beats ten sentences of explanation.
```
Format functions like this:
- Public functions: PascalCase
- Private functions: camelCase with underscore prefix
- Example: `func ProcessOrder()` and `func _validateInput()`
```
## Before and After

**Weak prompt:**

```
Write a function to validate emails
```

**Strong prompt:**

```
<task>Write an email validation function</task>
<language>TypeScript</language>
<requirements>
- Check format with regex
- Verify domain has MX record (async)
- Return { valid: boolean, reason?: string }
</requirements>
<constraints>
- No external libraries
- Throw on network errors
- Include JSDoc comments
</constraints>
```
The strong prompt produces code you can actually ship.
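Here is a rough sketch of what that prompt tends to produce (one plausible answer, not the canonical one; the regex is deliberately simple, since full RFC 5322 validation is rarely worth the complexity):

```typescript
import { resolveMx } from "node:dns/promises";

interface ValidationResult {
  valid: boolean;
  reason?: string;
}

/**
 * Validates an email address: a syntactic check, then an MX lookup
 * to confirm the domain can receive mail.
 *
 * @param email - The address to validate.
 * @returns `{ valid, reason? }`; `reason` is set only when invalid.
 * @throws If the DNS lookup fails for network reasons.
 */
export async function validateEmail(email: string): Promise<ValidationResult> {
  if (!/^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email)) {
    return { valid: false, reason: "Invalid format" };
  }

  const domain = email.split("@")[1];
  try {
    const records = await resolveMx(domain);
    if (records.length === 0) {
      return { valid: false, reason: "Domain has no MX records" };
    }
  } catch (err) {
    // ENOTFOUND / ENODATA mean the domain cannot receive mail;
    // anything else (timeout, resolver down) is a network error.
    const code = (err as NodeJS.ErrnoException).code;
    if (code === "ENOTFOUND" || code === "ENODATA") {
      return { valid: false, reason: "Domain has no MX records" };
    }
    throw err;
  }

  return { valid: true };
}
```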
## Anti-Patterns

**The wall of text.** Long paragraphs hide requirements. Use structure.

**Assumed context.** “Fix the bug” assumes the agent knows which bug. Always include error messages, file paths, and expected behavior (see the example after this list).

**Over-specification.** Don’t dictate implementation details unless necessary. “Use a for loop” rules out better solutions.

**No examples.** Describing output format in words when you could show it.

**One-shot complex requests.** Asking for a complete app in one prompt. Break it into reviewable chunks.
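A prompt that avoids the assumed-context trap spells out the failure. For instance (the file path and error text here are hypothetical):

```
<task>Fix the 500 error on POST /orders</task>
<error>
TypeError: Cannot read properties of undefined (reading 'items')
  at createOrder (src/orders/create.ts:42)
</error>
<expected>An empty cart returns 400 with a validation message instead of crashing</expected>
```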
## Quick Reference

| Element | Purpose | Example |
|---|---|---|
| `<task>` | What to build | `<task>Add pagination</task>` |
| `<context>` | Surrounding system | `<context>React + GraphQL</context>` |
| `<constraints>` | Hard requirements | `<constraints>No breaking changes</constraints>` |
| `<examples>` | Show, don’t tell | Code samples |
| Role statement | Activate expertise | “You are a security engineer…” |
## The Feedback Loop

1. Write a structured prompt
2. Review the output critically
3. Note what’s wrong or missing
4. Add that to the prompt as a constraint
5. Regenerate
Each cycle teaches you what the model needs to know.
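For example, if a review pass shows the generated code silently swallowing errors, fold that observation back in as a constraint on the next run (a hypothetical addition):

```
<constraints>
- No breaking changes
- Never catch an error without logging or rethrowing it
</constraints>
```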
Next: Building Your First Agent Workflow