Prompt Engineering for Agent Coding

Good prompts turn mediocre AI output into production-ready code. A well-structured prompt can produce code that is 30-40% better than what “write a function” gets you.

Core Principles

Specificity wins. Vague prompts produce vague code. Describe:

- The target language and framework
- Expected inputs and outputs
- Error handling and edge cases
- Hard constraints (performance, dependencies, compatibility)

Structure over prose. AI models parse structured input better than paragraphs. Use XML tags, lists, and clear sections.

Iterate, don’t restart. Refine outputs instead of rewriting prompts from scratch. Each iteration adds context.

Techniques That Work

1. XML Structuring

Wrap distinct parts of your prompt in tags. Models parse this cleanly.

<task>Build a rate limiter middleware</task>
<context>Express.js API, Redis backend</context>
<constraints>
- Max 100 requests per minute per IP
- Return 429 with retry-after header
- Log blocked requests
</constraints>
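
For comparison, here is the shape of middleware a prompt like this tends to produce: a minimal sketch assuming Express with the `ioredis` client. The key scheme and logging are illustrative, not prescribed.

```typescript
import { Request, Response, NextFunction } from "express";
import Redis from "ioredis";

// Assumes a Redis instance on the default localhost:6379.
const redis = new Redis();

const WINDOW_SECONDS = 60;
const MAX_REQUESTS = 100;

export async function rateLimiter(
  req: Request,
  res: Response,
  next: NextFunction
): Promise<void> {
  const key = `ratelimit:${req.ip}`; // illustrative key scheme

  // Count this request; start the window on the first hit.
  const count = await redis.incr(key);
  if (count === 1) {
    await redis.expire(key, WINDOW_SECONDS);
  }

  if (count > MAX_REQUESTS) {
    const ttl = await redis.ttl(key);
    console.warn(`Blocked ${req.ip}: ${count} requests in window`);
    res.set("Retry-After", String(Math.max(ttl, 1)));
    res.status(429).json({ error: "Too many requests" });
    return;
  }

  next();
}
```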

2. Role Assignment

Tell the agent what expert it should be. This activates relevant knowledge patterns.

You are a senior backend engineer reviewing code for a fintech startup.
Security and error handling are critical.

3. Break Down Complex Tasks

Don’t ask for an entire system. Ask for components in sequence.

Step 1: Design the data model for user sessions
Step 2: Implement the session store interface
Step 3: Add Redis adapter for the interface
Step 4: Write middleware that uses the session store
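
Each step produces something small enough to review before moving on. Step 2 alone, for example, might yield an interface like this (a sketch with illustrative names):

```typescript
// Sketch of the step 2 deliverable: a storage-agnostic session
// store that the step 3 Redis adapter would implement.
export interface SessionData {
  userId: string;
  createdAt: Date;
  expiresAt: Date;
}

export interface SessionStore {
  get(sessionId: string): Promise<SessionData | null>;
  set(sessionId: string, data: SessionData, ttlSeconds: number): Promise<void>;
  destroy(sessionId: string): Promise<void>;
}
```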

4. Provide Examples

Show the pattern you want. One example beats ten sentences of explanation.

Format functions like this:
- Public functions: PascalCase
- Private functions: camelCase with underscore prefix
- Example: `func ProcessOrder()` and `func _validateInput()`

Before and After

Weak prompt:

Write a function to validate emails

Strong prompt:

<task>Write an email validation function</task>
<language>TypeScript</language>
<requirements>
- Check format with regex
- Verify domain has MX record (async)
- Return { valid: boolean, reason?: string }
</requirements>
<constraints>
- No external libraries
- Throw on network errors
- Include JSDoc comments
</constraints>

The strong prompt produces code you can actually ship.
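
For reference, the output should look roughly like this. This is a plausible sketch, not canonical output, using only Node’s built-in `dns/promises` module to satisfy the no-external-libraries constraint:

```typescript
import { resolveMx } from "dns/promises";

const EMAIL_REGEX = /^[^\s@]+@[^\s@]+\.[^\s@]+$/;

interface ValidationResult {
  valid: boolean;
  reason?: string;
}

/**
 * Validates an email address by format check and MX record lookup.
 * @param email - The address to validate.
 * @returns Whether the address is valid, with a reason when it is not.
 * @throws If the DNS lookup fails for network reasons.
 */
export async function validateEmail(email: string): Promise<ValidationResult> {
  if (!EMAIL_REGEX.test(email)) {
    return { valid: false, reason: "Invalid format" };
  }

  const domain = email.split("@")[1];
  try {
    const records = await resolveMx(domain);
    if (records.length === 0) {
      return { valid: false, reason: "Domain has no MX records" };
    }
  } catch (err: unknown) {
    const code = (err as NodeJS.ErrnoException).code;
    // ENOTFOUND/ENODATA mean the domain cannot receive mail;
    // anything else is a network error and should propagate.
    if (code === "ENOTFOUND" || code === "ENODATA") {
      return { valid: false, reason: "Domain has no MX records" };
    }
    throw err;
  }

  return { valid: true };
}
```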

Anti-Patterns

The wall of text. Long paragraphs hide requirements. Use structure.

Assumed context. “Fix the bug” assumes the agent knows which bug. Always include error messages, file paths, and expected behavior.

Over-specification. Don’t dictate implementation details unless necessary. “Use a for loop” limits better solutions.

No examples. Describing output format in words when you could show it.

One-shot complex requests. Asking for a complete app in one prompt. Break it into reviewable chunks.

Quick Reference

| Element | Purpose | Example |
| --- | --- | --- |
| `<task>` | What to build | `<task>Add pagination</task>` |
| `<context>` | Surrounding system | `<context>React + GraphQL</context>` |
| `<constraints>` | Hard requirements | `<constraints>No breaking changes</constraints>` |
| `<examples>` | Show, don’t tell | Code samples |
| Role statement | Activate expertise | “You are a security engineer…” |

The Feedback Loop

  1. Write structured prompt
  2. Review output critically
  3. Note what’s wrong or missing
  4. Add that to the prompt as a constraint
  5. Regenerate

Each cycle teaches you what the model needs to know.
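
For example, if the first output used `any` types and skipped input validation, don’t rewrite the prompt; fold the fix back in as a constraint (the wording here is illustrative):

<constraints>
- No `any` types; use explicit interfaces
- Validate all inputs before processing
</constraints>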


Next: Building Your First Agent Workflow

Topics: prompting ai-coding ai-agents