Ty Dunn's Context Engineering for AI Coding

Ty Dunn is the co-founder and CEO of Continue, the leading open-source AI code assistant with 23,000+ GitHub stars and 11,000+ Discord community members. His mission: amplify developers, not automate them. He coined “context engineering” as the discipline that separates productive AI coding from frustrating prompting.
Twitter | GitHub | Blog | LinkedIn
Background
- BS in Cognitive Science (Computer Science) from University of Michigan, 2015-2019
- Built dialogue management systems while studying language and computation
- First product manager at Rasa, grew to group PM
- Rasa’s open-source conversational AI framework reached millions of downloads and 17,000 GitHub stars
- Founded Continue in June 2023 with Nate Sesti
- Y Combinator Summer 2023
Context Engineering
Dunn identifies a core problem: AI coding assistants fail because of bad context, not bad models.
From his Context Engineering blog post:
“Bad context is worse than no context.”
When irrelevant or outdated information reaches the model, you get context poisoning. The AI becomes overconfident in incorrect information and propagates errors through your codebase.
| Problem | Cause |
|---|---|
| Context crisis | Knowledge lives in Zoom calls and hallway conversations |
| Scattered docs | Confluence, Jira, Slack, email have no unified access |
| Wrong structure | Documentation optimized for human reading, not AI consumption |
| Silent degradation | Information rots without visible breakage like code |
The Three Data Gaps
Dunn argues that source code repositories miss critical information for AI assistance:
- Process data - The step-by-step flow developers take to complete tasks
- Context data - What information informed each decision point
- Reasoning data - Natural language explaining the “why”
His solution: collect data from developer-LLM interactions. When developers accept or reject suggestions, edit completions, or iterate on prompts, they generate implicit feedback signals.
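Dunn's post doesn't prescribe a schema for this data, but a minimal sketch of one interaction record might look like the following. All field names here are illustrative, not Continue's actual development-data format:

```ts
// Illustrative shape of one developer-LLM interaction record.
// Field names are hypothetical, not Continue's actual development-data schema.
interface InteractionEvent {
  timestamp: string;          // when the suggestion was shown
  prompt: string;             // what was sent to the model
  completion: string;         // what the model returned
  accepted: boolean;          // did the developer accept the suggestion?
  editedAfterAccept: boolean; // was the completion modified afterward?
  filepath: string;           // where the interaction happened
}

// Acceptance rate is one simple effectiveness signal derivable from such events.
function acceptanceRate(events: InteractionEvent[]): number {
  if (events.length === 0) return 0;
  return events.filter((e) => e.accepted).length / events.length;
}
```

Accept/reject decisions capture process data, the prompt captures context data, and post-accept edits hint at where the model's reasoning diverged from the developer's.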
Continue’s Approach
Continue installs as a VS Code or JetBrains extension, connecting to any LLM provider:
```json
{
  "models": [
    {
      "title": "Claude 3.5 Sonnet",
      "provider": "anthropic",
      "model": "claude-3-5-sonnet-20241022"
    }
  ],
  "rules": [
    {
      "slug": "typescript-conventions",
      "rule": "Use strict TypeScript. Prefer interfaces over types."
    }
  ]
}
```
Key differentiators:
- Rules encode standards that apply automatically
- Context providers connect to Linear, Jira, incident systems (configuration sketched below)
- Development data collection measures AI effectiveness
- Open-source core with Apache 2.0 license
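Context providers are configured in the same config file. The snippet below is a hedged sketch; provider names and params vary by Continue version, so check Continue's docs for the exact keys:

```json
{
  "contextProviders": [
    { "name": "diff" },
    { "name": "jira", "params": { "domain": "company.atlassian.net" } }
  ]
}
```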
Rules Files
Continue supports .continue/rules/ directories for project-specific context. From Dunn’s async context engineering article:
```md
# verify-changes.md
When modifying .js or .ts files, always verify:
1. Import paths resolve correctly
2. Type definitions match usage
3. Tests pass after changes
```
These rules trigger automatically based on file type, catching errors like hallucinated import paths before commits.
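Scoping by file type is typically expressed in the rule file itself. Assuming frontmatter-based globs (the exact frontmatter keys may differ between Continue versions), the rule above could be written as:

```md
---
globs: "**/*.{js,ts}"
---
Always verify that import paths resolve correctly,
type definitions match usage, and tests pass after changes.
```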
Task Decomposition
Dunn describes a progression from supervised to autonomous AI assistance:
| Stage | Trust Level | Example |
|---|---|---|
| Supervised | Low | Accept/reject each suggestion |
| Assisted | Medium | AI drafts, human reviews |
| Autonomous | High | AI works independently, human verifies |
The pattern that emerges: repetitive tasks become obvious candidates for automation. Trust builds gradually through repetition.
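One way to make this progression concrete is to gate review requirements on how often similar tasks have recently succeeded without human edits. The thresholds and names below are illustrative, not from Dunn:

```ts
type Stage = "supervised" | "assisted" | "autonomous";

interface TaskHistory {
  succeededUnedited: number; // completions accepted with no follow-up edits
  total: number;             // all attempts at this task type
}

// Hypothetical policy: autonomy is earned per task type, not assumed.
function stageFor(history: TaskHistory): Stage {
  if (history.total < 5) return "supervised"; // too little evidence: review everything
  const rate = history.succeededUnedited / history.total;
  if (rate > 0.9) return "autonomous"; // consistently clean: let the AI run
  if (rate > 0.6) return "assisted";   // mostly clean: AI drafts, human reviews
  return "supervised";
}
```

Every new task type starts supervised, mirroring the table above: trust is accumulated through repetition, not granted upfront.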
Async Context Engineering
For developers working in fragmented sessions (Dunn mentions parenting as a common interrupt source), the workflow becomes:
- Set context once - Load architecture docs, rules, examples
- Let AI work - 15-minute autonomous bursts
- Verify on return - Automated quality gates catch errors (sketched below)
The rules act as persistent guardrails that don’t require re-prompting each session.
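What the quality gates look like depends on the project, but a minimal sketch is a script that reruns the project's existing checks after each autonomous burst. The commands here are placeholders for whatever the project already uses:

```ts
// Minimal quality-gate sketch: run existing checks after an autonomous burst
// and fail loudly if anything regresses.
import { execSync } from "node:child_process";

const gates = [
  "npx tsc --noEmit", // type-check without emitting files
  "npm test",         // run the test suite
  "npx eslint .",     // lint the codebase
];

for (const cmd of gates) {
  try {
    execSync(cmd, { stdio: "inherit" });
  } catch {
    console.error(`Quality gate failed: ${cmd}`);
    process.exit(1);
  }
}
console.log("All quality gates passed.");
```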
Key Takeaways
| Principle | Implementation |
|---|---|
| Engineer context, don’t prompt | Encode standards as rules, not ad-hoc instructions |
| Bad context poisons results | Filter irrelevant information aggressively |
| Collect development data | Track LLM interactions to measure effectiveness |
| Build sustainable processes | Treat context engineering as ongoing, not one-time |
| Trust builds through repetition | Start supervised, move to autonomous |
Links
- Continue.dev
- Continue GitHub
- Blog: Context Engineering
- Blog: Collect Development Data
- Blog: Async Context Engineering
- Y Combinator: Continue
- Heavybit Podcast: Possibilities with Ty Dunn
Next: Jesse Vincent’s Superpowers Framework