Nick Dobos — megaprompts and CLAUDE.md craft
by Ray Svitla
most people write prompts like text messages — quick, informal, hoping the AI figures out what they mean. Nick Dobos writes prompts like engineering specifications. that difference sounds minor until you see the outputs.
Dobos (@NickADobos) is one of those people who lives on the frontier of Claude Code usage. not as a researcher or an employee of Anthropic, but as a practitioner who treats system prompts and CLAUDE.md files as first-class engineering artifacts. his megaprompt approach — structured, detailed, opinionated instruction sets — produces results that generic prompting doesn’t.
the megaprompt philosophy
a megaprompt isn’t just a long prompt. it’s a structured document that gives the AI a complete operating context: role, constraints, output format, priorities, anti-patterns, examples, and meta-instructions about how to handle ambiguity.
where most people write:
```
review this code and suggest improvements
```
a megaprompt approach looks more like:
```markdown
## role
senior backend engineer specializing in Node.js performance

## task
review the attached code for performance issues

## priorities (in order)
1. O(n²) or worse algorithms in hot paths
2. unnecessary database queries (N+1 problems)
3. blocking operations in async contexts
4. memory leaks from unclosed resources

## output format
for each issue:
- file and line number
- what's wrong (one sentence)
- suggested fix (code snippet)
- severity: critical / moderate / minor

## constraints
- ignore style issues entirely
- ignore test files unless they test performance
- if unsure about severity, mark as moderate
```
the difference is precision. the short prompt produces a generic code review. the megaprompt produces a focused performance audit. same model, same code, dramatically different output.
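one way to internalize the structure is to treat a megaprompt as data rather than free text: fixed sections, assembled in a fixed order so nothing gets forgotten. a minimal sketch — every name here is illustrative, not Dobos's actual tooling or any real API:

```python
# sketch: a megaprompt as structured data with a fixed section order.
# SECTIONS and build_megaprompt are invented for illustration.

SECTIONS = ["role", "task", "priorities (in order)", "output format", "constraints"]

def build_megaprompt(parts: dict[str, str]) -> str:
    """Assemble sections in a fixed order; fail loudly if one is missing."""
    missing = [s for s in SECTIONS if s not in parts]
    if missing:
        raise ValueError(f"megaprompt missing sections: {missing}")
    return "\n\n".join(f"## {name}\n{parts[name].strip()}" for name in SECTIONS)

prompt = build_megaprompt({
    "role": "senior backend engineer specializing in Node.js performance",
    "task": "review the attached code for performance issues",
    "priorities (in order)": "1. O(n²) or worse algorithms in hot paths\n2. N+1 queries",
    "output format": "- file and line number\n- what's wrong\n- suggested fix\n- severity",
    "constraints": "- ignore style issues entirely\n- if unsure, mark as moderate",
})
print(prompt.splitlines()[0])  # → ## role
```

the point of the fixed order is the same as the point of the template: a missing section is an error, not a silent gap the model fills with guesses.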
CLAUDE.md as megaprompt
Dobos’s insight that resonates most with the self.md philosophy: your CLAUDE.md file is a megaprompt. it’s a persistent system instruction that shapes every interaction in the project. treating it as a casual config file is like treating your database schema as a rough draft.
he’s been vocal about Claude Code’s system prompt architecture — interested in what Anthropic includes by default, what can be overridden, and where the boundaries are. when Claude Code added the option to disable the coding-focused parts of its system prompt, Dobos flagged it immediately as significant: it meant Claude Code could become a general-purpose agent, not just a coding assistant.
this matters more than it sounds. if your CLAUDE.md can shape Claude Code’s behavior for non-coding tasks — research, writing, analysis, workflow automation — then the instruction file becomes the control surface for a general-purpose AI agent. the prompt is the product.
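to make the control-surface idea concrete, here is a hypothetical CLAUDE.md fragment for a non-coding workflow — the project and every line in it are invented for illustration, not taken from Dobos or Anthropic:

```markdown
## role
research assistant for a competitive-analysis project

## priorities (in order)
1. factual accuracy — cite a source or say "unverified"
2. brevity — one paragraph per finding
3. coverage — breadth comes last

## constraints
- never summarize a source you haven't been shown
- when a request is ambiguous, ask one clarifying question before starting
```

nothing in that file is about code, yet it uses the same megaprompt skeleton: role, ranked priorities, explicit constraints.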
the craft argument
there’s a counter-argument floating around that prompt engineering is dead. that models are smart enough now that you don’t need careful prompting. just tell the AI what you want and it’ll figure it out.
Dobos’s work is a quiet rebuttal. yes, Claude can produce decent output from casual prompts. but “decent” and “exactly what you need” are different things. the gap between them is prompt craft.
the analogy I keep coming back to: you can tell a contractor “build me a house” and you’ll get a house. it’ll have walls, a roof, doors, windows. it’ll be fine. or you can give them architectural drawings, material specs, and a clear vision, and you’ll get your house. the contractor’s skill matters in both cases, but the drawings determine whether the result matches your intent.
megaprompts are architectural drawings. they don’t make the AI smarter. they make the AI’s intelligence useful in the specific direction you need.
what I’ve taken from his approach
watching Dobos work has influenced how I think about CLAUDE.md files for the self.md project. specifically:
priority ordering matters. listing five things you want is different from ranking five things by importance. Claude makes tradeoffs constantly. if you don’t tell it which dimension to prioritize, it optimizes for whatever feels “balanced,” which often means mediocre on every dimension.
anti-patterns are instructions. telling Claude what not to do is as important as telling it what to do. “don’t suggest obvious improvements” or “skip style issues” — these constraints focus attention on what actually matters.
meta-instructions change behavior. “if you’re unsure, ask instead of guessing” or “when a task is ambiguous, pick the simpler interpretation” — instructions about how to handle edge cases reduce the variance in outputs.
specificity is kindness. a vague instruction forces Claude to guess what you want, and guessing introduces noise. a specific instruction removes guesswork. it’s not about controlling the AI — it’s about communicating clearly, same as with humans.
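the four habits above translate directly into CLAUDE.md lines. a hypothetical fragment, assembled from the examples in this section:

```markdown
## priorities (in order)
1. correctness
2. readability
3. performance

## anti-patterns
- don't suggest obvious improvements
- skip style issues

## edge cases
- if you're unsure, ask instead of guessing
- when a task is ambiguous, pick the simpler interpretation
```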
the broader point
prompt craft isn’t about manipulating AI. it’s about clear communication. the skills that make a megaprompt effective — specificity, priority ordering, explicit constraints, good examples — are the same skills that make a project brief effective, or a technical spec, or a job posting.
the people who are best at prompting AI are usually people who were already good at communicating requirements to humans. the medium is different, the skill is the same.
do you write your CLAUDE.md like a rough note, or like something you’d hand to a new team member on their first day?
→ why your CLAUDE.md sucks — common instruction file mistakes
→ CLAUDE.md guide — how to write it right
→ context engineering — the bigger picture