Cory Zue on AI-Augmented Solo Development
Cory Zue spent a decade as CTO of Dimagi before taking a sabbatical that turned into full-time indie hacking. Now based in Cape Town, he runs SaaS Pegasus, a Django boilerplate that helps developers launch SaaS products faster. He also built Scriv, an AI chatbot for company data.
What makes Cory’s perspective valuable: he’s not just using AI tools—he’s also watching them potentially disrupt his main business. This creates unusually honest analysis of both the capabilities and limitations of AI-assisted development.
The MCP Moment
In April 2025, Cory wrote about his first real experience with Model Context Protocol servers. A friend texted him: “Claude Code, well configured, is a 10xer.” Skeptical but curious, he spent a day experimenting.
The setup was surprisingly simple. Five lines of JSON gave Claude access to his PostgreSQL database:
```json
{
  "mcpServers": {
    "postgres": {
      "command": "npx",
      "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://..."]
    }
  }
}
```
He started asking questions about his Pegasus customer data. How many users signed up? What’s the most popular CSS framework? Then he got ambitious: “Can you generate a plot showing the average duration between signing up and creating your first project?”
Claude one-shotted a Python script that produced a fully annotated chart. The whole exchange cost 63 cents.
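The core of a script like that is small. This is not Claude's actual output; the data shape (signup timestamp paired with first-project timestamp, presumably from a join against the users and projects tables) is an assumption for illustration, and the real version also rendered an annotated matplotlib chart:

```python
from datetime import datetime
from statistics import mean

def days_to_first_project(rows):
    """Average gap, in days, between signing up and creating a first project.

    `rows` is a list of (signed_up_at, first_project_at) datetime pairs --
    e.g. the result of joining each user against their earliest project row.
    """
    gaps = [(created - signed_up).total_seconds() / 86400
            for signed_up, created in rows]
    return mean(gaps)

# Toy data: one user converted the same day, one took three days.
rows = [
    (datetime(2025, 4, 1, 9), datetime(2025, 4, 1, 15)),
    (datetime(2025, 4, 2), datetime(2025, 4, 5)),
]
average = days_to_first_project(rows)
```

The interesting part isn't the arithmetic; it's that the MCP server let Claude discover the schema, write the join, and run this end to end without anyone pasting table definitions into a prompt.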
“Companies have invested millions upon millions of dollars inventing ‘chat with your database’ tools, and now you can build your own with a few lines of JSON.”
He added the Playwright MCP server next. Claude loaded a page throwing 500 errors, read the stack trace, found the bug, fixed it, and verified the fix in the browser. For debugging, the workflow felt almost magical.
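Wiring in browser control follows the same pattern as the Postgres config. A sketch, using Playwright's `@playwright/mcp` package (the exact entry Cory used isn't shown in the source, so treat this as one plausible configuration):

```json
{
  "mcpServers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```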
The Reality Check
Then came the experiments with Claude’s GitHub Actions integration. Tag an issue with @claude, and it picks up the work, opens a PR, responds to feedback. Wild stuff.
But after the initial excitement, Cory stopped using it. Why?
First problem: specifying work is hard. He tried grabbing tasks from his 100-card Trello backlog. Small tasks were underspecified—notes he’d written for himself that would take longer to explain than to just do. Large tasks often started with figuring out what work to do. A surprising number were phrased as questions: “Is there a better way to handle mobile menus?”
“Anyone who’s ever had to resource junior engineers will be familiar with this feeling.”
Second problem: taste is subjective. Claude one-shotted a perfectly functional feature, but Cory didn’t like the decisions it made. He ended up modifying the UI, renaming things, removing unnecessary changes. The end-to-end result didn’t actually save much time.
The caveat: he was working on Pegasus, a mature codebase where code quality matters to customers. The low-hanging fruit was already picked. For new projects or vibe-coding throwaway apps, the calculus changes completely.
Thinking Through the Threat
Cory sells code for a living. So when AI started writing better code, he had to think seriously about what happens to boilerplate businesses.
The bear case is straightforward: the moment anyone can type “build me a Django app with auth, billing, and multitenancy” and get something as useful as SaaS Pegasus, the product is in trouble.
But he makes a more nuanced argument for why boilerplates still matter:
AI is unreliable. The hallucination problem isn’t solved. Generated projects might look great but have problems lurking underneath.
AI hits walls. The error-solution-doesn’t-work-try-again loop eventually leads to giving up and finding answers in some GitHub issue.
AI won’t give you confidence. As a developer working in a new stack, you want to know you’re making sound technical choices. Humans still provide that better than models.
His conclusion: a codebase that’s been obsessively perfected for years (with AI help) will still beat a one-shot AI-generated codebase. The question is how close the asymptote gets, and when people stop paying for the differentiation.
“Boilerplate quality will become more important. The era of coding up boilerplates in a weekend and selling them to uninformed developers on Twitter is going to wind down.”
The Practical Workflow
Cory’s actual day-to-day approach combines several elements:
Claude.md file: Clear instructions that tell Claude how to work with the codebase, what commands to run, project conventions.
MCP servers for tools: Database access, browser control, whatever else the workflow needs.
Starting points over full automation: AI excels at getting projects rolling. Starting is often the hardest part, and having Claude code up a decent starting point kicks him into flow state faster than starting from scratch.
Human finishing: Having AI handle the drudgery of figuring out which model/view/template files to edit is a nice time-saver. But the final taste—naming things well, keeping code clean—still requires human judgment.
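What a Claude.md file contains varies by project. A minimal sketch of the shape—the commands and conventions here are hypothetical, not Pegasus's actual ones:

```markdown
# CLAUDE.md

## Commands
- Run tests: `python manage.py test`
- Lint before committing: `ruff check .`

## Conventions
- Django apps live under `apps/`; one feature per app.
- Reuse existing template partials; don't add new CSS files.
- Never hand-edit generated migration files.
```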
On Building AI Products
Beyond using AI tools, Cory built one: Scriv, a RAG-based chatbot for company data. His guide to retrieval augmented generation breaks down how these systems work in plain language.
The core insight: RAG is remarkably simple in concept. Take a user’s question, search for relevant content from a knowledge base, send both to an LLM, let it synthesize an answer. The depth is in the implementation details—how you chunk documents, how you retrieve them, what prompts you use.
His system prompt example is refreshingly direct:
“You are a Knowledge Bot. You will be given the extracted parts of a knowledge base (labeled with DOCUMENT) and a question. Answer the question using information from the knowledge base.”
We’re basically saying, he notes, “Hey AI, we’re gonna give you some stuff to read. Read it and then answer our question, k? Thx.”
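The whole loop fits in a few lines. A minimal sketch, using a toy keyword-overlap retriever where a real system would use embeddings and a vector store, and stopping at prompt construction where the LLM call would go:

```python
SYSTEM_PROMPT = (
    "You are a Knowledge Bot. You will be given the extracted parts of "
    "a knowledge base (labeled with DOCUMENT) and a question. Answer the "
    "question using information from the knowledge base."
)

def retrieve(question, chunks, k=2):
    """Rank chunks by naive word overlap with the question.

    Stand-in for real retrieval (embeddings + nearest-neighbor search);
    the overall shape of the pipeline is the same.
    """
    q_words = set(question.lower().split())
    return sorted(chunks,
                  key=lambda c: -len(q_words & set(c.lower().split())))[:k]

def build_prompt(question, chunks):
    """Assemble system prompt + retrieved DOCUMENTs + the user's question."""
    docs = "\n\n".join(f"DOCUMENT: {c}" for c in retrieve(question, chunks))
    return f"{SYSTEM_PROMPT}\n\n{docs}\n\nQUESTION: {question}"

chunks = [
    "Refunds are issued within 14 days of purchase.",
    "The office wifi password rotates monthly.",
]
prompt = build_prompt("How long do refunds take?", chunks)
# `prompt` now holds the instructions, the best-matching DOCUMENT first,
# and the question; sending it to any chat-completion API yields the answer.
```

Everything Cory calls "the depth"—chunking strategy, retrieval quality, prompt wording—lives inside these three steps; the skeleton itself doesn't change.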
The Solopreneur Perspective
Cory documents everything publicly—eight years of monthly retrospectives, revenue numbers, emotional ups and downs. This transparency gives his AI analysis unusual grounding.
He’s not an AI researcher theorizing about capabilities. He’s an indie hacker watching his livelihood potentially get automated while also using these tools daily. That tension produces honest assessments of both the promise and the gaps.
His practical conclusion: AI will completely change how software gets made. To avoid getting left behind, you need to be near the front of that wave. What exactly that means is unclear, but staying current on AI developments isn't just advisable—it may be existential to his livelihood.
“What a time to be alive.”
Links
- Personal site and writing
- SaaS Pegasus
- MCP deep-dive article
- AI and boilerplates analysis
- RAG overview guide
- GitHub
- Twitter/X
- YouTube