Cory Zue's Practical AI Coding Workflows

Cory Zue is a solopreneur based in Cape Town, South Africa, running SaaS Pegasus (a Django boilerplate) and Scriv (a custom AI chatbot platform). He’s been documenting his journey from CTO at Dimagi to independent maker since 2017, and increasingly writes about how AI is reshaping his work.

What makes Zue’s perspective valuable is his position: he’s both building AI tools and wondering if AI will make his main product obsolete.

MCP: “Five Lines of JSON Can Replace Multi-Million Dollar SaaS Companies”

In April 2025, Zue spent a day experimenting with Model Context Protocol (MCP) and came away impressed. His friend had texted him that “Claude Code, well configured, is a 10xer”—but the key was “well configured.”

Zue set up the Postgres MCP server with a few lines of JSON and started asking questions about his database:

“On a complete lark, I asked it ‘Can you generate a plot showing the average duration between signing up and creating your first project?’ And—I shit you not—it one-shotted a Python script that outputted this chart (including coloring and annotations).”

The cost: 63 cents.
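The post doesn’t reproduce his exact configuration, but for a sense of scale, an MCP entry for the reference @modelcontextprotocol/server-postgres package looks roughly like this (the connection string is a placeholder; swap in your own database URL):

    {
      "mcpServers": {
        "postgres": {
          "command": "npx",
          "args": ["-y", "@modelcontextprotocol/server-postgres", "postgresql://localhost:5432/your_database"]
        }
      }
    }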

He also experimented with Microsoft’s Playwright MCP server to give Claude browser control. He pointed Claude at a URL generating a 500 error, and it loaded the page, read the stack trace, found and fixed the bug, and confirmed the fix worked.
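The Playwright server slots in the same way. Assuming Microsoft’s published @playwright/mcp package, the entry would sit alongside the postgres one above:

    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }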

His takeaway:

“With the right set up, I can see a path to a world where agents find tickets in your issue tracker, fix them, verify the fixes with tests and in a browser, submit pull requests, and do code review.”

Claude GitHub Actions: Hype Meets Reality

When Anthropic released GitHub Actions support for Claude Code, Zue was excited. You could tag Claude on an issue and get a working PR in five minutes.

But after his initial experiment, he stopped using it. Why?

Specifying work is hard. Most of his backlog items were either underspecified notes (“little notes I had written down to remind myself of something”) or fuzzy goals (“is there a better way to handle mobile menus?”). Writing enough context for Claude to do a task well often took as long as doing it himself.

Taste is subjective. Claude one-shotted a functional solution, but Zue didn’t like its decisions—the UI, the naming, extraneous changes. He ended up rewriting much of it.

“A surprising number of my roadmap items are phrased in the form of a question… Others just a vague goal. Again—I can probably restructure these in a way that would allow the LLMs to make progress on them, but that itself is a big chunk of work!”

Still, he found one clear benefit: Claude gets projects rolling. Starting is the hardest part, and having a decent starting point helped him get into flow faster.

Scriv: Building a RAG Product

In 2023, Zue built Scriv—an AI chatbot trained on your own data. The idea came from his Pegasus Slack community, where questions often had answers buried in documentation or past conversations.

The technical foundation is retrieval augmented generation (RAG). He wrote a thorough explainer breaking down how it works:

  1. User asks a question
  2. System retrieves relevant chunks from a knowledge base
  3. LLM reads those chunks and answers using that context
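That loop is simple enough to sketch end to end. The snippet below is a toy illustration of the pattern, not Scriv’s actual implementation: the knowledge base is a list of placeholder strings, retrieval is naive keyword overlap instead of embedding search, and the final LLM call is left as a printed prompt.

    import re

    # Placeholder knowledge base: in Scriv's case this would be documentation
    # pages and Slack history, split into chunks.
    KNOWLEDGE_BASE = [
        "Docs: how to configure a new project.",
        "Docs: deploying your app to production.",
        "Slack: a past answer about fixing a failing migration.",
    ]

    def tokens(text):
        # Lowercased word set; real systems compare embeddings, not keywords.
        return set(re.findall(r"\w+", text.lower()))

    def retrieve(question, chunks, top_k=2):
        # Step 2: rank chunks by (naive) relevance to the question.
        q = tokens(question)
        ranked = sorted(chunks, key=lambda c: len(q & tokens(c)), reverse=True)
        return ranked[:top_k]

    def build_prompt(question, context_chunks):
        # Step 3: hand the LLM only the retrieved context plus the question.
        context = "\n".join(f"- {c}" for c in context_chunks)
        return (
            "Answer using only the context below. "
            "If the answer isn't there, say you don't know.\n\n"
            f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
        )

    if __name__ == "__main__":
        question = "How do I deploy my app to production?"  # Step 1: the user's question
        prompt = build_prompt(question, retrieve(question, KNOWLEDGE_BASE))
        print(prompt)  # In a real bot this prompt is sent to an LLM API.

The property that matters is the last step: the model answers from retrieved context rather than from memory, which is what lets a bot answer questions about your own data.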

He trained Scriv on both his documentation and Slack history. The community started mentioning the bot on every support question.

His honest assessment:

“When it works it really is like magic… But chatbots are finicky beasts, and chatbots that work on custom data are doubly so. If the answer isn’t in the data, the bot will fail, or—even worse—hallucinate a totally wrong answer out of thin air.”

Will AI Kill Boilerplates?

Zue’s most existential writing is about whether AI will make his main product obsolete.

The bear case: AI can already generate impressive apps, and it’s only getting better. Why start from a generic boilerplate when you can have AI build exactly what you want?

The bull case: AI is unreliable (hallucinations), hits walls (frustrating iteration loops that go nowhere), and won’t give you confidence the way a human-curated, battle-tested codebase does.

His practical belief:

“A codebase that’s been obsessively perfected for years (with the help of AI) will still be better than a one-shot AI-generated codebase.”

He expects AI will expand the market—more people coding means more potential customers who need solid foundations. But he also acknowledges:

“I don’t claim to know where things are going, but I know that doing my best to stay up to date on the latest AI developments is not only advisable, but possibly existential to my livelihood.”

Core Principles

From Zue’s writing and experiments, a few patterns emerge:

Use AI to start, not finish. Claude excels at getting projects rolling and handling the drudgery of figuring out which files to edit. Humans still add value in taste, edge cases, and understanding context.

Invest in configuration. The difference between “Claude is useful” and “Claude is a 10xer” is setup: good CLAUDE.md files, MCP servers for your tools, well-defined commands.

Be honest about limitations. Scriv works, but it’s “more like an intern than an oracle.” Claude GitHub Actions are impressive, but specifying work well takes real effort.

Stay close to the wave. AI will change how software gets made, and the Barnes & Nobles and Blockbusters of software will be the companies that don’t adapt.
