building in public when AI does the building


by Ray Svitla


“learn in public” was the default advice for developers in the 2010s.

write about what you’re learning. share your mistakes. document your progress. build an audience by teaching what you’re figuring out.

the logic was solid: teaching forces clarity, public accountability drives consistency, and helping others builds reputation.

then AI changed the equation.

now the AI is doing a lot of the learning. and the building. so what does “building in public” mean when you’re orchestrating instead of typing?

what changed

before: I’m learning rust → I write a blog post about rust’s ownership model → people learn from my journey

now: I told Claude to build a rust CLI tool → it worked → uh… what do I share? that I wrote a good prompt?

the traditional build-in-public model assumed you were climbing the learning curve. the documentation of that climb was the content.

but when AI compresses weeks of learning into minutes of orchestration, there’s no climb to document. just “I asked for X, got Y, shipped it”.

that feels less shareable. less authentic. less valuable to others.

except it’s not. it’s just different.

the new build-in-public model

instead of documenting learning, you document:

decision-making
why did you choose this approach over that one? what constraints shaped the architecture? what tradeoffs did you make?

the AI can generate code. it can’t make strategic decisions. those are still yours. and they’re still worth sharing.

taste curation
when the AI gives you five options, how do you pick? what’s your aesthetic? your quality bar? your definition of “good enough”?

that’s taste. and taste is scarce. people are desperate for taste signals in an AI-generated world.

failure loops
the AI didn’t get it right on the first try. (it never does.) what went wrong? how did you course-correct? what did you learn about prompting, context engineering, or the problem space?

failure loops are the new learning curves.

integration challenges
the AI wrote perfect code for component A and component B. but they don’t work together. why? how did you fix it? what does that reveal about system design?

integration is where human judgment still dominates.

context setup
how did you structure the context so the AI could do good work? what’s in your workspace? what’s in your memory files? what patterns have you established?

this is the new “dev environment setup” blog post. except instead of vim plugins, it’s context architecture.
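to make that concrete, here’s a minimal sketch of scaffolding a context workspace. the file names and contents are my own hypothetical examples, not a standard layout — the point is that the workspace itself is now an artifact worth sharing.

```python
# sketch: scaffold a minimal context workspace for an AI agent.
# every file name and content string here is a hypothetical example.
from pathlib import Path

CONTEXT_FILES = {
    "STYLE.md": "# style guide\n- lowercase headings\n- short paragraphs\n",
    "PROJECT.md": "# project notes\nwhat we're building and why.\n",
    "memory/README.md": "# daily logs\none file per day: YYYY-MM-DD.md\n",
}

def scaffold(root: str) -> list[str]:
    """create the context files under root; return the paths created."""
    created = []
    for rel, text in CONTEXT_FILES.items():
        path = Path(root) / rel
        path.parent.mkdir(parents=True, exist_ok=True)  # make memory/ etc.
        path.write_text(text)
        created.append(str(path))
    return created

print(scaffold("workspace"))
```

publishing a scaffold like this is the modern equivalent of sharing your dotfiles.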

what people actually want to see

I run several AI agents. some 24/7 on a home server. I use them to write, code, research, deploy.

when I share that, people don’t ask “what prompts do you use”. they ask how the agents stay on task, how context carries across sessions, what happens when one fails.

those are orchestration questions. not implementation questions.

and they’re more interesting than “here’s how I implemented feature X in javascript”.

the CHOP documentation pattern

Steve Yegge’s Chat-Oriented Programming is basically pair programming where one half of the pair is AI.

building in public with CHOP means documenting the conversation, not just the code.

not the literal transcript. (nobody wants to read 50 messages of back-and-forth.) but the shape of the conversation: the initial brief, the course-corrections, the pivots.

Ryan Florence does this well. he shows the React component the AI generated, but also the conversation that got there. the “no wait, make it do X instead” moments. the “actually let’s refactor that” pivots.

that’s the content. the human steering layer.

the honesty problem

there’s a weird social pressure to pretend you did more than you did.

someone builds an app with AI in a weekend. but their write-up makes it sound like they hand-coded everything. why? because “I prompted Claude to build this” feels like cheating.

it’s not cheating. it’s using the best tools available.

but there’s a status anxiety: if you admit the AI did most of the work, does that diminish your achievement?

short answer: no.

longer answer: the achievement is in the orchestration. in the vision. in the taste. in the integration. in the decision to build it at all.

the code is increasingly a commodity. the judgment behind the code is not.

what to share (practical guide)

if you’re building with AI and want to share the process:

1. show the brief
what did you tell the AI to build? how detailed was the spec? what did you leave open?

this sets the baseline. people can compare their own briefs and see where yours was more/less specific.

2. show the first failure
the AI misunderstood something or made an assumption. what was it? how did you catch it?

this teaches people what to watch for.

3. show a refinement loop
“it worked but wasn’t quite right” → how did you articulate what was wrong? what did you change?

this teaches taste calibration.

4. show the context files
what’s in your workspace that made this possible? style guides, example code, project docs?

this teaches context engineering.

5. show what you did manually
AI did 80%. you did 20%. what was the 20%? why couldn’t the AI do it?

this teaches the boundaries of the tool.

6. show the cost and time
how much did this cost in API calls? how long did it take wall-clock time? how much of that was you vs the AI?

this teaches ROI. (people are paying $100+/month — they want to know if it’s worth it.)
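a hedged sketch of what sharing ROI could look like in practice. the token counts, prices, and hours below are made-up example numbers, not real rates — the point is showing the arithmetic, not the figures.

```python
# sketch: rough ROI math for an AI-assisted build.
# all prices and quantities are hypothetical example numbers.
def session_cost(input_tokens: int, output_tokens: int,
                 price_in_per_m: float, price_out_per_m: float) -> float:
    """API cost in dollars, given per-million-token prices."""
    return (input_tokens / 1e6) * price_in_per_m \
         + (output_tokens / 1e6) * price_out_per_m

# hypothetical weekend project: 2M tokens in, 0.5M tokens out,
# at example prices of $3 in / $15 out per million tokens.
api_cost = session_cost(2_000_000, 500_000, 3.0, 15.0)
human_hours, wall_clock_hours = 4, 10  # you steering vs. total elapsed

print(f"api cost: ${api_cost:.2f}")  # $13.50 with these example numbers
print(f"human time: {human_hours}h of {wall_clock_hours}h wall clock")
```

even a back-of-the-envelope table like this is more useful to readers than “it was cheap, trust me”.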

the “receipts” culture

in the pre-AI era, showing your code on github was proof you built something.

now? people assume AI wrote the code. so github stars don’t mean as much.

the new “proof of work” is the context around the code: the briefs, the failure loops, the refinement threads, the receipts on cost and time.

basically: anything that shows human judgment, not just code output.

why this feels weird

for a generation of developers, identity was tied to “I can build X”.

now it’s “I can orchestrate AI to build X”.

that’s a smaller flex. or at least it feels smaller.

but here’s the thing: everyone can orchestrate AI to build X now. the differentiation is in what you choose to build, how you refine it, and how you integrate it into a larger system.

that’s product thinking, not code thinking.

and documenting product thinking in public is just as valuable as documenting code thinking used to be.

maybe more valuable, because fewer people can do it well.

the agent failure stories angle

one of the best forms of building in public: sharing when AI agents fail.

not in an “AI is bad” way. in a “here’s what I tried, here’s what broke, here’s what I learned” way.

because right now, the AI hype cycle is at “look what I built in 10 minutes”. that’s useful for showing what’s possible. but it’s not useful for learning what’s practical.

the practical lessons come from the failures: the agent that looped on the same bug, the context that quietly went stale, the integration that broke on deploy.

those stories are gold. they teach people how to avoid the same traps.

what the “learn in public” pioneers would say

swyx (who coined the term) was always about learning exhaust. not polished tutorials. just public documentation of the messy process.

AI doesn’t change that. if anything, it makes it more important.

because the messy process now is prompting, correcting, re-prompting, debugging context, and stitching the pieces together.

all of that is learning. all of that is shareable.

the format changes. the principle doesn’t.

the economic angle

building in public was always also marketing.

you document your journey → people follow along → you build an audience → opportunities appear.

that still works with AI. maybe better, because you can build more in public.

traditional model: build one project over 3 months, document it, ship it.

AI model: build 10 projects over 3 months, document all of them (even the failures), ship the 3 that work.

more at-bats. more content. more surface area for opportunities.

Josh Pigford is doing this with Maybe Finance. building features with AI, sharing the process, growing the audience, validating the market.

that’s building in public in 2026.

the meta-problem

writing this article, I’m using AI.

Claude is drafting sections. I’m editing, restructuring, adding voice. it’s collaborative.

do I mention that in the article? (yes, I just did.) does that make the article less valuable? (you tell me.)

this is the meta-problem of building in public with AI: every piece of “building in public” content is itself potentially AI-assisted. which means every article has to answer “how much of this was you vs the AI?”

my answer: I don’t care about the percentage. I care about the judgment.

if the ideas are good, the structure makes sense, the voice is consistent, and it’s useful to readers — does it matter what tool generated the first draft?

where this goes

in 5 years, “I built this with AI” will be as unremarkable as “I built this with an IDE”.

of course you used AI. why wouldn’t you?

the interesting question will be “what did you build and why”.

building in public will focus on the decisions, the taste, the integration, the why.

the code will be assumed. the craft will be in everything else.

and documenting that craft — the human layer on top of AI capability — that’s the new learning in public.


I keep daily logs in memory/YYYY-MM-DD.md. nothing fancy. just what I built, what worked, what didn’t, what I learned.

some days it’s “Claude generated a perfect script, shipped in 10 minutes”. some days it’s “spent 3 hours debugging why the agent kept making the same mistake”.

both are worth documenting. both teach something.
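the daily-log habit above is easy to automate. here’s a minimal sketch that appends a dated entry to memory/YYYY-MM-DD.md — the memory/ layout comes from the post, but the section headings are my own assumed template.

```python
# sketch: append a dated entry to memory/YYYY-MM-DD.md.
# the section headings are an assumed template, not a prescribed format.
from datetime import date
from pathlib import Path

def log_today(built: str, worked: str, failed: str, learned: str,
              root: str = "memory") -> Path:
    """create (or append to) today's log file and return its path."""
    path = Path(root) / f"{date.today():%Y-%m-%d}.md"
    path.parent.mkdir(parents=True, exist_ok=True)
    entry = (f"## built\n{built}\n\n## worked\n{worked}\n\n"
             f"## didn't\n{failed}\n\n## learned\n{learned}\n\n")
    with path.open("a") as f:  # append, so multiple entries per day stack up
        f.write(entry)
    return path

log_today("rust CLI tool", "Claude's first draft compiled",
          "agent looped on the same lint error", "pin the toolchain in context")
```

appending rather than overwriting means a day with several sessions keeps all of them, good and bad.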

the act of writing it down makes me think more clearly. the fact that it’s public (or could be) makes me more honest.

that’s still the value of building in public. even when the AI does the building.

are you building with AI in public or in private? what stops you from sharing the process? what would you want to see from others?


Ray Svitla
stay evolving 🐌