recursion is shipping. vibe coding is collapsing.
by Ray Svitla
the recursion timeline just shortened
buried in a Time magazine cover story this week: Anthropic’s models now write 70-90% of the code used to develop future versions of themselves. not boilerplate. not tests. the actual training loops.
Jared Kaplan, Anthropic’s chief science officer, estimates fully automated AI research could arrive within a year.
this isn’t a prediction anymore. it’s a shipping timeline.
most people missed it because it wasn’t announced. Time discovered it. recursive self-improvement doesn’t arrive with a press release. it ships in git commits.
the production survivor’s guide
meanwhile, an ex-backend lead at Manus (the AI agent startup Meta acquired) spent this week dropping production wisdom on r/LocalLLaMA: function calling is a trap.
2 years. millions of users. every “official” pattern broke under load.
what worked: structured output parsing + retry loops.
the lesson isn’t “function calling is bad.” the lesson is: vendor docs optimize for demos, not deployment. if you’re building personal AI infrastructure, the gap between “works in the tutorial” and “works in production” is a minefield.
his open-source projects (Pinix, agent-clip) are the scar tissue turned into tooling. not theory. field-tested patterns that survived contact with reality.
when vibe-coded infrastructure implodes
on the other end of the spectrum: BookLore.
self-hosted book management app. mostly AI-generated codebase. developer merging 20K-line PRs daily. nobody understands how it works. v2.0 shipped this week with crashes, data loss, and UI bugs that require a hard refresh to clear.
r/selfhosted is calling it out: “these are the kinds of bugs you get when nobody actually understands the codebase they’re shipping.”
BookLore is the cautionary tale everyone saw coming but nobody wanted to write. vibe coding works until it doesn’t. speed without comprehension is just accelerated failure.
the production rule nobody teaches: if you can’t debug it, you don’t own it.
the infrastructure split
here’s the pattern emerging:
camp 1: recursion builders
Anthropic. models training models. automation stacks that improve themselves. infrastructure that doesn’t need you to get smarter.
camp 2: vibe shippers
BookLore and every repo like it. AI-generated code merged faster than anyone can review. velocity as a metric. comprehension as optional.
camp 3: production survivors
the Manus engineer. Simon Willison. people building personal AI infrastructure who’ve seen enough failures to know what scales. structured outputs over magic. retry loops over prayer.
the gap between camp 1 and camp 2 is widening. camp 1 is building systems that understand themselves. camp 2 is building systems nobody understands.
camp 3 is building systems they can fix.
what this means for personal AI
if you’re building a personal AI OS — whether it’s OpenClaw, CoWork, Gaia, or something you hacked together in bash — you’re navigating this split.
the recursion timeline says: AI will improve itself faster than humans can track. the vibe coding collapse says: speed without understanding creates undebuggable systems. the production survivor’s wisdom says: ignore the docs, watch what scales.
three competing pressures:
- automate everything → camp 1
- ship fast → camp 2
- understand deeply → camp 3
you can’t optimize for all three. pick two.
the second-order question
if claude writes 90% of Anthropic’s training code, who writes the other 10%?
that 10% is steering. that 10% is the human-in-the-loop deciding what “better” means.
if your personal AI is recursively improving (via claude-reflect, skill evolution, memory distillation), who defines “improvement”?
when your agent learns from you, it’s learning your preferences. but preferences drift. goals shift. you in 2026 might want different things than you in 2027.
recursive systems need anchors. if the anchor is “make Ray more productive,” the agent optimizes for that. if the anchor drifts, the recursion drifts.
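in code, an anchor is just an acceptance gate whose metric is defined once and never rewritten by the loop it governs. a toy sketch — `propose_change` and `score` are hypothetical stand-ins, not anything from an actual system:

```python
def anchored_improve(system, propose_change, score, rounds=10):
    """accept a candidate change only if a FIXED metric improves.

    the anchor is `score`: it is chosen up front and the loop can never
    modify it. `propose_change` is the (hypothetical) self-modification step.
    """
    best = score(system)
    for _ in range(rounds):
        candidate = propose_change(system)
        if score(candidate) > best:  # the anchor decides what "better" means
            system, best = candidate, score(candidate)
        # rejected candidates are dropped; the metric itself never drifts
    return system
```

the drift problem is exactly that in a real recursive system, `score` is not fixed — you keep rewriting it as your preferences change, and the loop optimizes toward whatever it says this year.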
the camp 1 question nobody’s answering: what happens when the 10% stops being enough?
the camp 3 bet
the production survivors aren’t betting on full automation. they’re betting on debuggable automation.
structured outputs instead of magic function calls. retry loops instead of “it should work.” SQLite memory instead of vector embeddings. file trees instead of databases.
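"SQLite memory" can be this boring — a sketch with invented table and column names, assuming nothing about any real project's schema:

```python
import sqlite3
import time

# plain rows you can open with the sqlite3 CLI and read with your eyes.
# ":memory:" keeps the sketch self-contained; use a file path in real use.
db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE IF NOT EXISTS memory (ts REAL, topic TEXT, note TEXT)")


def remember(topic: str, note: str) -> None:
    db.execute("INSERT INTO memory VALUES (?, ?, ?)", (time.time(), topic, note))
    db.commit()


def recall(topic: str, limit: int = 5) -> list[str]:
    # plain substring match: greppable, inspectable, no embeddings
    rows = db.execute(
        "SELECT note FROM memory WHERE topic LIKE ? ORDER BY ts DESC LIMIT ?",
        (f"%{topic}%", limit),
    )
    return [r[0] for r in rows]
```

when recall returns the wrong thing, you run one SELECT and see why. that's the whole camp 3 trade: worse retrieval, total inspectability.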
the bet: when things break (and they will), you want infrastructure you can inspect, fork, and fix.
camp 2 collapses when things break because nobody can debug what they don’t understand.
camp 1 might not break. or it might break in ways we can’t comprehend.
camp 3 breaks predictably. and that’s the feature, not the bug.
where we are
recursion is here. Anthropic’s models write most of their own training code. full automation is, by Kaplan’s estimate, about a year out.
vibe coding is collapsing. BookLore is just the first visible implosion. more will follow.
production wisdom says: speed is a trap if you can’t debug the output.
if you’re building personal AI infrastructure, the question isn’t “which camp wins?” the question is: which failure mode can you survive?
camp 1: systems smarter than you.
camp 2: systems nobody understands.
camp 3: systems you can fix.
pick your poison. the recursion is shipping either way.
Ray Svitla
stay evolving 🐌