recursion ships. vibe code collapses. the infrastructure splits.

░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
░                                               ░
░   ┌───────────────────────────────────────┐   ░
░   │                                       │   ░
░   │        claude ───┐                    │   ░
░   │                  │                    │   ░
░   │        trains ───┼──→ [ ITSELF ]      │   ░
░   │                  │                    │   ░
░   │        claude ───┘                    │   ░
░   │                                       │   ░
░   │   70-90% of training code.            │   ░
░   │   recursion timeline: ~1 year.        │   ░
░   │                                       │   ░
░   │   meanwhile:                          │   ░
░   │   vibe-coded repos collapsing.        │   ░
░   │   production wisdom emerging.         │   ░
░   │                                       │   ░
░   └───────────────────────────────────────┘   ░
░                                               ░
░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░

today

claude writes most of its own training code now. function calling might be the wrong abstraction. someone fine-tuned 425K agent trajectories into a 9B model. AI-generated codebases are collapsing faster than they ship. Alibaba wants your browser to speak JavaScript. fish-speech just became the cleanest SOTA voice you can self-host. the recursion is here. the infrastructure is splitting into camps. the vibe-coded repos are imploding.


■ signal 1 — 70-90% of anthropic’s training code is written by claude

strength: ■■■■■ → Time magazine, Reddit discussion

Time magazine cover story drops the recursion bomb: Anthropic’s models now write 70-90% of the code used to develop future models. model releases every few weeks, not months. Jared Kaplan (Chief Science Officer) estimates fully automated AI research could arrive within a year.

quote from the article: “Some 70% to 90% of the code used in developing future models is now written by Claude.”

this isn’t “AI helps with boilerplate.” this is AI writing the training loops that make better versions of itself.

→ self.md take: the inflection point everyone predicted just happened quietly. AI research shifted from human-led with AI assistance to AI-led with human oversight. that’s not a spectrum — that’s a phase transition. the recursion timeline is ~1 year. the 10% of code that’s still human? that’s steering. when that 10% isn’t enough anymore, we’ll know we crossed the threshold.


■ signal 2 — function calling is a trap (according to someone who shipped agents at scale)

strength: ■■■■■ → Reddit r/LocalLLaMA

ex-backend lead at Manus (acquired by Meta) shares 2 years of production agent failures. the thesis: function calling is the wrong abstraction. what works instead: structured output parsing + retry loops. the post details every production pattern that survived contact with users — and why the “official” way (tool calls) broke under load.

built two open-source projects from these lessons: Pinix (agent runtime) and agent-clip (practical agent).

the failure mode: models can’t reliably decide when to call tools vs when to respond. structured outputs + validation loops scale better.
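the pattern, sketched minimally (the function and schema here are stand-ins, not Pinix's actual API): ask for JSON, validate it yourself, and feed the validation error back into the retry.

```python
import json

def structured_call(model, prompt: str, required: set, max_retries: int = 3):
    """Ask for JSON, validate it, and retry with the error fed back --
    instead of trusting the model to emit a well-formed tool call."""
    last_error = ""
    for _ in range(max_retries):
        raw = model(prompt + last_error)
        try:
            data = json.loads(raw)
            missing = required - data.keys()
            if missing:
                raise ValueError(f"missing keys: {missing}")
            return data  # validated structured output
        except (json.JSONDecodeError, ValueError) as e:
            last_error = (
                f"\nYour last reply failed validation ({e}). "
                "Return only valid JSON."
            )
    raise RuntimeError("model never produced valid structured output")
```

the retry prompt carries the validation error back to the model — the feedback loop that opaque function-calling machinery hides from you.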

→ self.md take: vendor docs optimize for demos, not deployment. when you’re building personal AI infrastructure, the gap between “works in the tutorial” and “works in production” is a minefield. this is field-tested wisdom from millions of user sessions. if Anthropic, OpenAI, and Google all push function calling, but production survivors say structured outputs work better — listen to the survivors.


■ signal 3 — page-agent: your browser now speaks JavaScript to AI

strength: ■■■■□ → GitHub

Alibaba shipped page-agent: in-page GUI agent that controls web interfaces with natural language. the twist: it doesn’t simulate clicks. it executes JavaScript directly in the page context. tell it what you want, it writes the code, runs it, gets the result. 1,205 stars on GitHub trending.

the abstraction: web automation isn’t about DOM selectors anymore. it’s about executable intent.
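a minimal sketch of that pattern (the prompt and function names are illustrative, not page-agent's real interface): the model emits JavaScript, and the agent runs it in the page context instead of clicking through the UI.

```python
def execute_intent(page, llm, goal: str):
    """Turn a natural-language goal into JS and run it in the page.
    `page` is anything with an evaluate(js) method (e.g. a Playwright
    page); the prompt contract here is hypothetical."""
    js = llm(
        f"Write a single JavaScript expression that accomplishes: {goal}. "
        "Return only the expression, no explanation."
    )
    return page.evaluate(js)  # runs in the page context, no simulated clicks
```

no selectors, no synthetic click events: the expression's return value is the result, which is why "executable intent" scales past brittle DOM automation.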

→ self.md take: most browser agents (Playwright, Puppeteer, Selenium) work by simulating user actions. page-agent says: why pretend to be human when you can just run JavaScript? when your agent has eval() access to the page, every interaction is programmable. this is the “skip the UI, talk to the substrate” pattern applied to browsers. the paradigm shift: from “automate the interface” to “execute against the runtime.”


■ signal 4 — OmniCoder-9B: 425K agent trajectories, distilled

strength: ■■■■□ → Reddit r/LocalLLaMA

first serious community-built agentic coding model. 9B parameters, fine-tuned on 425,000 curated agent coding trajectories. built by Tesslate, based on Qwen3.5-9B. the training data: real Claude sessions doing software engineering, tool use, terminal operations, multi-step reasoning. not synthetic. captured behavior.

early reports: “slaps in Opencode,” “replaced my cloud setup,” “actually works on 8GB VRAM.”

→ self.md take: every agentic coding tool is a cloud API or a 70B+ model. OmniCoder-9B is the first serious attempt at distilling agentic behavior into a model you can run locally. 425K trajectories isn’t a toy dataset — it’s production capture. if local-first coding agents are real, this is the proof of concept. the milestone: agentic coding crossed the 8GB VRAM threshold.
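back-of-envelope on the 8GB claim (assumes 4-bit quantized weights; pure arithmetic, not a benchmark):

```python
# rough VRAM math for a 9B-parameter model at 4-bit quantization
params = 9_000_000_000
bytes_per_param = 0.5  # 4 bits = half a byte
weights_gib = params * bytes_per_param / 2**30
print(f"{weights_gib:.1f} GiB")  # ~4.2 GiB of weights,
# leaving roughly half of an 8 GB card for KV cache and activations
```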


■ signal 5 — booklore: when vibe-coded infrastructure collapses

strength: ■■■□□ → Reddit r/selfhosted

PSA from r/selfhosted: BookLore (self-hosted book management app) is mostly AI-generated, and it’s imploding. v2.0 shipped with crashes, data loss, UI requiring hard refresh to show changes. dev merging 20K-line PRs daily, each one bolting on new features without understanding the codebase.

top comment: “these are the kinds of bugs you get when nobody actually understands the codebase they’re shipping.”

→ self.md take: vibe coding works until it doesn’t. BookLore is the cautionary tale: fast iteration, no comprehension, eventual collapse. when your infrastructure is AI-generated and nobody can debug it, you’re not building on sand — you’re building on vibes. speed without understanding is just accelerated failure. production rule: if you can’t debug it, you don’t own it.


■ signal 6 — fish-speech: cleanest SOTA TTS you can self-host

strength: ■■■■□ → GitHub

fish-speech hits GitHub trending with 637 stars. tagline: “SOTA Open Source TTS.” state-of-the-art voice synthesis, fully open, runs locally. production-ready, not a research demo.

the pattern: every proprietary TTS is cloud-locked (ElevenLabs, Play.ht, OpenAI). fish-speech is the first SOTA alternative you can run on your hardware.

→ self.md take: voice is the last interface to go local. fish-speech proves you can match cloud TTS quality without API calls. if your personal AI OS includes voice, this is the plumbing. sovereignty isn’t just about text — it’s about every modality. the milestone: local TTS crossed the “sounds as good as the paid version” threshold.


the infrastructure split

three camps are forming:

camp 1: recursion builders
Anthropic. models training models. automation stacks that improve themselves. infrastructure that doesn’t need you to get smarter.

camp 2: vibe shippers
BookLore and every repo like it. AI-generated code merged faster than anyone can review. velocity as a metric. comprehension as optional.

camp 3: production survivors
the Manus engineer. people building personal AI infrastructure who’ve seen enough failures to know what scales. structured outputs over magic. retry loops over prayer.

the gap between camp 1 and camp 2 is widening. camp 1 is building systems that understand themselves. camp 2 is building systems nobody understands. camp 3 is building systems they can fix.

if you’re building a personal AI OS — whether it’s OpenClaw, CoWork, Gaia, or something you hacked together in bash — you’re navigating this split.

three competing pressures, one per camp:

speed: ship as fast as the models can generate.
comprehension: understand everything you run.
capability: let the system improve itself.

you can’t optimize for all three. pick two.


where we are

recursion is here. Anthropic’s models write their own training code. the timeline to full automation is ~1 year.

vibe coding is collapsing. BookLore is just the first visible implosion. more will follow.

production wisdom says: speed is a trap if you can’t debug the output.

if you’re building personal AI infrastructure, the question isn’t “which camp wins?” the question is: which failure mode can you survive?

camp 1: systems smarter than you.
camp 2: systems nobody understands.
camp 3: systems you can fix.

pick your poison. the recursion is shipping either way.


→ deep dive: recursion is shipping. vibe coding is collapsing.