the recursion is shipping
░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
░ ░
░ ┌───────────────────────────────────────┐ ░
░ │ │ ░
░ │ claude ──────┐ │ ░
░ │ │ │ ░
░ │ trains ──────┼──→ [ CLAUDE ] │ ░
░ │ │ │ ░
░ │ claude ──────┘ │ ░
░ │ │ ░
░ │ the recursion doesn't announce │ ░
░ │ itself. it ships in git commits. │ ░
░ │ │ ░
░ └───────────────────────────────────────┘ ░
░ ░
░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
today
claude writes most of its own training code now. function calling might be the wrong abstraction. someone fine-tuned 425K agent trajectories into a 9B model. AI-generated codebases are collapsing faster than they ship. Alibaba wants your browser to speak JavaScript. fish-speech just became the cleanest SOTA voice you can self-host. the recursion is here. the infrastructure is splitting into camps. the vibe-coded repos are imploding.
■ signal 1 — 70-90% of anthropic’s training code is written by claude
strength: ■■■■■
Time magazine cover story drops the recursion bomb: Anthropic’s models now write 70-90% of the code used to develop future models. model releases every few weeks, not months. Jared Kaplan (Chief Science Officer) estimates fully automated AI research could arrive within a year.
quote from the article: “Some 70% to 90% of the code used in developing future models is now written by Claude.”
this isn’t “AI helps with boilerplate.” this is AI writing the training loops that make better versions of itself. when your model can improve its own training infrastructure, the feedback loop changes shape. Anthropic didn’t announce this. Time discovered it. the singularity doesn’t arrive with a press release. it ships in git commits.
the inflection: AI research was human-led with AI assistance. now it’s AI-led with human oversight. that’s not a spectrum. that’s a phase transition.
→ Time magazine ● Reddit r/singularity (1,244 upvotes)
■ signal 2 — function calling is a trap
strength: ■■■■■
ex-backend lead at Manus (acquired by Meta) shares 2 years of production agent failures. the thesis: function calling is the wrong abstraction. what works instead: structured output parsing + retry loops. the post details every production pattern that survived contact with users — and why the “official” way (tool calls) broke under load.
function calling is how every vendor tells you to build agents. this engineer says it’s a trap. the failure mode: models can’t reliably decide when to call tools vs when to respond. structured outputs + validation loops scale better. if you’re building personal AI infrastructure, this is field-tested wisdom from someone who ran agents in production for millions of users.
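the post doesn't ship code, but the pattern it describes can be sketched roughly like this: demand JSON instead of a tool call, validate it against a schema, and feed validation errors back into the prompt on retry. `call_model` here is a canned stand-in for whatever LLM client you use — swap in a real API call.

```python
import json

def call_model(prompt: str) -> str:
    """stand-in for your LLM client. returns a canned reply so the
    sketch runs; in practice this is an API or local-model call."""
    return '{"action": "search", "query": "self-hosted tts"}'

REQUIRED_KEYS = {"action", "query"}

def validate(raw: str) -> dict:
    """parse the model's output and check it against a minimal schema."""
    data = json.loads(raw)  # JSONDecodeError is a ValueError subclass
    missing = REQUIRED_KEYS - data.keys()
    if missing:
        raise ValueError(f"missing keys: {missing}")
    return data

def run_step(prompt: str, max_retries: int = 3) -> dict:
    """structured output + retry loop: instead of trusting the model
    to decide when to call a tool, demand JSON, validate, retry."""
    for _ in range(max_retries):
        raw = call_model(prompt)
        try:
            return validate(raw)
        except ValueError as err:
            # feed the validation error back so the next attempt can self-correct
            prompt += f"\n\nyour last output was invalid ({err}). return valid JSON."
    raise RuntimeError("model never produced valid output")

print(run_step("find me a local tts engine. reply as JSON with 'action' and 'query'."))
```

the design choice: validation failures become prompt context, so the loop converges instead of silently passing garbage downstream.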
the lesson: ignore the docs. watch what survives deployment.
→ Reddit r/LocalLLaMA (1,523 upvotes)
■ signal 3 — page-agent: your browser speaks JavaScript to AI
strength: ■■■■□
Alibaba shipped page-agent: in-page GUI agent that controls web interfaces with natural language. the twist: it doesn’t simulate clicks. it executes JavaScript directly in the page context. tell it what you want, it writes the code, runs it, gets the result.
most browser agents (Playwright, Puppeteer, Selenium) work by simulating user actions. page-agent says: why pretend to be human when you can just run JavaScript? when your agent has eval() access to the page, every interaction is programmable.
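page-agent's real API isn't shown in the repo blurb above; this is only an illustrative contrast, with a toy `FakePage` whose `evaluate()` runs generated code against page state the way an in-page agent evals JavaScript in the live DOM.

```python
# illustrative only: a toy "page" standing in for a browser runtime.
# page-agent's actual interface will differ.

class FakePage:
    def __init__(self):
        # toy DOM state: form fields on an imaginary checkout page
        self.state = {"quantity": 1, "coupon": ""}

    def evaluate(self, code: str) -> dict:
        # execute generated code with the page state in scope —
        # "execute against the runtime" instead of simulating input
        scope = {"page": self.state}
        exec(code, scope)
        return self.state

page = FakePage()

# click-simulation style: locate the input, click, type keystrokes.
# runtime-execution style: set the values directly.
generated = 'page["quantity"] = 3; page["coupon"] = "SAVE10"'
result = page.evaluate(generated)
print(result)  # {'quantity': 3, 'coupon': 'SAVE10'}
```

the point of the contrast: once the agent writes code against the runtime, there's no brittle selector-clicking layer to break.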
the paradigm: from “automate the interface” to “execute against the runtime.”
→ GitHub (1,205 stars)
■ signal 4 — OmniCoder-9B: 425K agent trajectories, distilled
strength: ■■■■□
first serious community-built agentic coding model. 9B parameters, fine-tuned on 425,000 curated agent coding trajectories. built by Tesslate, based on Qwen3.5-9B. the training data: real Claude sessions doing software engineering, tool use, terminal operations, multi-step reasoning. not synthetic. captured behavior.
nearly every agentic coding tool today is a cloud API or a 70B+ model. OmniCoder-9B is the first serious attempt at distilling agentic behavior into a model you can run locally. 425K trajectories isn’t a toy dataset — it’s production capture.
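the trajectory format isn't published in the post; assuming a common chat-style capture, distillation prep amounts to flattening each session into a messages list, one JSONL line per trajectory. everything below — the record shape, the field names — is a hypothetical sketch, not Tesslate's actual pipeline.

```python
import json

# hypothetical trajectory record: a captured agent session with tool
# output interleaved among assistant turns. field names are assumptions.
trajectory = {
    "task": "fix failing test in utils.py",
    "steps": [
        {"role": "assistant", "content": "running the test suite first"},
        {"role": "tool", "name": "terminal", "content": "pytest ... 1 failed"},
        {"role": "assistant", "content": "the bug is an off-by-one in the slice"},
    ],
}

def to_training_sample(traj: dict) -> dict:
    """flatten a captured session into a chat-format fine-tuning sample:
    the task becomes the user turn, the captured behavior the target."""
    messages = [{"role": "user", "content": traj["task"]}]
    messages.extend(traj["steps"])
    return {"messages": messages}

# one JSONL line per trajectory is the usual fine-tuning input shape
print(json.dumps(to_training_sample(trajectory)))
```

this is why "captured behavior, not synthetic" matters: the assistant turns already contain the tool-use decisions you want the 9B model to imitate.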
the milestone: agentic coding crossed the 8GB VRAM threshold.
→ Reddit r/LocalLLaMA (328 upvotes)
■ signal 5 — booklore: when vibe-coded infrastructure collapses
strength: ■■■□□
PSA from r/selfhosted: BookLore (self-hosted book management app) is mostly AI-generated, and it’s imploding. v2.0 shipped with crashes, data loss, UI requiring hard refresh to show changes. dev merging 20K-line PRs daily, each one bolting on new features without understanding the codebase.
top comment: “these are the kinds of bugs you get when nobody actually understands the codebase they’re shipping.”
vibe coding works until it doesn’t. when your infrastructure is AI-generated and nobody can debug it, you’re not building on sand — you’re building on vibes. this is the other side of the “agents ship fast” narrative.
the lesson: speed without understanding is just accelerated failure.
→ Reddit r/selfhosted (1,225 upvotes)
■ signal 6 — fish-speech: cleanest SOTA TTS you can self-host
strength: ■■■■□
fish-speech hits GitHub trending. SOTA open source TTS. production-ready, not a research demo. the pattern: every proprietary TTS is cloud-locked (ElevenLabs, Play.ht, OpenAI). fish-speech is a SOTA alternative you can run on your own hardware.
voice is the last interface to go local. fish-speech proves you can match cloud TTS quality without API calls. if your personal AI OS includes voice, this is the plumbing. sovereignty isn’t just about text — it’s about every modality.
the milestone: local TTS crossed the “sounds as good as the paid version” threshold.
→ GitHub (637 stars)
meta-pattern
the recursion is no longer theoretical. claude trains claude. agents distill into smaller agents. browser tools skip the UI layer entirely. and the infrastructure that ships fastest — the vibe-coded kind — is also the first to collapse.
two camps are forming: those who understand what they ship, and those who ship what they can’t understand. the first camp is slower. the second camp is louder. the question isn’t which wins. it’s which survives contact with users.
the personal AI stack isn’t just about tools anymore. it’s about comprehension. if you can’t debug it, you don’t own it.