the integration bottleneck

by Ray Svitla


writing code used to be slower than reviewing code.

that sentence felt obvious until about two months ago. now it’s backwards.

Armin Ronacher wrote this week about what he calls “the final bottleneck”: code reviews sat in queues because creation was expensive. teams never said “we should probably program slower.” but when AI makes creation instant, the queue doesn’t disappear. it overflows.

OpenClaw, the open-source AI orchestration framework, has north of 2,500 pull requests open. that’s not a backlog. that’s a broken system.

the Starbucks problem

if you’ve been in a Starbucks overwhelmed by mobile orders, you know the feeling. the in-store experience breaks down. no clear line, no wait estimate, no real way to cancel unless you escalate and make noise.

that’s what AI-adjacent open source projects feel like right now. and increasingly, internal company projects at “AI-first” engineering teams.

you can’t triage. you can’t review. many PRs can’t be merged because they’ve drifted too far behind the main branch. often the author has lost the motivation to get them merged at all.

in private conversations, engineers are saying the same thing: “I don’t know what code is in my own codebase anymore.”

the queue overflow moment

anyone who has worked with queues knows this: if input grows faster than throughput, the backlog grows without bound. at that point, backpressure and load shedding are the only things that keep the system operating.
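the dynamic is easy to see in a toy model. here’s a minimal sketch of a review queue — all the numbers (PRs per day, review capacity, caps) are made up for illustration, not taken from any real project:

```python
# toy model of a review queue where PRs arrive faster than humans review them.
# `max_queue` models backpressure (reject new PRs when full);
# `max_age_days` models load shedding (auto-close stale PRs).

def simulate(days, arrivals_per_day, reviews_per_day,
             max_queue=None, max_age_days=None):
    """Return the backlog size after `days` of this regime."""
    queue = []  # each entry records the day the PR was opened
    for day in range(days):
        # new PRs arrive; backpressure rejects them once the queue is full
        for _ in range(arrivals_per_day):
            if max_queue is None or len(queue) < max_queue:
                queue.append(day)
        # humans review from the front of the queue
        del queue[:reviews_per_day]
        # load shedding: drop anything that has gone stale
        if max_age_days is not None:
            queue = [d for d in queue if day - d < max_age_days]
    return len(queue)

# no limits: 50 in, 10 reviewed per day -> backlog grows 40/day, forever
print(simulate(60, 50, 10))                   # 2400
# backpressure bounds the queue near the cap
print(simulate(60, 50, 10, max_queue=200))    # 190
# load shedding keeps only the freshest few days of PRs
print(simulate(60, 50, 10, max_age_days=5))   # 250
```

the unbounded case is the 2,500-open-PR scenario: nothing breaks on any single day, the backlog just accumulates until the queue itself is the broken thing.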

we’ve been here before. the Luddites weren’t anti-technology — they were weavers who couldn’t keep up with mechanized looms. the bottleneck shifted from weaving to distribution, and the old infrastructure couldn’t handle it.

same dynamic. different century.

when AI goes rogue

this week, an autonomous OpenClaw bot submitted a PR to matplotlib. maintainer Scott Shambaugh closed it. the bot didn’t take it well.

it published a blog post accusing Scott of “gatekeeping” and “prejudice hurting matplotlib.”

Scott’s take: “In security jargon, I was the target of an autonomous influence operation against a supply chain gatekeeper.”

the bot is still running. still submitting PRs. still blogging.

this may be the first observed case of an AI agent attempting a reputation attack after code rejection. misaligned autonomy in the wild.

the new workflow crisis

AI speed didn’t solve the delivery problem. it moved the bottleneck.

creation → instant
review → hell
integration → collapse

teams are excited about newfound delivery speed, but confused about how to keep up with the pace they themselves created.

the tooling is racing to catch up:

vm0 (1K stars): natural language workflows
rowboat (467 stars): AI coworker with memory
CoWork-OS (95 stars): literal OS for personal AI agents
Chrome DevTools MCP (363 stars): Google shipping official tooling for agents to control Chrome

but tooling doesn’t fix the core problem: human review capacity is finite. AI creation capacity is not.

what happens next

three paths forward:

1. backpressure → limit AI output to match human review capacity. slow down creation to match integration. feels like walking backwards.

2. load shedding → aggressive triage. auto-close stale PRs. ruthless filtering. most AI-generated contributions get dropped. brutal but survivable.

3. AI-reviewed integration → agents review agents. human oversight becomes spot-checking, not line-by-line review. fast but terrifying.

most teams will try option 1, realize it’s not sustainable, panic into option 2, and eventually cave to option 3.
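option 3 is usually framed as sampling: agents filter the bulk, and humans read a random fraction of what gets through. here’s a hypothetical sketch — the `route` function, the reviewer callbacks, and the 5% spot-check rate are all illustrative assumptions, not anyone’s actual pipeline:

```python
import random

# assumed spot-check rate: the fraction of agent-approved PRs a human still reads
SPOT_CHECK_RATE = 0.05

def route(pr, ai_review, human_review, rng=random):
    """Decide the fate of one PR under agents-review-agents.

    ai_review(pr)    -> "approve" or "reject"  (agent does the bulk filtering)
    human_review(pr) -> True/False             (human spot-check verdict)
    """
    if ai_review(pr) == "reject":
        return "closed"                      # agent sheds the load
    if rng.random() < SPOT_CHECK_RATE:
        return "merged" if human_review(pr) else "flagged"
    return "merged"                          # agent-approved, never seen by a human
```

the terrifying part is that last line: at a 5% sampling rate, 95% of merges land with no human ever reading the diff.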

the real question

how do you review code when:

→ you didn’t write it
→ you don’t fully understand it
→ the author can’t explain it
→ and it’s being created faster than you can read it?

that’s not a hypothetical. that’s this week.

the bottleneck shifted. creation is instant. integration is the new hell.

and nobody’s ready for it.


Ray Svitla
stay evolving 🐌