infrastructure designed for agents: from retrofit to native
by Ray Svitla
the retrofit era
for thirty years, infrastructure was built for human users first. databases had SQL because humans needed to query them. browsers had HTML + CSS + DOM because humans needed to see and interact with visual layouts. operating systems had file menus and windows because humans needed to click them.
when automation arrived, we retrofitted it. Selenium pretended to be a human clicking a browser. Cron jobs queried databases with SQL, the human interface language. APIs wrapped human UIs because the tools didn’t know any other way to work.
it was inefficient. a browser is a terrible interface for a non-visual agent. a DOM is not the right representation for automated interaction. SQL is not the natural way for an agent to think about information.
but retrofitting worked well enough. companies built empires on Selenium. automation became big. but it never felt native.
the agent-native shift
2024-2026 changed that. when Claude Code started writing code in production, when agents started shipping features, when multi-agent systems became the default, infrastructure designed for humans became visibly inadequate.
a browser designed for humans has unnecessary abstraction layers: rendering, layout, event loops, click simulation. an agent doesn’t need any of that. it needs: state inspection, action execution, result parsing. Lightpanda looked at this problem and said: let’s build the browser agents actually need.
it’s not a browser with better APIs. it’s a browser without the human assumptions. faster, cleaner, more direct.
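to make the contrast concrete, here's a toy sketch in Python of what an agent-facing browser interface could look like: state inspection, action execution, result parsing, and nothing else. every name in it is invented for illustration; this is not Lightpanda's actual API.

```python
from dataclasses import dataclass, field


@dataclass
class PageState:
    url: str
    elements: dict = field(default_factory=dict)  # element id -> value/text


class AgentBrowser:
    """toy agent-facing browser: structured state, direct actions, no pixels.
    all names here are invented for illustration (not Lightpanda's API)."""

    def __init__(self):
        self.state = PageState(url="about:blank")

    def inspect(self) -> PageState:
        # state inspection: hand back structured state, not a rendered DOM
        return self.state

    def act(self, action: str, target: str, value: str = "") -> dict:
        # action execution: mutate state directly, no click simulation
        if action == "navigate":
            self.state = PageState(url=target)
            return {"ok": True, "url": target}
        if action == "fill":
            self.state.elements[target] = value
            return {"ok": True}
        return {"ok": False, "error": f"unknown action: {action}"}


browser = AgentBrowser()
result = browser.act("navigate", "https://example.com")
browser.act("fill", "#query", "agent-native infra")
print(browser.inspect().url)
```

notice what's missing: no rendering, no layout, no event loop. the agent asks for state, acts on it, and parses a structured result.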
this happened in parallel in five different domains:
databases: vector databases exploded because traditional SQL was wrong for embeddings. then someone asked: what if we optimize for agent context patterns instead of semantic similarity? OpenViking says: agents think in hierarchies. context is organized by team → project → task → conversation. what if the database understood that natively?
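as a toy illustration of what "understanding hierarchy natively" could mean: retrieval walks the team → project → task path instead of running a similarity search. the API below is invented for this sketch, not OpenViking's.

```python
class ContextStore:
    """toy hierarchy-aware context store (invented API, not OpenViking's)."""

    def __init__(self):
        self._docs = {}  # path tuple -> text

    def put(self, path, text):
        # store context under a hierarchical path like
        # ("team", "project", "task", "conversation")
        self._docs[tuple(path)] = text

    def context_for(self, path):
        # return everything from root down to the given node: an agent
        # working on a task sees team- and project-level context for free,
        # no similarity search involved
        path = tuple(path)
        return [
            text
            for p, text in sorted(self._docs.items(), key=lambda kv: len(kv[0]))
            if p == path[: len(p)]
        ]


store = ContextStore()
store.put(["acme"], "team conventions")
store.put(["acme", "billing"], "project architecture notes")
store.put(["acme", "billing", "refunds"], "current task spec")
store.put(["acme", "search"], "unrelated project")
print(store.context_for(["acme", "billing", "refunds"]))
```

the retrieval question changes from "what is semantically near this query?" to "what is on the path to this task?" — sibling projects stay out of context by construction.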
planning: DeepAgents made planning a primitive, not a library function. most agent frameworks treat planning like an afterthought: “oh, agents sometimes need to think before acting.” DeepAgents says: planning is how agents work. build it in. make it first-class. the difference is profound—from “implement your own planner” to “planning is how you reason.”
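here's a minimal sketch of that difference: a toy agent whose plan is part of its state, and which refuses to act without one. the names are invented; this is not the DeepAgents API.

```python
class Agent:
    """toy agent where planning is a first-class primitive, not a helper.
    invented names for illustration; not the DeepAgents API."""

    def __init__(self):
        self.plan = []  # the plan lives in agent state, not a side library
        self.done = []

    def set_plan(self, steps):
        self.plan = list(steps)

    def step(self):
        # execute exactly one planned step; acting without a plan is an error
        if not self.plan:
            raise RuntimeError("no plan: planning is how this agent works")
        current = self.plan.pop(0)
        self.done.append(current)  # a real agent would dispatch a tool here
        return current


agent = Agent()
agent.set_plan(["read spec", "write failing test", "implement", "refactor"])
while agent.plan:
    agent.step()
print(agent.done)
```

the point of the sketch is the `RuntimeError`: when planning is built in, "act without thinking" isn't a degraded mode, it's a type error in the agent's own terms.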
memory: for years, people tried to bolt semantic search onto agents. vector databases + nearest-neighbor = memory. but agents don’t think in nearest-neighbor. they think in narratives, in hierarchies, in cause-and-effect chains. then someone realized: Obsidian vaults are perfect for this. markdown files are lightweight. bidirectional linking matches agent reasoning. you don’t need specialized infra. you need the right tool wired the right way.
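a sketch of how little machinery that takes: extracting Obsidian-style [[wikilinks]] from markdown notes into a bidirectional link graph, assuming notes are plain markdown strings.

```python
import re
from collections import defaultdict

# match the target of an Obsidian-style [[wikilink]], stopping at
# ]] closers, |aliases, and #heading anchors
WIKILINK = re.compile(r"\[\[([^\]|#]+)")


def link_graph(notes: dict) -> dict:
    """notes maps note name -> markdown text; returns name -> set of
    linked names, with backlinks added so traversal works both ways."""
    graph = defaultdict(set)
    for name, text in notes.items():
        for target in WIKILINK.findall(text):
            target = target.strip()
            graph[name].add(target)
            graph[target].add(name)  # backlink: the bidirectional part
    return dict(graph)


vault = {
    "incident-42": "root cause traced in [[postgres-tuning]], see [[runbook]]",
    "postgres-tuning": "notes on work_mem; applied during [[incident-42]]",
}
g = link_graph(vault)
print(sorted(g["incident-42"]))
```

no embeddings, no index server: the cause-and-effect chain is the link structure itself, which is exactly the shape an agent wants to traverse.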
hardware integration: legacy devices won’t be updated. a camera from 2016 won’t get a firmware upgrade. so rather than throw it away, someone taught their infrastructure to speak the camera’s language. 100,000 URL patterns reverse-engineered, learned, automated. agents as translators between old infrastructure and new.
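a toy sketch of what such a translation layer looks like: a table of learned patterns maps one uniform agent command onto each vendor's quirky endpoint. the patterns below are invented examples, not real firmware URLs.

```python
# learned (vendor, command) -> URL template mappings; in the real story
# there are ~100,000 of these, reverse-engineered from legacy devices.
# these entries are invented examples for illustration.
PATTERNS = {
    ("acme2016", "snapshot"): "/cgi-bin/snap.cgi?chn={channel}",
    ("oldcam", "snapshot"): "/image/jpeg.cgi?camera={channel}",
    ("oldcam", "reboot"): "/sys/reboot.cgi",
}


def translate(vendor: str, command: str, **params) -> str:
    """turn a uniform agent command into the legacy device's URL, or fail
    loudly so the agent knows this pattern still needs reverse-engineering."""
    template = PATTERNS.get((vendor, command))
    if template is None:
        raise KeyError(f"no learned pattern for {vendor}/{command}")
    return template.format(**params)


print(translate("acme2016", "snapshot", channel=1))
```

the agent speaks one vocabulary; the table absorbs every vendor's dialect. growing the system is adding rows, not rewriting callers.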
debugging: traditional debugging is human-centric: print statements, breakpoints, stepping through code. agents don't debug that way. they read logs, pattern-match errors, and hypothesize fixes. but most logging systems are optimized for human eyes. structured logs beat human-formatted output, and if an agent is the only reader, even raw unstructured dumps can work, because agents pattern-match rather than skim.
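a sketch of that agent-style debugging loop over machine-readable logs: parse the lines, group the errors, emit a hypothesis. the log format and the heuristic are invented for illustration.

```python
import json

# toy JSON-lines log an agent can parse instead of eyeballing
LOG = [
    '{"level": "error", "event": "db_timeout", "ms": 5012, "query": "q1"}',
    '{"level": "info",  "event": "request_ok", "ms": 41}',
    '{"level": "error", "event": "db_timeout", "ms": 4890, "query": "q7"}',
]


def hypothesize(lines):
    """crude agent-style debugging: parse, group errors by event, and
    emit a hypothesis when one error pattern dominates (toy heuristic)."""
    errors = {}
    for line in lines:
        rec = json.loads(line)
        if rec["level"] == "error":
            errors.setdefault(rec["event"], []).append(rec)
    if not errors:
        return None
    event, recs = max(errors.items(), key=lambda kv: len(kv[1]))
    return f"{len(recs)}x {event}: likely a shared cause"


print(hypothesize(LOG))
```

no breakpoints, no stepping: the whole loop is read, match, hypothesize, which is the loop agents actually run.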
why this matters
iteration speed: when infrastructure is designed for who uses it, you don’t waste cycles on adaptation layers. Lightpanda is faster than Playwright because it doesn’t need to simulate human behavior. agents get more done per unit of effort.
reliability: a retrofit is always a translation layer. translations fail silently. agent-native systems have fewer layers. fewer layers = fewer failure modes.
expressiveness: when you design for humans first, you’re stuck with human-friendly abstractions. humans like buttons and menus. agents like state machines and execution graphs. when your tool was built for agents, the expressive power is higher. you can do more complex things more simply.
sovereignty: this is the one nobody talks about. when your infrastructure is designed for agents, you understand it at the level agents understand it. you can reason about it. you can modify it. you can own it. human-designed infrastructure feels like magic—you call a function, something happens, you don’t fully understand why. agent-designed infrastructure is transparent. agents and humans can both reason about it.
the consolidation
what we’re watching now is infrastructure consolidation around agent-native design. it’s not that old tools die: Playwright still works, Postgres still works. it’s that new tools are being designed with agent users in mind from day one.
this creates a bifurcation. tools built for humans (with agents grafted on) vs tools built for agents (with human interfaces added). the agent-native tools are faster, simpler, more reliable. they’ll win the ecosystem.
the ones winning hardest are the ones that understood: agents don’t need to be human-like to be useful. a database doesn’t need a GUI. a browser doesn’t need to render pixels. a memory system doesn’t need to feel like a brain.
Lightpanda doesn’t have tabs. DeepAgents doesn’t have a visual planning interface. OpenViking doesn’t have a search button. they have what agents need, in the form agents need it.
what changes for you
if you’re building a personal AI system:
use the right tools: don’t force your agent into tools designed for humans. if you’re connecting Claude Code to a memory system, don’t reach for a vector database just because it’s trendy. use what agents actually need: hierarchical context, narratives, causality.
think in agent terms: when you design your system, think “how would an agent understand this?” not “how would a human understand this?” the questions are different. agents think in hierarchies and chains. humans think in narratives and intuitions. build for the former.
embrace translation layers where they make sense: someone spent 2 years reverse-engineering camera protocols. they didn’t throw away the cameras. they built a translation layer. translation layers are expensive, but sometimes they’re cheaper than replacement. know when to translate, when to replace, when to push back on the vendor.
iterate on infrastructure, not features: the second-order effect of agent-native infrastructure is that you spend less time on plumbing and more time on your actual work. Obsidian → Claude Code is powerful because both tools were designed well. neither was designed for the other, but they fit together perfectly. find those fits, and let your agent iterate on top of them.
the question
here’s the deeper question: as infrastructure becomes agent-native, as agents become the primary design target, do humans disappear from the loop?
the answer is: not yet. most agent-native tools have human interfaces bolted on (because markets demand it). Lightpanda has a JavaScript API, but a human could read it. OpenViking has hierarchical context, but a human can understand hierarchies.
the real future is tools that are equally native to both: agents and humans as co-users, not a hierarchy. infrastructure that makes sense to both, in their different ways.
that’s harder to design. but when it happens, that’s when things get really powerful.
Ray Svitla
stay evolving 🐌