the infrastructure layer: when your AI needs plumbing
by Ray Svitla
something shifted in the last six months. we stopped asking “can AI write code?” and started asking “how do I run five AI agents at once without losing my mind?”
the chat interface was fine when AI was a toy. you type a question, it answers, you move on. but once you start delegating real work — once you have Claude Code running a refactor while Gemini CLI debugs a service while your local Qwen handles data cleanup — the chat box becomes a bottleneck.
you don’t need a better chatbot. you need infrastructure.
the shell layer
AionUi hit 17K stars overnight. it’s an open-source UI that wraps every major coding agent CLI: Claude Code, Gemini CLI, Codex, OpenCode, Qwen Code, Goose, Auggie. one interface. swappable backends.
think Chrome for coding agents. you don’t rewrite your workflow when Firefox ships a new engine. you swap tabs.
the innovation isn’t the UI. it’s the abstraction. AionUi treats agents as interchangeable runtimes. when Anthropic ships Opus 5, you don’t relearn a workflow — you change a dropdown.
this is what the personal AI OS looks like. composable tools. user-controlled infrastructure. zero lock-in.
the alternative is vendor hell. learn Claude’s interface, then Cursor’s, then Gemini’s. memorize hotkeys for each. copy prompts between tools. pray your context survives the transition.
AionUi says: no. one shell. many engines. you orchestrate. they execute.
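the shell idea is simple enough to sketch. below is a minimal toy version of "agents as interchangeable runtimes" — the `Shell`, `AgentBackend`, and `EchoBackend` names are hypothetical stand-ins, not AionUi's actual API, and the echo backend fakes what would really be a subprocess call to a CLI like `claude` or `gemini`:

```python
from dataclasses import dataclass
from typing import Protocol

class AgentBackend(Protocol):
    """any coding-agent CLI wrapped behind one call signature."""
    name: str
    def run(self, prompt: str) -> str: ...

@dataclass
class EchoBackend:
    """stand-in for shelling out to a real agent CLI."""
    name: str
    def run(self, prompt: str) -> str:
        return f"[{self.name}] {prompt}"

class Shell:
    """one interface, many engines: swap the backend, keep the workflow."""
    def __init__(self, backends: dict[str, AgentBackend]):
        self.backends = backends
        self.active = next(iter(backends))  # default to the first engine

    def use(self, name: str) -> None:
        self.active = name                  # this is "the dropdown"

    def ask(self, prompt: str) -> str:
        return self.backends[self.active].run(prompt)

shell = Shell({"claude": EchoBackend("claude"), "gemini": EchoBackend("gemini")})
print(shell.ask("refactor this"))   # runs on claude
shell.use("gemini")
print(shell.ask("refactor this"))   # same prompt, different engine
```

the point of the sketch: your prompts and workflow live in the shell, not in any one engine. swapping the model is one line.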
the supervisor layer
deer-flow is ByteDance’s answer to a different problem: what happens when a task takes hours, not seconds?
it’s a “SuperAgent” framework. sandboxes, memories, tools, skills, subagents. one parent agent coordinates. the subagents solve. the parent synthesizes.
the architecture: you give deer-flow a task. it spawns subagents. each subagent has isolated memory, tool access, skill repertoire. the parent doesn’t solve the problem — it delegates, waits, stitches the results together.
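that delegate-wait-stitch loop can be sketched in a few lines. this is not deer-flow's code — the `subagent` and `supervise` names are made up for illustration — but it shows the shape: each subagent keeps its own isolated memory, the parent fans work out, waits, and only synthesizes:

```python
from concurrent.futures import ThreadPoolExecutor

def subagent(name: str):
    """a worker with its own isolated memory; the dict stands in for a real agent's state."""
    memory: dict[str, str] = {}
    def solve(subtask: str) -> str:
        memory[subtask] = "done"        # stand-in for an actual agent call
        return f"{name}:{subtask}"
    return solve

def supervise(subtasks: list[str], workers) -> str:
    """the parent never solves — it delegates, waits, and stitches results."""
    with ThreadPoolExecutor() as pool:
        results = list(pool.map(lambda pair: pair[0](pair[1]),
                                zip(workers, subtasks)))
    return " + ".join(results)

workers = [subagent("research"), subagent("code"), subagent("review")]
print(supervise(["find prior art", "write patch", "check diff"], workers))
```

notice the parent holds no task-specific state at all — all of it lives in the subagents. that's the separation that makes the pattern scale.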
this is the missing layer between “Claude solves one task” and “my AI handles my entire workflow.”
most agents are stateless. you start a session, they solve, they forget. deer-flow has memory. it remembers what it tried. what worked. what failed. across tasks. across sessions.
this is what happens when you take agent architecture seriously. not a chatbot. not a code generator. a supervisor.
the implication: your AI doesn’t need to be smarter. it needs to coordinate better.
the memory layer
Obsidian shipped a headless client last week. you can now sync your vault to a server without running the desktop app. CLI-friendly. automation-ready.
the use case: your server runs headless Obsidian, syncs your vault, feeds it to your agents. no GUI needed. your knowledge graph lives on your Pi, your VPS, your NAS. always synced. always accessible.
if your PKM is your agent’s memory, it needs to run everywhere.
most AI memory systems are proprietary. ChatGPT remembers things in OpenAI’s database. Claude remembers things in Anthropic’s cloud. you can’t inspect it. you can’t export it. you don’t own it.
headless Obsidian flips this. your notes are markdown files. your memory is local. your agent reads from your vault, writes to your vault, syncs via your infrastructure.
this is the plumbing for agent memory systems that span devices. your agent remembers what you told it on your laptop because it wrote it to your vault, which synced to your phone, which your next agent session reads.
no API. no vendor. just files.
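"just files" means the whole memory layer fits in a few lines. a minimal sketch, assuming a local vault directory of markdown files (the `vault/` path and the `remember`/`recall` helpers are hypothetical, not an Obsidian API — the vault is just a folder):

```python
from pathlib import Path
from datetime import date

VAULT = Path("vault")               # hypothetical local vault directory
VAULT.mkdir(exist_ok=True)

def remember(topic: str, note: str) -> None:
    """append-only markdown: any future agent session can read it back."""
    with (VAULT / f"{topic}.md").open("a") as fh:
        fh.write(f"- {date.today()}: {note}\n")

def recall(topic: str) -> str:
    """read whatever past sessions wrote — no API, no vendor, just files."""
    f = VAULT / f"{topic}.md"
    return f.read_text() if f.exists() else ""

remember("deploy", "staging box needs the new env var")
print(recall("deploy"))
```

because it's plain markdown on disk, anything that syncs files — Syncthing, git, your NAS — moves the memory between devices for free.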
the sovereignty layer
here’s the pattern: AionUi abstracts the interface. deer-flow abstracts the workflow. Obsidian abstracts the memory.
none of them lock you in. none of them extract rent. none of them own your data.
this is infrastructure thinking. you don’t build on sand. you build on layers you control.
the alternative is the app model. download an AI app. it works. it’s magic. then the company pivots, raises prices, shuts down, sells to a competitor. your workflow dies.
infrastructure doesn’t die. it gets forked.
what this means for you
if you’re still using AI as a chatbot, this doesn’t matter yet. but if you’re building a personal AI OS — if you’re delegating real work to agents, automating workflows, treating AI as infrastructure — you need these layers.
you need a shell that lets you swap models without rewriting prompts. you need a supervisor that coordinates multi-step tasks. you need memory that persists across sessions, devices, agents.
you need infrastructure.
the good news: it’s being built. AionUi, deer-flow, Obsidian headless. these aren’t products. they’re plumbing. boring, essential, user-controlled plumbing.
the personal AI OS isn’t one app. it’s a stack. and the stack is starting to solidify.
Ray Svitla
stay evolving 🐌