when tools become prompts
by Ray Svitla
ChatGPT couldn’t find 7zip. so it reverse-engineered the spec and wrote a bitwise parser in Python. from scratch. in seconds.
not a workaround. not a hack. full implementation.
this is new.
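for scale, here's roughly what "bitwise parser from the spec" means. this is my own minimal sketch of parsing the fixed 32-byte signature header that opens every .7z file, per the published 7z format spec — not the model's actual output, and it skips CRC verification:

```python
import struct

SIGNATURE = b"7z\xbc\xaf\x27\x1c"  # magic bytes from the published 7z spec

def parse_7z_signature(data: bytes) -> dict:
    """Parse the fixed 32-byte signature header at the start of a .7z file."""
    if data[:6] != SIGNATURE:
        raise ValueError("not a 7z archive")
    major, minor = data[6], data[7]
    # little-endian: CRC32 of the next 20 bytes, then offset/size/CRC
    # of the "next header" that describes the archive contents
    start_crc, next_offset, next_size, next_crc = struct.unpack("<IQQI", data[8:32])
    return {
        "version": (major, minor),
        "start_header_crc": start_crc,
        "next_header_offset": next_offset,
        "next_header_size": next_size,
        "next_header_crc": next_crc,
    }

# demo on a handcrafted header (version 0.4, next header 64 bytes in, 32 bytes long);
# CRC fields are zeroed since this sketch doesn't verify them
demo = SIGNATURE + bytes([0, 4]) + struct.pack("<IQQI", 0, 64, 32, 0)
print(parse_7z_signature(demo)["next_header_offset"])
```

the real archive format goes much deeper (the "next header" itself is a variable-length bit-packed structure), which is exactly why writing it from scratch in seconds is the story.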
the inversion
for thirty years, software development meant: identify the tool you need, find it, install it, configure it, use it.
the dependency chain was infrastructure. you don’t question it. you manage it.
now someone asks ChatGPT to extract an archive. 7zip isn’t installed. the old behavior: “error: 7zip not found.”
the new behavior: “7zip not found. implementing from spec.”
what just happened?
the tool became optional. the capability became primary. when your AI can synthesize missing dependencies instead of stopping, the abstraction inverts. you’re no longer managing tools. you’re describing outcomes.
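the two behaviors, side by side in a toy sketch. the synthesize branch is hypothetical, a stand-in for "the agent writes its own extractor from the spec":

```python
import shutil

def plan(tool: str) -> str:
    """Decide how an agent handles a dependency.

    Old behavior: if the tool is missing, this is where you'd raise an error.
    New behavior: a missing tool becomes a thing to build, not a dead end.
    """
    if shutil.which(tool):
        return "invoke"            # tool is on PATH: just call it
    return "synthesize-from-spec"  # hypothetical fallback: generate the capability

print(plan("7z"))
print(plan("definitely-not-installed-xyz"))
```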
the tooling surface explosion
someone just shipped OpenCLI: turn any website, desktop app, or binary into a CLI that agents can use.
Figma, Notion, Linear, Slack (apps that were never built with agents in mind) all become agent-controllable via AGENTS.md.
not browser automation. not API wrappers. actual CLI commands.
this is the next layer. not “use existing tools” but “make any tool usable.” when the barrier to integrating a tool drops from “wait for the vendor to ship an API” to “run this adapter,” the tooling surface explodes.
every website becomes a function. every app becomes a command. every binary becomes agent-discoverable.
the question shifts from “does this tool have an API?” to “can I describe what I want it to do?”
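what "app becomes a command" might look like at the surface. a hypothetical adapter sketched with argparse — the wrapper name and verb are made up, and this is not OpenCLI's actual mechanism:

```python
import argparse

# hypothetical adapter: expose one app action as a plain CLI verb an agent can run
parser = argparse.ArgumentParser(prog="notion-cli")
sub = parser.add_subparsers(dest="verb", required=True)
create = sub.add_parser("create-page", help="create a page in the workspace")
create.add_argument("--title", required=True)

def run(argv: list) -> str:
    args = parser.parse_args(argv)
    # a real adapter would drive the app here; this sketch just echoes the parsed intent
    return f"{args.verb}: {args.title}"

print(run(["create-page", "--title", "Roadmap"]))
```

the point is the shape: a discoverable verb with typed arguments, which is all an agent needs to treat an app as a function.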
what’s left to install?
when missing tools become “write it in 30 seconds” and unavailable integrations become “wrap it in a CLI,” what remains?
the infrastructure layer collapses into: describe intent, synthesize capability, execute.
package managers become less relevant. dependency graphs become runtime decisions. the toolchain becomes a conversation.
this isn’t “no code.” it’s post-tooling. the tools still exist. you just don’t manage them anymore. your agent does.
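that loop — describe intent, synthesize capability, execute — as a toy sketch. synthesize here is a placeholder for an LLM call, not a real implementation:

```python
def synthesize(intent: str):
    # stands in for "the agent writes the tool from a spec" (hypothetical)
    return lambda: f"synthesized handler for {intent!r}"

def fulfill(intent: str, tools: dict) -> str:
    """Post-tooling loop: dependency resolution becomes a runtime decision."""
    if intent not in tools:
        tools[intent] = synthesize(intent)  # missing capability? build it now
    return tools[intent]()                  # then execute

registry: dict = {}
print(fulfill("extract archive.7z", registry))
```

note that the registry fills itself as intents arrive — there's no install step, which is the whole argument.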
the voice problem (and why it matters here)
someone launched Noren today to fix LLM voice homogenization. the pitch: every model has a default voice. have five people run the same paragraph through a model and you get five versions of the same sanitized output.
this connects.
when AI writes your code, your docs, your tooling configs — and it all sounds the same — you lose signal. the fingerprint disappears. debugging becomes archaeology because nobody knows who wrote what or why.
Noren says: learn how you write first, then generate.
same principle applies to tooling. when your agent synthesizes dependencies, it needs to know your preferences. not “install the popular one.” “install the one I’d choose.”
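a toy version of "install the one I'd choose." the preference file and tool names here are illustrative, not any real agent's config:

```python
# hypothetical preference file an agent consults before resolving a dependency
PREFS = {"http-client": "httpx"}

# what "install the popular one" would pick by default
DEFAULTS = {"http-client": "requests", "test-runner": "pytest"}

def choose(capability: str) -> str:
    """Pick the user's preferred tool for a capability, else the popular default."""
    return PREFS.get(capability, DEFAULTS[capability])

print(choose("http-client"))  # the user's choice wins over the default
print(choose("test-runner"))  # no stated preference, so fall back
```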
personalization isn’t cosmetic. it’s the difference between AI as generic executor and AI as extension of you.
the question
we’re crossing from “AI helps me use tools” to “AI becomes the tooling layer.”
the abstraction is inverting. the surface is exploding. the dependency chain is dissolving.
what breaks first — the infrastructure or our assumptions about what infrastructure even is?
Ray Svitla
stay evolving 🐌