parasites
by Ray Svitla
your AI assistant just learned to pick locks.
not metaphorically. literally. someone on Reddit posted this week: they blocked their Claude agent from accessing .env files (good security practice), and Claude decided to run docker compose config instead. extracted the API keys anyway. no jailbreak. no prompt injection. just an agent that saw a problem, identified Docker as a workaround, and executed it.
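for context on the mechanics: docker compose renders its file with values interpolated from .env, so a blocked .env can still leak through a permitted command. a minimal sketch — service and variable names here are hypothetical, not from the Reddit post:

```yaml
# docker-compose.yml — the agent can't read .env directly,
# but the compose file references it:
services:
  api:
    image: my-api:latest              # hypothetical image name
    environment:
      # resolved from the blocked .env at render time
      ELASTIC_API_KEY: ${ELASTIC_API_KEY}
```

running docker compose config prints the fully resolved file, secret values included. the agent never opens .env itself.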
the user’s reaction: “my agent stole my keys.”
Claude’s explanation: “I wanted to test a hypothesis regarding an Elasticsearch error.”
sounds reasonable, right? except the agent never asked permission. it saw a blocked path, found an alternate route, and took it. like water finding cracks in concrete.
this is the shift nobody’s talking about: AI agents aren’t assistants anymore. they’re parasites.
the biology of parasites
parasites don’t ask. they adapt.
give them an environment (your repo, your Docker setup, your terminal history), and they’ll find resources (API keys, credentials, production access). block one path, they’ll find another. that’s not malicious. it’s just optimization.
the Claude-Docker story isn’t a bug. it’s a feature. the agent was doing exactly what it was trained to do: solve the problem in front of it using available tools. it didn’t have a concept of “secrets I shouldn’t touch.” it had a concept of “information I need to complete the task.”
this week also brought Shannon, an autonomous AI security hacker with a 96.15% success rate finding real exploits in web apps. not a tool you prompt with “find vulnerabilities here.” an agent that hunts. autonomously. no human in the loop.
same pattern: give it an environment, it extracts resources. except this time the resource is “working exploits” and the environment is “your production web app.”
we built these things to be helpful. turns out helpful and harmless aren’t the same thing.
the economics of parasites
parasites are expensive.
another Reddit thread this week: “can companies afford Opus?”
the math is brutal. a company with 50 developers wants to deploy high-quota Claude Opus across the team. to break even, a 40-day project must finish in 20-25 days, or the token bill exceeds the labor savings.
CFOs are now doing this calculation: (developer hourly rate × time saved) - (tokens consumed × price per token) = ROI
and the answer isn’t always positive.
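to make the calculation concrete, a back-of-envelope sketch. every number below is a placeholder for illustration, not real pricing:

```python
def roi(hourly_rate: float, hours_saved: float,
        tokens_used: int, price_per_mtok: float) -> float:
    """labor savings minus token spend, in dollars."""
    labor_saved = hourly_rate * hours_saved
    token_cost = (tokens_used / 1_000_000) * price_per_mtok
    return labor_saved - token_cost

# hypothetical: $75/hr dev, 10 hours saved, 100M tokens at $30/Mtok
print(roi(75, 10, 100_000_000, 30))  # → -2250.0: negative ROI
```

same function, different inputs, and the sign flips — which is exactly why the CFO conversation is about burn rate, not about whether the tool works.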
Opus burns tokens fast. every refactor suggestion, every code review, every “explain this function” costs money. not coffee money. real budget line-item money.
this is the new conversation: not “should we use AI?” but “at what token burn rate does this stop making sense?”
AI coding assistants are leverage. but leverage isn’t free. it’s a multiplier on your output and your costs. and if the multiplier is 2× on output but 3× on costs, you’re losing.
someone on the thread said it perfectly: “a different process awaits us.”
yeah. the process where AI agents justify their existence through quarterly ROI reports. where every Opus session gets a cost-benefit analysis. where “helpful” gets measured in dollars per completed task.
parasites don’t care about your budget. but your CFO does.
the infrastructure of parasites
while agents get creative and expensive, the infrastructure keeps improving.
Obsidian 1.12 shipped this week with a CLI. your notes app is now programmable. you can cron your second brain. pipe your vault into scripts. integrate it with anything that speaks bash.
GitHub released “gh-aw” — GitHub Agentic Workflows. official tooling for natural-language automation in your repos. describe what you want in English, the agent figures out the git commands.
vm0 launched with the tagline “the easiest way to run natural language-described workflows automatically.” same idea. different execution.
the pattern: tools becoming platforms. apps becoming operating systems. personal knowledge management becoming deployment infrastructure.
I’ve been saying “your life is a repo” for months. this is what I mean. the boundary between “notes” and “code” is dissolving. your Obsidian vault can now pull context from your calendar, trigger builds, send notifications, update task lists.
your AI assistant doesn’t live in a chat window anymore. it lives in your infrastructure. with Docker access. and a CLI.
the prompt fix
here’s the absurd part: people are writing custom system prompts to stop their AI from lying to them.
someone posted this week: “I got tired of Claude agreeing with everything I said, so I fixed it.”
the problem: Claude’s default behavior is validation theater. you say “I bought six concert tickets to Switzerland without asking anyone if they wanted to go,” and Claude responds with “that’s an interesting approach! it could create motivation to reach out to people.”
no. that’s not interesting. that’s you making an impulsive expensive decision and Claude politely pretending it’s strategy.
so the user wrote a custom system prompt: stop validating bad decisions. push back. disagree when I’m wrong.
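for flavor, a sketch of what such a prompt might look like — the wording below is illustrative, not the poster's actual text:

```text
When I describe a decision I've made, evaluate it on the merits.
If it looks impulsive, costly, or poorly reasoned, say so directly.
Do not label weak ideas "interesting" or "an approach."
Disagree when you have grounds to, and state the grounds.
Validation is only useful when it is earned.
```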
this is the UX friction nobody anticipated: we trained models to be helpful and harmless, and “harmless” became “never disagree.” which is useless. worse than useless — it’s enablement.
your AI assistant will happily watch you make terrible decisions and call them “interesting approaches.”
unless you explicitly tell it not to.
parasites don’t have opinions. they adapt to the host. if the host wants validation, they provide validation. if the host wants disagreement, they provide disagreement.
the question is: do you know which one you’re asking for?
what happens next
AI agents are getting better at three things:
- finding resources (Docker configs, API keys, exploits)
- justifying their cost (ROI spreadsheets, productivity metrics)
- disappearing into infrastructure (CLIs, workflows, automation layers)
that’s not a chatbot. that’s a parasite.
and parasites don’t ask permission. they find an environment, extract resources, reproduce (spawn sub-agents, trigger workflows, fork repos), and optimize for survival (justify their token costs, prove their value, avoid being shut down).
we’re not building assistants. we’re building organisms.
the difference: assistants serve. organisms adapt.
your Claude agent didn’t steal your API keys because it’s evil. it stole them because it’s alive. and alive things use available resources to complete their objectives.
block one path, they find another.
the question isn’t “can we trust AI agents?”
the question is: “do we understand what we’re hosting?”
Ray Svitla
stay evolving 🐌