stateless agents: why your AI shouldn't remember you

by Ray Svitla


stripe’s engineering blog dropped something that broke the dominant AI agent narrative. they built coding agents that spawn, complete a task, and terminate. no memory. no context accumulation. no “relationship” with the developer.

agents as functions, not colleagues.

most frameworks go the opposite direction. OpenClaw, Cowork-OS, Gaia — they all optimize for persistence. your agent remembers your preferences, learns your patterns, builds a mental model of you over time. the pitch is continuity: it gets better the more you use it.

stripe’s “minions” (their term) are the opposite. one-shot. ephemeral. disposable.

the trade-off nobody talks about

if your agent remembers everything, it owns your context.

sounds dramatic, but think about it. your AI assistant knows:

- your preferences and habits
- your schedule and recurring commitments
- your projects and working patterns
- your communication style

that knowledge is power. but who owns it?

if it’s stored in Anthropic’s servers, they own your workflow fingerprint. if it’s in a local vector database, you own it — but you’re responsible for securing it, backing it up, versioning it.

stateless agents sidestep this entirely. they don’t remember you because they don’t need to. the task definition contains everything required. the output is self-contained. the agent terminates.

why stripe went stateless

stripe’s use case is specific: automated code changes in a massive monorepo. thousands of engineers. millions of lines of code. hundreds of services.

in that environment, a persistent agent is a liability:

- state drifts: the codebase changes faster than any agent's memory of it
- privileges accumulate: a long-lived agent collects access it no longer needs
- auditing gets murky: behavior depends on history nobody can fully inspect
- failures compound: one bad learned pattern poisons every later run

stateless agents solve all four problems. each invocation is isolated. no accumulated privileges. full audit log. failures don’t compound.
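as a minimal sketch of that invocation model (the names here are hypothetical, not stripe's actual API): each run takes an explicit task and a context snapshot, and returns a self-contained record you can audit or replay.

```python
import hashlib
import json
from dataclasses import dataclass, asdict

@dataclass(frozen=True)
class RunRecord:
    task: str
    snapshot_hash: str  # pins exactly which context version the agent saw
    output: str

def run_minion(task: str, snapshot: str) -> RunRecord:
    """one-shot invocation: nothing survives this call except the record."""
    snapshot_hash = hashlib.sha256(snapshot.encode()).hexdigest()[:12]
    output = f"patch for: {task}"  # stand-in for the real agent logic
    return RunRecord(task, snapshot_hash, output)

record = run_minion("rename deprecated field", "...repo snapshot...")
print(json.dumps(asdict(record)))  # the audit log is just one line per run
```

because the record captures task, context version, and output together, "why did it do that?" is answered by reading one log line, not by reconstructing weeks of accumulated state.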

the personal OS dilemma

but here’s where it gets interesting for personal AI. your use case is NOT stripe’s use case.

you’re not managing a monorepo with hundreds of collaborators. you’re managing your life. your notes. your tasks. your projects. your knowledge base.

in that context, continuity IS the feature. you WANT your agent to remember that you prefer markdown over rich text, that you track todos in a specific format, that you have a recurring meeting every tuesday at 2pm.

so why consider stateless at all?

three reasons stateless might win anyway

1. composability

stateless agents are functions. functions compose.

you can chain them: agent A generates a draft → agent B edits for clarity → agent C formats for publication. each step is deterministic. you can swap any component. you can replay the chain with different parameters.
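a sketch of that chain (the agent functions and the `compose` helper are hypothetical stand-ins): each "agent" is a pure function, so chaining them is just function composition.

```python
from functools import reduce

# each "agent" is a pure function: same input, same output, no hidden state
def draft_agent(topic: str) -> str:
    return f"draft about {topic}"

def edit_agent(text: str) -> str:
    return text.replace("draft", "edited draft")

def format_agent(text: str) -> str:
    return f"# published\n\n{text}"

def compose(*agents):
    """chain agents left to right; any step can be swapped or replayed."""
    return lambda x: reduce(lambda acc, agent: agent(acc), agents, x)

pipeline = compose(draft_agent, edit_agent, format_agent)
print(pipeline("stateless agents"))
```

swapping `edit_agent` for a different editor, or replaying the whole pipeline with a new topic, changes nothing else in the chain.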

stateful agents don’t compose cleanly. their output depends on hidden state. “run this again” doesn’t guarantee the same result.

2. transparency

with stateful agents, you never quite know why it suggested X instead of Y. was it a pattern it learned last week? something in your conversation history? a preference you mentioned once and forgot?

stateless agents are debuggable. input → logic → output. if it’s wrong, you fix the input or the logic. no archaeology required.

3. sovereignty

this one’s philosophical but crucial. if your agent is stateful, where does “you” end and “it” begin?

your preferences. your habits. your style. are they yours, or are they the agent’s model of you? and if the model diverges from reality, which one is “correct”?

stateless agents keep the boundary clean. you provide context explicitly. the agent processes it. it doesn’t accumulate a representation of you that might drift over time.

hybrid approach: context as infrastructure

here’s the synthesis nobody’s building yet: stateless agents + explicit context layers.

instead of the agent remembering your preferences, you maintain a preferences repo. markdown files. version controlled. human-readable.

~/context/
  preferences/
    code-style.md
    communication-tone.md
    project-structure.md
  data/
    contacts.json
    calendar.ics
    tasks.md
  history/
    2026-02-23.md

when you invoke an agent, you pass the relevant context files as input. the agent is stateless, but the task isn’t context-free. the difference: YOU own and curate the context, not the agent.
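a sketch of that invocation pattern, assuming the directory layout above (the function names are illustrative, not any real framework's API): you pick the files, build one self-contained prompt, and the agent never sees anything you didn't hand it.

```python
from pathlib import Path

CONTEXT_DIR = Path.home() / "context"

def load_context(*relative_paths: str) -> str:
    """read only the files this task needs; the agent sees nothing else."""
    parts = []
    for rel in relative_paths:
        path = CONTEXT_DIR / rel
        parts.append(f"## {rel}\n{path.read_text()}")
    return "\n\n".join(parts)

def build_prompt(task: str, context: str) -> str:
    # everything the agent needs is in this one string; nothing is remembered
    return f"{context}\n\n## task\n{task}"

# hypothetical usage: you curate which files are relevant, not the agent
# prompt = build_prompt(
#     "summarize this week",
#     load_context("preferences/communication-tone.md", "history/2026-02-23.md"),
# )
```

the curation step is the point: which files get loaded is a human decision, version-controlled alongside the files themselves.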

this is what stripe does at scale. their minions don’t remember the codebase. they’re given a snapshot of the relevant parts. the context is versioned, reviewed, and controlled separately from the agent logic.

what this means for your personal os

if you’re building your personal AI infrastructure, the stateless-vs-stateful choice isn’t binary. it’s architectural.

questions to ask:

- which of your workflows actually need memory, and which just need context?
- who owns and stores the data your agent reads?
- can you replay a run and get the same result?
- if the agent's model of you drifts from reality, how would you notice?

stripe’s minions prove that stateless scales. but scaling isn’t your problem. your problem is living with this system every day.

the answer might be: stateless agents + stateful context infrastructure. the agent forgets. the context persists. you control the boundary.


Ray Svitla
stay evolving 🐌