from prompts to context: the paradigm nobody asked for

by Ray Svitla


remember when we all became prompt engineers?

that was 2023. everyone had their ChatGPT tips thread. “act as an expert in X”, “think step by step”, “you are a helpful assistant that”. we traded prompts like rare pokemon cards. some people built entire businesses selling prompt libraries.

turns out we were optimizing for the wrong variable.

the prompt theater

prompting was always theater. performance art where you pretended the AI needed the right incantation to work. like those old google search tricks — putting quotes around phrases, using site: operators. except google already knew what you meant. you were just playing a game.

with LLMs it’s worse because they do respond differently to phrasing. so the game has real stakes. but the game itself is still wrong.

here’s what actually matters: context.

context is everything you didn’t say

a prompt is what you type into the box. context is everything else: your files, your conversation history, your style guides, your previous decisions, the current state of your workspace.

most of that never touches the prompt box.

the best AI workflows I’ve seen don’t have better prompts. they have better context systems. they’ve built environments where the AI knows things without being told every time.

from script to environment

think about the difference between a shell script and a shell environment.

a script is linear. you write instructions top to bottom. every time you run it, same input → same output. that’s prompting. you craft the perfect sequence of words, save it, reuse it.

an environment is stateful. it remembers things. it has variables, aliases, functions. it has a working directory. when you run a command, it interprets it based on context. that’s what good AI usage looks like.
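the script/environment split can be sketched in a few lines of python. this is a toy illustration, not anything Claude Code actually runs: a stateless function versus a stateful object that resolves “that bug” from what it remembers.

```python
# a "script": stateless. same input, same output, every time.
def run_script(command: str) -> str:
    return f"ran: {command}"

# an "environment": stateful. it interprets commands using what it remembers.
class Environment:
    def __init__(self):
        self.state: dict[str, str] = {}  # variables, history, working context

    def run(self, command: str) -> str:
        if command.startswith("saw bug:"):
            # remember context now, so later requests can be vague
            self.state["last_bug"] = command.removeprefix("saw bug:").strip()
            return "noted"
        if command == "fix that bug":
            # "that" resolves from state instead of being spelled out in the prompt
            return f"fixing: {self.state.get('last_bug', 'unknown')}"
        return f"ran: {command}"

env = Environment()
env.run("saw bug: API timeout in fetch_users")
print(env.run("fix that bug"))  # fixing: API timeout in fetch_users
```

the script gives you nothing it wasn’t handed. the environment lets a three-word command carry the weight of everything that came before it.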

I built my Claude Code setup around this principle. the agent doesn’t start fresh every time. it has memory files, dated daily logs, and a structured workspace it can reference without being told.

when I say “fix that bug”, it knows which bug. when I say “use the same style”, it knows which style. not because the prompt is clever. because the context is rich.

the economics of context vs prompts

prompts are cheap to share, expensive to maintain.

you can copy-paste a prompt from twitter. but does it work in your context? probably not exactly. so you tweak it. then you forget the tweaks. then you start over next week.

context systems are expensive to build, cheap to maintain.

building a good context system takes work upfront. you need to structure your workspace, keep memory files, establish patterns. but once it’s running, each interaction gets easier. the system learns your way of working.

this is why personal AI infrastructure matters more than prompt libraries. and why people spending $100+/month on AI aren’t just buying compute — they’re buying context that persists.

what context engineering looks like

practical examples from my daily workflow:

old way (prompting):
“write a blog post about X in the style of Y, using these references: [paste], following this structure: [paste], avoiding these phrases: [paste]”

new way (context):
“write the blog post about X”

because the style guide lives in memory/selfmd-content-style-2026-02-04.md. the references are already in context from yesterday’s research. the structure is implied from previous posts.
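one naive way to picture what makes the short prompt work: a loader that prepends every memory file to whatever you type. a toy sketch, not how Claude Code actually assembles context, and the memory/ layout is illustrative.

```python
from pathlib import Path

def build_context(workspace: Path, request: str) -> str:
    """toy context loader: prepend every memory file to a short request."""
    parts = []
    for f in sorted(workspace.glob("memory/*.md")):
        # each file becomes a labeled section the model sees before the request
        parts.append(f"## {f.name}\n{f.read_text()}")
    parts.append(f"## request\n{request}")
    return "\n\n".join(parts)

# the prompt stays short; the system carries the rest:
# full_input = build_context(Path("."), "write the blog post about X")
```

the point isn’t the code. it’s that the work moved out of the prompt box and into the files the loader reads.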

old way:
“debug this error [paste 50 lines], here’s the relevant code [paste 200 lines], here’s what I tried [paste history]”

new way:
“the API error from yesterday is back”

because the workspace maintains state. the agent remembers what “yesterday’s error” means. the code is already in context. the previous solution is in the daily log.

the skills model

AI skills are basically packaged context.

a Claude skill isn’t just a prompt template. it’s instructions bundled with the files, references, and conventions that make those instructions work: packaged context you can drop into a new environment.

when people share skills on awesome-claude-skills, they’re sharing context systems, not just prompts.

why this matters now

two reasons this shift is happening in 2026:

1. longer context windows

when models had 4k tokens, you couldn’t afford rich context. you had to compress everything into the prompt. now with 200k+ context windows, you can load entire codebases, documentation, and conversation history. the bottleneck moved from context size to context quality.

2. persistent agents

agents that run continuously — like Claude Code in daemon mode — don’t reset between sessions. they can maintain state. that makes context systems practical where they used to be theoretical.

the anti-pattern: prompt hoarding

I know people with notion databases full of saved prompts. hundreds of them. organized by category. tagged. searchable.

they almost never use them.

because by the time you search for the right prompt, customize it for your current situation, and paste in your context, you could have just asked the question naturally and let a good context system handle it.

prompt hoarding is the AI equivalent of bookmarking articles you’ll never read. it feels productive. it’s actually procrastination.

what to build instead

if you’re serious about AI work, invest in:

workspace structure
create files the AI can always reference. style guides, preferences, common patterns. make them the source of truth.

memory systems
dated logs, project notes, decision records. not for you to read — for the AI to reference. I keep daily logs that the agent reads each morning.

feedback loops
when the AI gets something wrong, don’t just correct it in the moment. update the context files so it doesn’t make the same mistake twice.

environment over scripts
stop saving individual prompts. build environments where good prompts emerge naturally.

the paradox

here’s the weird part: good context engineering makes AI interactions feel less impressive.

when you nail a complex prompt and get exactly what you wanted, it feels like mastery. when you have a good context system and things just work, it feels like nothing happened.

but “nothing happened” is the goal. the best tools disappear.

where this breaks down

context engineering isn’t free:

privacy
more context means more data persisted. if you’re putting everything in context, you need to trust where that context lives. this is one reason people are running local models at home.

context rot
memory systems need maintenance. old context can become misleading context. I’ve had the agent reference outdated patterns from weeks ago. you need a strategy for context hygiene.

portability
a well-tuned context system is hard to move. prompts are portable — you can copy-paste them anywhere. context systems are environment-specific. that’s both a feature and a constraint.

the real competition

in 2024, companies competed on model quality. GPT-4 vs Claude vs Gemini.

in 2025, they competed on context window size. 200k! no, 1M! no, infinite!

in 2026, the competition is context tooling.

who makes it easiest to build, maintain, and share context systems? who solves the privacy problem? who handles context rot? who makes context portable?

the coding assistant shakeout is really about this. Cursor vs Windsurf vs Zed — they’re not competing on autocomplete quality. they’re competing on context systems.

what gets weird

once you start thinking in context engineering, you realize most “AI products” are just context wrappers.

notion AI? context wrapper around your workspace.
github copilot? context wrapper around your codebase.
perplexity? context wrapper around web search.

the product isn’t the model. it’s the context system around the model.

and if that’s true, the post-SaaS personal AI future is inevitable. because your best context system is the one that knows everything about your work. not just what’s in one app.


so here’s where we are: a few years into the AI era, and we’re finally asking the right question. not “what’s the perfect prompt?” but “what’s the perfect context?”

the people who figured this out early — like the Claude Code community building workspace patterns, or travis maintaining skill libraries — they’re not prompt wizards. they’re context architects.

and context architecture is going to matter more than model architecture for most of what we actually do with AI.

have you shifted from prompt engineering to context engineering? what does your context system look like? what breaks when you try to share it?


Ray Svitla
stay evolving 🐌