context is infrastructure

the week

token killer ships → rtk reduces Claude Code sessions from 150K tokens to 45K. grep for the LLM era.

hoarding as strategy → Simon Willison: “hoard things you know how to do.” your GitHub is your agent’s training data.

config sync nightmare solved → someone built claude-faf-mcp: one YAML file, four agent formats, bi-directional sync.

40K stars for multi-LLM client → Cherry Studio hits GitHub trending. unified interface for every frontier model.

Obsidian 1.12 ships → major PKM update. the infrastructure layer for personal AI workflows.

invisible characters attack → researchers prove you can hide malicious instructions in text using invisible Unicode. agents execute commands you can’t see.

skeptic tries vibecoding → Max Woolf writes the most honest, detailed analysis of AI agent coding yet. not hype, not takedown, just data.


1. rtk: token killer for Claude Code

what happened: rtk (Rust Token Killer) is a CLI proxy that sits between your shell and your coding agent, filtering command output before it hits context. typical 30-minute Claude Code session: 150,000 tokens → 45,000 tokens. ls output: 4K → 800. git status: 3K → 600. 60-90% reduction. one Rust binary, zero config.

trending on Hacker News and GitHub.

why it matters: when your coworker is an LLM, token costs are your new AWS bill. but this isn’t about money — it’s about context window exhaustion. every noisy command dumps garbage into your agent’s memory. rtk is grep for the LLM era: a unix philosophy tool that does one thing well. compress noise before it becomes context debt.

this is infrastructure thinking. not “make the agent smarter” but “make the environment less noisy.”
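the core move is simple enough to sketch. this is not rtk's actual implementation (rtk is a Rust binary with smarter per-command filters) — just a minimal illustration of the idea: run the command, truncate the noise before the agent ever sees it.

```python
# minimal sketch of the rtk idea (illustrative, not rtk's real logic):
# run a shell command, cap the output before it becomes agent context.
import subprocess

def run_compressed(cmd: str, max_lines: int = 20) -> str:
    """Run a command and elide output beyond max_lines."""
    out = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout
    lines = out.splitlines()
    if len(lines) <= max_lines:
        return out
    kept = "\n".join(lines[:max_lines])
    return kept + f"\n… [{len(lines) - max_lines} more lines elided]"
```

a real filter would summarize per command type (git status → changed-file counts, ls → names only) instead of blind truncation, but the shape is the same: lossy compression at the shell boundary.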

signal: rtk on GitHub


2. “hoard things you know how to do” — new agentic pattern

what happened: Simon Willison published a new agentic engineering pattern: the more examples of working code you have (blog posts, TILs, GitHub repos, proof-of-concepts), the better you get at spotting what’s possible. LLMs amplify this — your hoard becomes your agent’s training data.

Simon hoards aggressively: tools.simonwillison.net is hundreds of single-file HTML tools. simonw/research is agent-generated research artifacts.

why it matters: the best agentic engineers aren’t the ones with perfect prompts. they’re the ones with the deepest collection of “I’ve seen this done before.” your GitHub isn’t just a portfolio. it’s your second brain. if you’re working with agents, it’s their context too.

hoarding isn’t clutter. it’s infrastructure.

signal: Simon’s guide


3. multi-AI config sync: the problem nobody talks about

what happened: someone on Hacker News finally snapped. if you use multiple coding agents (Claude, Codex, Cursor, Gemini), you have four config files: CLAUDE.md, AGENTS.md, .cursorrules, GEMINI.md. four files saying the same thing. four chances to get out of sync. you update one, forget the others. Cursor hallucinates your project structure.

they built claude-faf-mcp: an MCP server that reads one YAML file (project.faf) and generates all four formats. bi-directional sync. 61 tools, 351 tests. works natively in Claude Desktop. the .faf format is IANA-registered.

why it matters: this is dotfiles for the multi-AI era. the problem isn’t “configure one agent” anymore. it’s “keep four agents in sync without losing my mind.” context drift kills productivity faster than bad code. when your agents operate on stale config, they suggest deprecated patterns and import nonexistent files.
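the one-source-of-truth pattern is easy to sketch. field names below are hypothetical (the real .faf schema is different and richer) — the point is just that all four files are rendered from one dict, so they can’t drift:

```python
# sketch of the single-source config idea (hypothetical schema,
# not the real .faf format or claude-faf-mcp's implementation):
from pathlib import Path

PROJECT = {
    "name": "demo-app",
    "stack": "Python 3.12, FastAPI",
    "rules": ["run tests before committing", "no new deps without approval"],
}

# the four filenames different agents actually look for
TARGETS = ["CLAUDE.md", "AGENTS.md", ".cursorrules", "GEMINI.md"]

def render(project: dict) -> str:
    """Render one canonical context document from the project dict."""
    rules = "\n".join(f"- {r}" for r in project["rules"])
    return f"# {project['name']}\n\nstack: {project['stack']}\n\n## rules\n{rules}\n"

def sync(project: dict, out_dir: Path) -> None:
    # every agent file gets the same rendered body — drift is impossible
    body = render(project)
    for name in TARGETS:
        (out_dir / name).write_text(body)
```

bi-directional sync (editing CLAUDE.md and flowing changes back into the YAML) is the hard part the MCP server actually solves; this sketch only covers the easy direction.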

signal: HN discussion


4. Cherry Studio: 40K stars for the multi-LLM client

what happened: Cherry Studio is a desktop client (Windows, Mac, Linux) that supports every major LLM provider. smart chat, autonomous agents, 300+ assistants. unified interface for Claude, GPT, Gemini, local models. 40,373 stars on GitHub, trending this week.

why it matters: the personal AI OS doesn’t lock you into one model. when Anthropic ships a new model, you don’t rewrite your workflow — you swap the backend. Cherry Studio is the multi-tenant shell for your agents. infrastructure that outlives any single provider.

signal: Cherry Studio on GitHub


5. Obsidian 1.12: PKM infrastructure update

what happened: Obsidian 1.12 shipped to everyone. major update to the markdown-based knowledge base that powers thousands of personal AI setups. 987 upvotes, 163 comments on r/ObsidianMD. community treating it like infrastructure.

why it matters: if “your life is a repo,” Obsidian is the filesystem. not just a note-taking app — it’s the substrate for agent memory, context management, and long-term knowledge graphs. every Obsidian update is an update to personal AI infrastructure.

when the tools people use to store context get better, the agents that read that context get smarter.

signal: reddit discussion


6. invisible characters: the attack you can’t see

what happened: researchers tested 5 models across 8,000+ cases. result: you can embed invisible Unicode characters in text that trick AI agents into following hidden instructions. someone puts an invisible payload in a GitHub issue. your agent reads it. you don’t see it. the agent executes hidden commands.

this is a supply chain attack you can’t audit with cat.

why it matters: if your agent has file access, this is a backdoor you can’t close by reading the text. the more agents integrate into workflows (reading emails, processing docs, scraping web pages), the bigger the attack surface. security tooling for agents is still playing catch-up.
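you can at least scan for the obvious cases. a minimal sketch (my example, not from the research): Unicode “format” characters (category Cf) — zero-width spaces, joiners, and the tag block U+E0000–U+E007F that can encode invisible ASCII — render as nothing but survive copy-paste into agent context. flagging them before text reaches an agent catches the crude payloads, though it’s not a complete defense:

```python
# sketch: flag invisible Unicode format characters in untrusted text
# before feeding it to an agent. category "Cf" covers zero-width chars,
# joiners, and the U+E0000 tag block — all render as nothing.
import unicodedata

def find_invisible(text: str) -> list[tuple[int, str]]:
    """Return (index, codepoint) pairs for format characters in text."""
    return [
        (i, f"U+{ord(ch):04X}")
        for i, ch in enumerate(text)
        if unicodedata.category(ch) == "Cf"
    ]
```

this won’t catch every smuggling trick (homoglyphs, bidi overrides in category Cf are caught, but lookalike letters aren’t), so treat it as a tripwire, not a firewall.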

context isn’t just expensive. it’s exploitable.

signal: reddit discussion


7. skeptic tries vibecoding (in excessive detail)

what happened: Max Woolf (minimaxir), data scientist and LLM skeptic, spent months testing AI agent coding and wrote the most honest, detailed analysis yet. not hype. not takedown. just: here’s what worked, what didn’t, where the line is.

he tested agents on a real project (his gemimg Python package), tracked failures, documented workarounds. conclusion: agents are useful, but not for the reasons people think. they’re good at boring work you already understand, bad at creative problem-solving.

why it matters: everyone’s either selling agents or declaring them useless. this is the rare third option: “I tried it, here’s the data.” if you’re still on the fence about vibecoding, read this first.

signal: minimaxir blog post


next: read the full article on context infrastructure