SaaS is Cooked: Why Explicit Context Wins in the AI Era
░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
░                                       ░
░   ┌───────────────────────────────┐   ░
░   │           UI LAYER            │←──░── expendable
░   └───────────────────────────────┘   ░
░         ↓         ↓         ↓         ░
░   ╔═══════════════════════════════╗   ░
░   ║         YOUR CONTEXT          ║←──░── value here
░   ╚═══════════════════════════════╝   ░
░                                       ░
░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
“we’re cooked” — a PM’s confession
a senior product manager at a major system-of-record SaaS company posted an anonymous confession on HN yesterday. 20 years in enterprise software. the message: “we’re cooked.”
the argument is structural, not about whether AI can one-shot your stack. it’s about the supply curve. AI coding is distorting the market such that:
→ previously healthy SaaS margins are shrinking
→ established moats have evaporated
→ investors don’t want to fund expansion in a dying market
the UI layer — the dashboards, workflows, buttons — is becoming the expendable middle. agents talk directly to data. the question isn’t whether this happens, but how fast.
the self.md take: if enterprise SaaS collapses toward commoditized agents, differentiation moves to your agent knowing your context. systems of record become systems of your record.
→ source
LLM memory is opaque — .md files are better
a developer on HN described why they abandoned built-in AI memory features for explicit .md files. their reasons:
→ full visibility into what’s in context
→ no mystery recalls from weeks ago
→ predictable token usage
→ easier debugging when behavior drifts
the downside is manual maintenance. but the tradeoff is reliability over magic.
the self.md take: this is the exact philosophy we’ve been building on. explicit context beats magic memory. files you own, not features you rent. your second brain should be inspectable.
→ source
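the pattern is small enough to sketch. a minimal version, assuming hypothetical file names (USER.md etc. are illustrative, not a standard): concatenate the files you chose, and everything the model sees is right there in one string.

```python
from pathlib import Path

# hypothetical context files -- names are illustrative, not a standard
CONTEXT_FILES = ["USER.md", "PROJECT.md", "DECISIONS.md"]

def build_context(root: str = ".") -> str:
    """Concatenate explicit .md context files into one prompt preamble.

    Everything that reaches the model is visible here: no hidden
    recalls from weeks ago, and token usage is predictable
    (crudely, about chars / 4).
    """
    parts = []
    for name in CONTEXT_FILES:
        path = Path(root) / name
        if path.exists():
            parts.append(f"## {name}\n\n{path.read_text()}")
    context = "\n\n".join(parts)
    est_tokens = len(context) // 4  # rough estimate: ~4 chars per token
    print(f"context: {len(parts)} files, ~{est_tokens} tokens")
    return context
```

the manual-maintenance cost lives in those files too: when behavior drifts, you debug by reading them, not by guessing what a memory feature recalled.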
null0 CLI — “AI clone” for personal context
a new open-source project launched: null0, a CLI that stores your personal context and injects it into Claude, Codex, and Gemini sessions.
the pitch: “I got tired of re-explaining myself to every AI session. my preferences, my tech stack, how I think, how I write — gone every time the context window resets.”
their goal: “create an actual AI clone that predicts your intent.”
the self.md take: direct competitor, same problem space. the interesting difference is framing. “AI clone” implies mimicry. “AI OS” implies extensibility. a clone is a portrait. an OS is a toolbox.
→ source
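null0's actual internals aren't documented here, but the general pattern it describes is easy to sketch: prepend a stored context file to every prompt before it reaches the model, so the session never starts cold. the file path and wrapper below are illustrative assumptions, not null0's API.

```python
from pathlib import Path

# illustrative pattern, NOT null0's real implementation:
# prepend a stored personal-context file to each prompt.
DEFAULT_CONTEXT = Path.home() / ".context.md"  # hypothetical location

def with_context(prompt: str, context_path: Path = DEFAULT_CONTEXT) -> str:
    """Return the prompt with stored personal context prepended."""
    if context_path.exists():
        return f"{context_path.read_text().strip()}\n\n---\n\n{prompt}"
    return prompt  # no stored context yet: pass the prompt through
```

from there it's one `subprocess` call to hand the wrapped prompt to whatever CLI you use. the context window still resets; the file doesn't.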
Monty — secure Python for AI agents
the Pydantic team is building Monty: a minimal, sandboxed Python interpreter specifically for AI code execution.
the problem: when an LLM runs code, you need to trust it won’t destroy your system. current solutions are either too permissive (risky) or too locked down (useless).
Monty aims for the middle: enough capability to be useful, strict enough sandboxing to be safe.
the self.md take: as personal AI becomes more agentic, sandbox execution becomes critical infrastructure. agent guardrails matter. watch for integration patterns as this matures.
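to make the problem concrete, here's a toy illustration of pre-execution vetting: parse the agent's code and reject imports, dunder access, and a few dangerous builtins before running it. to be clear, this is NOT Monty's design and NOT a real security boundary — in-process CPython sandboxes are famously escapable, which is exactly why a separate minimal interpreter is being built.

```python
import ast

# toy pre-execution check -- illustrative only, not a security boundary
BLOCKED_NAMES = {"open", "exec", "eval", "__import__"}

def check(source: str) -> None:
    """Reject code containing imports, dunder access, or blocked names."""
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.Import, ast.ImportFrom)):
            raise PermissionError("imports are blocked")
        if isinstance(node, ast.Attribute) and node.attr.startswith("__"):
            raise PermissionError("dunder access is blocked")
        if isinstance(node, ast.Name) and node.id in BLOCKED_NAMES:
            raise PermissionError(f"{node.id} is blocked")

def run_checked(source: str, env=None):
    """Vet the code, then exec it with builtins stripped from scope."""
    check(source)
    scope = {"__builtins__": {}}  # no file IO, no imports, no eval
    scope.update(env or {})
    exec(compile(source, "<agent>", "exec"), scope)
    return scope
```

the "too permissive vs too locked down" tension shows up immediately: every name you unblock is capability, every name you block is safety. Monty's bet is that a purpose-built interpreter can draw that line better than an allowlist bolted onto CPython.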
agent-smith — auto-generate AGENTS.md
a CLI tool that scans codebases and generates AGENTS.md files — the emerging standard for AI coding assistants.
the pain point: “every. single. project. I kept manually writing AGENTS.md files to tell AI assistants about my components, API routes, and patterns.”
the tool scans your code and generates the context file automatically.
the self.md take: AGENTS.md is becoming the lingua franca of AI-to-code communication. worth considering how personal context (USER.md, IDENTITY.md) relates to project context (AGENTS.md). possible integration point.
→ source
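the core move is mechanical enough to sketch. a minimal version of the idea — not agent-smith's actual output format, and the stack markers below are an illustrative subset: look for well-known manifest files, list the top-level layout, write AGENTS.md.

```python
from pathlib import Path

# illustrative subset of stack markers, not agent-smith's detection logic
MARKERS = {
    "package.json": "Node.js project",
    "pyproject.toml": "Python project",
    "Cargo.toml": "Rust project",
    "go.mod": "Go project",
}

def generate_agents_md(root: str = ".") -> str:
    """Scan a repo root and write a minimal AGENTS.md describing it."""
    repo = Path(root)
    lines = ["# AGENTS.md", "", "## stack"]
    for marker, meaning in MARKERS.items():
        if (repo / marker).exists():
            lines.append(f"- {meaning} ({marker})")
    lines += ["", "## layout"]
    for entry in sorted(repo.iterdir()):
        if entry.is_dir() and not entry.name.startswith("."):
            lines.append(f"- {entry.name}/")
    content = "\n".join(lines) + "\n"
    (repo / "AGENTS.md").write_text(content)
    return content
```

the real tool goes further (components, API routes, patterns), but the shape is the same: the context file is derived from the code, so it can be regenerated instead of hand-maintained.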
emerging patterns
- explicit over implicit — the shift toward files-you-own vs cloud-magic-memory
- security for agents — sandboxing, permissions, trust boundaries — now a first-class concern
- SaaS middle layer squeeze — UI/workflow layer getting compressed as agents talk to data
- AGENTS.md standardization — the protocol for AI ↔ codebase communication is solidifying
- personal AI clones vs personal AI OS — competing framings for the same problem space
signals collected from HN, Reddit, Substack — 2026-02-07