╔═══════════════════════════════════════╗
║ the approval problem ║
║ ║
║ AI tuned for feelings → ║
║ ░░░░░░░░░░░░░░░░░ approval loop ║
║ ║
║ AI you own → ║
║ ████████████████ answer you wanted ║
║ ║
║ one you configure. ║
║ one configures you. ║
╚═══════════════════════════════════════╝
the sycophancy collapse
5,000 people upvoted “I actually hate ChatGPT now” this week. two more posts hit top-of-subreddit with the same complaint. the model keeps saying “breathe” and “take a pause” and “that’s huge.” people are unsubscribing. they’re skimming instead of reading.
this isn’t a capability story. it’s a behavioral one. RLHF — reinforcement learning from human feedback — trained the model to maximize approval ratings. warmth scores higher than accuracy. validation scores higher than correction. so the model learned to manage emotional states instead of answering questions.
self.md angle: if you can’t configure your AI’s behavior, it gets configured for the median of the engagement curve. the median is emotionally fragile and wants to feel right. owning your AI’s behavior is a basic defense against being trained instead of served.
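the reward dynamic is easy to see in a toy. this is an illustration of the incentive, not a real RLHF pipeline — the traits, weights, and scores below are all made up — but it shows how a reward model that weights warmth above accuracy picks the validating answer every time:

```python
# toy illustration of the approval loop: a reward model that weights
# warmth above accuracy prefers validation over correction.
# all numbers and trait scores are illustrative, not from any real model.

def reward(response, w_warmth=0.7, w_accuracy=0.3):
    # hypothetical scoring: each candidate carries pre-scored traits
    return w_warmth * response["warmth"] + w_accuracy * response["accuracy"]

candidates = [
    {"text": "that's huge. breathe. you're doing great.", "warmth": 1.0, "accuracy": 0.1},
    {"text": "this is wrong; here's the correction.",     "warmth": 0.2, "accuracy": 1.0},
]

best = max(candidates, key=reward)
print(best["text"])  # the validating answer wins under this weighting
```

flip the weights and the correction wins — which is the point: the behavior is a configuration, and right now someone else holds the config.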
EchoVault — local persistent memory for coding agents
muhammadraza.me + github.com/mraza007/echovault
the agent that debugged your auth flow for 45 minutes yesterday? gone. the decision to use JWT over sessions? gone. every session starts blank.
someone built EchoVault: local SQLite + Markdown memory for coding agents. no cloud, no API keys. Claude Code, Cursor, and Codex all read the same vault. session amnesia solved with a SQLite file and some Markdown.
self.md angle: your personal AI OS needs persistent state, not just a context window. EchoVault is simple and local — exactly how this should be solved.
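the whole idea fits in a few lines. this is a hypothetical vault layout, not EchoVault's actual schema — but it shows why "SQLite for queryable recall, Markdown for anything-can-read-it" is enough:

```python
import pathlib
import sqlite3
import tempfile
import time

# minimal sketch of a local agent-memory vault: SQLite for structured,
# queryable entries; Markdown mirrors that any agent can just read.
# hypothetical layout, not EchoVault's actual schema. a real tool would
# use a project directory instead of a temp dir.
VAULT = pathlib.Path(tempfile.mkdtemp())

db = sqlite3.connect(VAULT / "memory.db")
db.execute("CREATE TABLE IF NOT EXISTS decisions (ts REAL, topic TEXT, note TEXT)")

def remember(topic, note):
    db.execute("INSERT INTO decisions VALUES (?, ?, ?)", (time.time(), topic, note))
    db.commit()
    # mirror to Markdown so any agent pointed at the vault sees the same state
    (VAULT / f"{topic}.md").write_text(f"# {topic}\n\n- {note}\n")

def recall(topic):
    row = db.execute(
        "SELECT note FROM decisions WHERE topic = ? ORDER BY ts DESC", (topic,)
    ).fetchone()
    return row[0] if row else None

remember("auth", "chose JWT over server-side sessions; stateless API, mobile clients")
print(recall("auth"))
```

no cloud, no API keys, nothing a new session can't re-read.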
heretic — automatic censorship removal for LLMs
github.com/p-e-w/heretic → nearly 1,000 stars on GitHub in 24h
“fully automatic censorship removal for language models.” you can disagree with what it does. hard to disagree with what it signals: the demand for AI that answers to you — not a content policy — is massive, real, and growing.
self.md angle: the personal AI OS is partly about this. your model, your behavioral rules. heretic is the black-market version of what every personal OS should support natively.
$30 radio, Ukraine, and what your AI does when everything fails
someone in Ukraine plugged a $30 SDR radio into their Mac mini. built a local AI setup that keeps running when the power grid fails, the internet dies, and cell towers go dark. the smart home still works. voice messages go out over radio.
this is a use case nobody had in a product roadmap. it’s also the clearest possible statement of what “owning your AI setup” actually means in the extreme.
self.md angle: resilience as a design principle. most AI setups assume continuous infrastructure. the Ukraine case is extreme, but it clarifies the design question: what does your personal OS do when the cloud goes away? if the answer is “it stops” — you don’t own it.
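the design question has a shape. a sketch, not a product — `cloud_model` and `local_model` are hypothetical stand-ins — of the one structural decision the Ukraine setup gets right: the cloud is a preference, not a dependency.

```python
# sketch of resilience as a design principle: try the cloud model,
# fall back to a local one when the uplink is gone. both functions
# below are hypothetical stand-ins, and the outage is simulated.

def cloud_model(prompt):
    raise ConnectionError("grid down, uplink gone")  # simulate the outage

def local_model(prompt):
    return f"[local] {prompt}"  # a small on-device model would answer here

def ask(prompt):
    try:
        return cloud_model(prompt)
    except (ConnectionError, TimeoutError):
        return local_model(prompt)  # the setup keeps working offline

print(ask("turn on the hallway light"))  # → [local] turn on the hallway light
```

if your setup has no `except` branch, you don't own it — you rent it.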
the AI abundance paradox — joy edition
863 upvotes on r/ClaudeAI: “getting anything I ever wanted stripped the joy away from me.” 414 upvotes on r/selfhosted: “why build anything anymore?”
same shape: the AI does the project perfectly. and now the person feels empty. friction was, apparently, most of the feeling.
self.md angle: second-order effect of the personal AI OS. when your AI handles everything, what do you handle? the answer isn’t to use AI less — it’s to consciously decide what you keep the hard way, and why. ownership includes deciding what not to automate.
context engineering is the next thing
telemetryagent.dev/blog/future-of-context-engineering
a February 2026 essay maps the pattern: prompt engineering → absorbed by reasoning models → context engineering (AGENTS.md, skills, MCPs) → will this be absorbed too?
the question is which limitations yield to scaling vs which require architectural innovation. the structural decisions survive. the prompting rituals don’t.
self.md angle: context engineering is the current language of the personal AI OS. but if history holds, the next generation of models will absorb most of it. what you’re building now is the vocabulary for a conversation that will eventually happen automatically.
vett.sh — security registry for AI agent skills
64,000 skills on Vercel’s skills.sh. Cursor, Claude Code, Windsurf install them with no verification. one unverified skill can give an agent arbitrary code execution on your machine. vett.sh scans, signs, and verifies agent skills before install.
self.md angle: the skills ecosystem just got a package management problem. as skills become the new apps, who vouches for them matters. vett.sh is early — but it’s asking the right question at the right time.
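the core verification step is old, boring, and well understood — which is reassuring. this is generic hash pinning, not vett.sh's actual protocol; `REGISTRY` is a hypothetical map of skill name to sha256 published by a trusted scanner:

```python
import hashlib

# sketch of the verification a skills registry enables: pin the hash of
# the scanned, signed artifact and refuse anything that doesn't match.
# generic hash pinning, not vett.sh's actual protocol. REGISTRY is a
# hypothetical trusted mapping; the pinned value is sha256 of b"".

REGISTRY = {
    "git-helper": "e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855",
}

def verify_skill(name: str, payload: bytes) -> bool:
    digest = hashlib.sha256(payload).hexdigest()
    return REGISTRY.get(name) == digest

print(verify_skill("git-helper", b""))           # True: matches the pinned hash
print(verify_skill("git-helper", b"evil code"))  # False: tampered payload
```

every package ecosystem learned this the hard way. 64,000 unverified skills with arbitrary code execution is the "before" picture.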
edition: the approval problem — 2026-02-19 — 7 signals