MCP Security: Why Nobody Audits AI Agent Permissions
┌─────────┬─────────┐
│  trust  │ control │
├─────────┼─────────┤
│    ?    │    ✓    │
│  magic  │  files  │
│  agent  │  tools  │
└─────────┴─────────┘
personal AI isn’t a trust problem — it’s a control problem. this week’s signals show developers choosing explicit control over magic convenience.
MCP Server Security: The Audit Nobody Does
someone built a risk analysis database for every public MCP server. turns out the thing that makes agents powerful — access to real tools — is also what makes them dangerous.
nobody’s reading the code before giving an MCP server filesystem or database access. we trust the vibes. the gap: there’s no good way to sandbox or audit what your AI can actually do, so you either trust everything or trust nothing.
strength: ■■■■□
why it matters: personal AI needs a local-first security model. you can’t rely on centralized trust when the agent has root access. see also: the sandboxing security guide.
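part of that audit gap can be narrowed mechanically. a minimal sketch, assuming the `mcpServers` config shape used by Claude Desktop; the server entries and the risky-path list are hypothetical examples, not a real install:

```python
import json

# example config in the mcpServers shape; these entries are hypothetical
CONFIG = json.loads("""
{
  "mcpServers": {
    "filesystem": {"command": "npx",
                   "args": ["-y", "@modelcontextprotocol/server-filesystem", "/"]},
    "sqlite":     {"command": "uvx",
                   "args": ["mcp-server-sqlite", "--db-path", "./app.db"]}
  }
}
""")

# launch args that hand the agent near-root reach
RISKY_ARGS = {"/", "/home", "~"}

def audit(servers: dict) -> list[str]:
    """Flag servers whose launch args expose overly broad paths."""
    findings = []
    for name, spec in servers.items():
        for arg in spec.get("args", []):
            if arg in RISKY_ARGS:
                findings.append(f"{name}: exposes {arg!r}")
    return findings

print(audit(CONFIG["mcpServers"]))  # → ["filesystem: exposes '/'"]
```

it won’t catch a malicious server, but it does make the “what can my agent touch” question answerable before install instead of after.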
Markdown Files Beat AI Memory Features
developers are ditching built-in “memory” features in Claude and ChatGPT. instead: .md files they explicitly reference.
quote from HN: “full visibility into what’s in context. no mystery recalls from weeks ago. easier debugging when behavior drifts.”
the pattern: AGENTS.md, project notes, manual context management. more work, but actually reliable. people want control over what the AI knows, not magic.
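the manual pattern is simple enough to sketch. a hedged example (the file names are hypothetical): a context builder where everything the model sees is listed explicitly, so there are no hidden recalls to debug:

```python
from pathlib import Path

def build_context(paths: list[str]) -> str:
    """Concatenate explicitly referenced .md files into one prompt preamble.

    Every byte of context comes from this list -- nothing is recalled
    invisibly, and a missing file is surfaced rather than skipped.
    """
    parts = []
    for p in paths:
        f = Path(p)
        body = f.read_text() if f.exists() else "(missing)"
        parts.append(f"## {p}\n{body}")
    return "\n\n".join(parts)

# hypothetical file list; swap in your own project notes
context = build_context(["AGENTS.md", "notes/project.md"])
```

the design choice is the point: surfacing “(missing)” instead of silently dropping a file is exactly the transparency the built-in memory features lack.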
strength: ■■■□□
why it matters: reveals core UX problem. transparent context beats opaque memory. this is why portable identity matters.
Artifact Keeper: Self-Hosted Package Registry (MIT)
dev built an open-source alternative to Artifactory/Nexus in Rust. supports 45+ package formats. security scanning, SSO, replication — all MIT licensed. no enterprise tier, no feature gates, no surprise invoices.
built by someone who “keeps getting pulled into DevOps” and decided to automate the pain away.
strength: ■■■□□
why it matters: pattern emerging — personal infrastructure built by individuals, not bought from SaaS. same energy as self.md: own your stack.
AGENTS.md Maturity: Three Stages of AI Workflow
someone mapped three stages of AI workflow evolution:
- stage 1: static checklist (AGENTS.md with rules)
- stage 2: rules generated from failures (each bug → new rule)
- stage 3: AI proposes its own rules based on patterns (doesn’t exist yet)
quote: “stage 1 rots. nobody updates the checklist. stage 2 scales but feels like teaching a junior dev who never graduates. stage 3 is where we’re headed but isn’t here.”
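stage 2 is the automatable part. a sketch, with a hypothetical `record_failure_rule` helper that turns each bug into a dated rule appended to AGENTS.md, so the checklist can’t rot the way a static one does:

```python
from datetime import date
from pathlib import Path

def record_failure_rule(agents_md: Path, bug: str, rule: str) -> None:
    """Stage 2: every failure appends a dated rule to the checklist."""
    entry = (f"\n- ({date.today().isoformat()}) after: {bug}\n"
             f"  rule: {rule}\n")
    with agents_md.open("a") as f:
        f.write(entry)

# usage (hypothetical bug/rule pair):
# record_failure_rule(Path("AGENTS.md"),
#                     "agent deleted untracked files during cleanup",
#                     "never run destructive commands without confirmation")
```

wiring this into a post-mortem or CI-failure hook is the “teaching a junior dev” loop the quote describes; stage 3 would be the model proposing the `rule` argument itself.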
strength: ■■■■□
why it matters: reveals the meta-problem — we’re still figuring out how to teach AI systems. self.md needs a learning layer.
Vibe Coding Backlash: From Hype to Skepticism
interesting pattern in HN comments: people either love AI coding or think it produces unmaintainable spaghetti. the term “vibe coding” itself is becoming pejorative.
those who succeed: explicit constraints (AGENTS.md, test harnesses, incremental tasks). those who fail: prompt and pray.
strength: ■■□□□
why it matters: signal that the discourse shifted from “does it work” to “how to make it reliable.” maturity phase. tools that solve reliability win.