custody layer

self.md radar — 2026-04-24
capability keeps shipping faster than its custody layer.
three fronts where custody got sharper today: OpenAI kept GPT-5.5 behind paid surfaces before the API, Washington reframed distillation as strategic theft, and a compromised Bitwarden CLI went straight for AI assistant and MCP secrets.
1. GPT-5.5 launches on the subscription side first
sources:
what happened: OpenAI shipped GPT-5.5 into Codex and rolled it out to paid ChatGPT subscribers while explicitly holding back API access for additional safety and security work. Lenny’s hands-on puts GPT-5.5 Pro at $180 per million output tokens and says it has already reshaped his Codex workflow. The interesting delta is not the benchmark — it is the order of release.
why this matters: When the frontier lands first in logged-in product surfaces and only later in raw API, the custody layer starts dictating who gets what capability. That is a different distribution model than the last two years trained anyone to expect.
2. Washington names distillation as model theft
sources:
what happened: The White House OSTP memo puts adversarial distillation into national-security language and commits to deeper private-sector engagement to counter what it describes as industrial-scale foreign campaigns to copy U.S. models. The framing is explicit: distillation as theft, not as research method. Agencies are told to treat it as an active front.
why this matters: This pulls distillation out of the forum-argument register and into policy surface area, where export controls, disclosure rules, and procurement language actually move. Model access terms are about to start reading like dual-use technology law.
3. Bitwarden’s compromised CLI treated AI configs like first-class loot
sources:
what happened: The official @bitwarden/cli@2026.4.0 npm release was backdoored with a staged credential stealer. StepSecurity’s teardown shows the payload pulling SSH keys, cloud creds, GitHub tokens, and shell history, then explicitly probing AI assistant configs — Claude Code, Cursor, Kiro, Codex CLI, Aider, ~/.claude.json, and MCP configs. GitHub tokens were the pivot into CI/CD.
why this matters: Attackers now treat AI coding tool configs as standard loot alongside SSH and cloud creds, which means every local agent install is part of the blast radius of any dev-tool supply chain hit. MCP config hygiene just became production security.
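a minimal sketch of what "MCP config hygiene" can look like in practice: scan the kinds of AI assistant config files the stealer reportedly probed for plaintext token-like strings. Of the paths below, only ~/.claude.json comes from the teardown; the others are illustrative guesses, not confirmed locations for each tool, and the regexes are rough heuristics, not a real secret scanner.

```python
import re
from pathlib import Path

# Candidate config files to audit. Hypothetical locations except
# ~/.claude.json, which the StepSecurity teardown names directly.
CANDIDATES = [
    "~/.claude.json",
    "~/.cursor/mcp.json",
    "~/.codex/config.toml",
    "~/.aider.conf.yml",
]

# Rough patterns for common credential shapes: GitHub-style tokens
# and generic "key/token/secret = value" pairs. Heuristic only.
TOKEN_PATTERNS = [
    re.compile(r"gh[pousr]_[A-Za-z0-9]{20,}"),
    re.compile(r"(?i)(api[_-]?key|token|secret)\W{0,3}[\"']?[A-Za-z0-9_\-]{16,}"),
]

def scan(paths=CANDIDATES):
    """Return (path, pattern) pairs where a plaintext secret shape was found."""
    hits = []
    for raw in paths:
        p = Path(raw).expanduser()
        if not p.is_file():
            continue
        text = p.read_text(errors="ignore")
        for pat in TOKEN_PATTERNS:
            if pat.search(text):
                hits.append((str(p), pat.pattern))
    return hits

if __name__ == "__main__":
    for path, pattern in scan():
        print(f"possible plaintext secret in {path} (matched {pattern})")
```

anything this flags is a candidate for moving into an OS keychain or environment injection at launch time rather than sitting on disk where the next compromised dev tool can read it.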
supporting links
- DeepSeek V4 — the open-weight counterprogram to GPT-5.5’s gated rollout
- ml-intern — model-building itself is getting packaged as an agent surface
- Shannon — autonomous pentesting is part of the same trust-layer squeeze
- skills — open distribution for agent capabilities keeps solidifying
left on the table
- Qwen 3.6 27B agency gains — same model family and same open-model lane as yesterday’s published edition
- oh-my-pi — real repo, but the terminal-agent / tool-harness story is already crowded in the last week of published pages
- Obsidian gaming backlog — same capture / PKM bucket as recent coverage, too small for lead treatment
- Claude Code post-mortem thread — useful housekeeping, but not enough delta beyond the existing Claude quality-report cycle