runtime surfaces

self.md radar — 2026-04-23
Local AI got more practical on three surfaces at once today: code, voice, and the notes layer that feeds both. Qwen shipped a dense 27B that is outperforming its own giant on coding, Qwen TTS is being reported as real-time and expressive on personal hardware, and Obsidian’s Web Clipper quietly improved how saved pages actually survive. The through-line is a personal stack that owns more of its own runtime.
The lead is the Qwen 3.6 27B release, the cleanest hard delta of the day. Voice follows because local capability is no longer a text-only story. The Clipper update closes it out: capture quality is the unglamorous input that decides whether any of the above is worth feeding.
1. Qwen 3.6 27B, dense and punching up
sources:
what happened: Qwen released a 27B dense model today. In its release materials and early community testing, it is reportedly beating Qwen’s own 397B-class sibling on coding benchmarks, which is the part driving the local-ops reaction. A 27B that lands on a single serious GPU and holds its own against a house-sized MoE is a real shape change for what fits on a workstation.
why this matters: Dense reliability on hardware people actually own counts for more than parameter theater. Local coding assistants just got a credible default.
2. Qwen TTS, running locally and sounding like it means it
sources:
what happened: A community writeup today shows Qwen TTS running in real time on local hardware, with expressive output that is unusual for an open voice model. The notable shift is not the raw audio score but that open voice is crossing the threshold from cloud-only demo to something you can actually wire into a personal stack.
why this matters: Local-first stops being text-only. The stack is learning to speak without renting somebody else’s runtime.
3. Web Clipper keeps the source cleaner
sources:
what happened: Obsidian’s Web Clipper now lets you manage highlights directly and stay in Reader mode when you follow links out of a clipped page. Small change, but it fixes the two places clipped context usually rots: cluttered highlight state and losing the readable view on the next hop.
why this matters: Personal AI systems are downstream of capture quality. Cleaner intake beats re-explaining the same page to a model twice a week.
supporting links
- Open WebUI — local control plane energy keeps rising.
- claude-context — codebase context is being packaged as something portable across agents.
- Qwen/Qwen3.6-27B — the practical artifact behind the release thread.
left on the table
- mnfst/manifest — exact same repo already ran on 2026-04-20, so this is a hard dedup reject no matter how the thread is trending.
- Claude Pro no longer lists Claude Code as included, and Anthropic bans organizations without warning — both real deltas, but this week is already saturated with provider drama, and pricing led yesterday's edition.
- Mozilla used Anthropic’s Mythos to find and fix 271 Firefox bugs — strong number, already anchored yesterday’s edition.
- The new ChatGPT images model as the new photorealism standard — the same image-generation wave already carried yesterday's edition, not worth a second pass.