coding gets new control surfaces

self.md radar — 2026-04-17

today looked less like smarter chat and more like people renegotiating the interfaces around coding work: open models you can actually run locally, proxies that turn hostile software into inspectable APIs, and maintainers drawing a harder line around machine-written contributions.

lead with the open-weight model move that gives self-hosted operators a real option, then the interception layer reframing agent reliability, then the governance backlash inside open source.

1. qwen drops a coding model the local crowd can actually use

sources:

what happened: Qwen officially released Qwen3.6-35B-A3B on April 15, a sparse MoE with 35B total parameters and 3B active per token, under open weights. it ships with both thinking and non-thinking modes and targets agentic coding, posting benchmark numbers like 73.4 on SWE-bench Verified and 51.5 on Terminal-Bench 2.0. weights are on Hugging Face, ready to download.

why this matters: the interesting part isn't leaderboard placement. it's that local and self-hosted operators now have a plausible coding-capable model, which lets them route around closed-provider pricing changes, rate limits, and access gates without waiting for permission.
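a minimal sketch of what "routing around" a closed provider looks like in practice: prefer a self-hosted OpenAI-compatible endpoint (e.g. a local server fronting downloaded open weights) and fall back to a hosted one only when local is down. the endpoint URLs and model names here are illustrative assumptions, not anything from the Qwen release.

```python
from dataclasses import dataclass

@dataclass
class Endpoint:
    name: str
    base_url: str
    model: str

# hypothetical endpoints: a local OpenAI-compatible server serving the
# downloaded open weights, and a hosted closed-provider fallback.
LOCAL = Endpoint("local", "http://localhost:8000/v1", "qwen-coder-local")
HOSTED = Endpoint("hosted", "https://api.example.com/v1", "hosted-coder")

def pick_endpoint(local_healthy: bool, hosted_rate_limited: bool) -> Endpoint:
    """Prefer the self-hosted model whenever it is up; only fall back to
    the hosted provider when local is down and we aren't rate-limited."""
    if local_healthy:
        return LOCAL
    if not hosted_rate_limited:
        return HOSTED
    raise RuntimeError("no usable endpoint")
```

the point of the sketch is the priority order: the hosted provider becomes the fallback, not the default, which is exactly the leverage an open-weight release buys.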

2. kampala turns arbitrary apps into an API surface

sources:

what happened: kampala launched on HN as a Mac app that intercepts traffic from websites, mobile apps, and desktop apps to reverse-engineer their request flows. it claims full traffic interception, auth chain tracing, replay and export, and fingerprint preservation — pitched not as a scraping hack but as a way to see every request an app makes and replay stable workflows as dependable APIs.

why this matters: the move away from brittle browser-puppet agents toward full request-level interception gives operators a control surface that doesn’t break when a CSS class changes, and reframes reliability as seeing the wire rather than guessing the DOM.
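to make "seeing the wire" concrete, here is a small sketch of the replay idea: take one captured request (in a made-up export shape, not kampala's actual schema) and reduce it to a spec any HTTP client can replay, keeping the auth chain and fingerprint headers verbatim and dropping everything else.

```python
def to_replay_spec(captured: dict) -> dict:
    """Normalize one captured request (as an interception proxy might
    export it) into a minimal replayable spec, preserving auth and
    client-fingerprint headers exactly as captured."""
    keep = {"authorization", "cookie", "user-agent", "x-api-key"}
    headers = {k: v for k, v in captured["headers"].items()
               if k.lower() in keep}
    return {
        "method": captured["method"],
        "url": captured["url"],
        "headers": headers,
        "body": captured.get("body"),
    }
```

because the spec is built from the actual requests on the wire, it keeps working when the page's markup changes; the only thing that can break it is the API itself changing.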

3. SDL draws a line on machine-written pull requests

sources:

what happened: an SDL collaborator stated that if AI generated any code in a pull request, the PR would be closed without further discussion. the broader thread proposes mandatory AI-use disclosure on PRs, a commitment that maintainers won't use AI to generate SDL code, required human review, and a three-month policy revisit cycle. the issue was opened April 9 but hit HN circulation today.

why this matters: the bottleneck is no longer whether AI can write code. it's which projects will accept machine-written code and under what disclosure and review terms, and SDL is one of the first high-profile libraries to turn that question into a concrete, enforceable policy draft.

left on the table