hard constraints

self.md radar — 2026-04-21
today’s useful ai signals all came with hard constraints: hardware got tiny, local/open models got judged like real production substitutes, and one large study showed how fast borrowed cognition turns into dependency.
this edition is about three different kinds of constraint: infrastructure, models, and human reliance.
1. a $10 microcontroller is serving a real public website
sources:
what happened: a self-hoster relaunched a public-facing site served directly from a $10 esp32 mounted on a wall. no nginx, no apache, no raspberry pi, no containers. the microcontroller holds an outbound websocket open to a cloudflare worker that fronts incoming traffic. an earlier iteration reportedly ran about 500 days before the original board burned out.
why this matters: the web stack keeps collapsing downward into smaller, stranger surfaces. “serious” infrastructure is no longer automatically synonymous with a full linux box.
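the pattern worth pinning down here is the reverse tunnel: the board never listens for inbound traffic, it dials out once, and the edge relays visitor requests over that one long-lived connection. a minimal sketch in python, with asyncio streams standing in for the websocket and the cloudflare worker, and every name in it (DEVICE_PORT, handle_client, device_loop, the "served:" reply) invented for illustration, not taken from the actual project:

```python
import asyncio

DEVICE_PORT = 18765   # where the device dials out to
CLIENT_PORT = 18766   # where visitors connect

class Edge:
    """plays the cloudflare-worker role: fronts visitors, relays over
    the single long-lived connection the device opened to us."""
    def __init__(self):
        self.device = None
        self.ready = asyncio.Event()   # set once the device has dialed in
        self.done = asyncio.Event()    # set to release the tunnel at shutdown

    async def handle_device(self, reader, writer):
        self.device = (reader, writer)
        self.ready.set()
        await self.done.wait()         # hold the outbound tunnel open

    async def handle_client(self, reader, writer):
        await self.ready.wait()
        request = await reader.readline()
        d_reader, d_writer = self.device
        d_writer.write(request)        # relay the request down the tunnel
        await d_writer.drain()
        writer.write(await d_reader.readline())  # relay the answer back
        await writer.drain()
        writer.close()

async def device_loop():
    # plays the esp32 role: never listens, only dials out and answers
    reader, writer = await asyncio.open_connection("127.0.0.1", DEVICE_PORT)
    while True:
        line = await reader.readline()
        if not line:
            break
        path = line.decode().split()[-1]
        writer.write(f"served:{path}\n".encode())
        await writer.drain()

async def main():
    edge = Edge()
    dev_srv = await asyncio.start_server(edge.handle_device, "127.0.0.1", DEVICE_PORT)
    cli_srv = await asyncio.start_server(edge.handle_client, "127.0.0.1", CLIENT_PORT)
    device = asyncio.create_task(device_loop())
    await edge.ready.wait()
    # a visitor asks the edge for a page; the device answers via the tunnel
    r, w = await asyncio.open_connection("127.0.0.1", CLIENT_PORT)
    w.write(b"GET /index.html\n")
    await w.drain()
    reply = (await r.readline()).decode().strip()
    w.close()
    edge.done.set()
    device.cancel()
    dev_srv.close()
    cli_srv.close()
    return reply

reply = asyncio.run(main())
print(reply)  # served:/index.html
```

the design point the sketch surfaces: nat, firewalls, and dynamic ip all stop mattering once the constrained box only ever makes outbound connections, which is what makes a wall-mounted microcontroller viable as "serious" infrastructure.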
2. local and open models are now being benchmarked like production gear
sources:
what happened: kimi k2.6 dropped and was immediately framed by operators as a credible opus 4.7 replacement, with one claim that it handles roughly 85% of opus-style tasks at reasonable quality, including vision and browser use. in parallel, someone ran 21 local coding models through pass@1 on 164 coding problems on a macbook air m5, tracking speed and vram alongside accuracy. the point is not which name wins. the comparison standard itself has shifted.
why this matters: frontier substitution pressure is now practical, not ideological. the question has moved from “is local usable?” to “which local or open thing is good enough for this workflow on this machine?”
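for context on the metric: pass@1 means one generation per problem, scored by running the problem's tests. the standard way harnesses like this report it is the unbiased pass@k estimator from the humaneval paper, which reduces to c/n for k=1. a minimal sketch; the results list is made-up data for illustration, not the benchmark's actual numbers:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """unbiased estimate of P(at least one of k samples passes),
    given c passing samples out of n drawn for one problem."""
    if n - c < k:
        return 1.0  # every size-k draw must contain a passing sample
    return 1.0 - comb(n - c, k) / comb(n, k)

# one generation per problem (n=1): 1 = passed its tests, 0 = failed
results = [1, 0, 1, 1, 0, 1]
score = sum(pass_at_k(1, c, 1) for c in results) / len(results)
print(round(score, 3))  # 0.667
```

with n=1 this is just "fraction of problems solved on the first try," which is why pass@1 travels well across machines: accuracy stays comparable even when speed and vram do not.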
3. 1,222 people lost their ai mid-task and fell below the control group
sources:
what happened: researchers from ucla, mit, oxford, and carnegie mellon gave 1,222 participants ai help on cognitive tasks and then cut it off partway through. after roughly 10 minutes of assisted problem-solving, the group that lost access performed worse than the control group that never had ai at all. the post claims people did not just get more answers wrong; they stopped trying. the effect reportedly showed up across math and reading and across three experiments.
why this matters: convenience compounds fast, but so does skill atrophy. good ai workflow design has to preserve human continuity for the moment the tool disappears, not just optimize immediate output.
supporting links
- how intercom 2x’d their engineering velocity in 9 months with claude code — adoption evidence: telemetry and workflow discipline are becoming the real product.
- paperless-ngx — durable local document infrastructure still keeps winning because boring systems that actually hold your life together matter.
- manifest — model routing for personal agents shows cost pressure turning into product design.
- qwen 3.6 max preview thread — another sign that open and local comparison pressure is rising fast.
left on the table
- opencli — the exact url already ran yesterday; rerunning it would be lazy.
- openai-agents-python — strong repo, wrong week; recent editions already leaned too hard on agent-framework plumbing.
- obsidian helped me so much. sharing vault setup — good human story, but pkm and obsidian already had enough oxygen in recent published editions.
- sponsors/thunderbird — interesting, but too thin to carry a slot.