hard constraints

signals self-hosting local-models ai-workflows

self.md radar — 2026-04-21

today’s useful ai signals all came with hard constraints: hardware got tiny, local/open models got judged like real production substitutes, and one large study showed how fast borrowed cognition turns into dependency.

this edition is about three different kinds of constraint: infrastructure, models, and human reliance.

1. a $10 microcontroller is serving a real public website

sources:

what happened: a self-hoster relaunched a public-facing site served directly from a $10 esp32 mounted on a wall. no nginx, no apache, no raspberry pi, no containers. the microcontroller holds an outbound websocket open to a cloudflare worker that fronts incoming traffic, so no inbound port ever has to be exposed. an earlier iteration reportedly ran about 500 days before the original board burned out.

why this matters: the web stack keeps collapsing downward into smaller, stranger surfaces. “serious” infrastructure is no longer automatically synonymous with a full linux box.
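the interesting part of the architecture is the direction of the connection: the device dials out, and the edge worker wraps each visitor request into a message on that socket. a minimal sketch of that reverse-tunnel framing, as pure functions with a hypothetical json message format (the sources don't describe the actual worker protocol):

```python
import json

# worker side: serialize an incoming http request into a frame for the device.
def frame_request(req_id: int, method: str, path: str) -> str:
    return json.dumps({"id": req_id, "method": method, "path": path})

# device side: the microcontroller reads a frame off its outbound websocket,
# looks up the path in its tiny in-memory route table, and frames a response.
def handle_frame(frame: str, routes: dict) -> str:
    msg = json.loads(frame)
    found = msg["path"] in routes
    return json.dumps({
        "id": msg["id"],
        "status": 200 if found else 404,
        "body": routes.get(msg["path"], "404 not found"),
    })

# the whole "site" the board has to hold in ram
routes = {"/": "hello from the wall"}

reply = json.loads(handle_frame(frame_request(1, "GET", "/"), routes))
```

the worker then relays `reply["body"]` back to the visitor, matching responses to requests by `id`. the design win is that the constrained device never listens; it only ever writes to a connection it opened itself.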

2. local and open models are now being benchmarked like production gear

sources:

what happened: kimi k2.6 dropped and was immediately framed by operators as a credible opus 4.7 replacement, with one claim that it handles roughly 85% of opus-style tasks at reasonable quality, including vision and browser use. in parallel, someone ran 21 local coding models through a pass@1 benchmark of 164 coding problems on a macbook air m5, tracking speed and vram alongside accuracy. the point is not which name wins. the comparison standard itself has shifted.

why this matters: frontier substitution pressure is now practical, not ideological. the question has moved from “is local usable?” to “which local or open thing is good enough for this workflow on this machine?”
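for context on the metric: the 164-problem count and the pass@1 framing match the standard unbiased estimator popularized by the humaneval paper, pass@k = 1 − C(n−c, k)/C(n, k), where n is samples generated per problem and c is how many passed. a minimal sketch (the per-problem results here are hypothetical, not from the benchmark post):

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """unbiased pass@k: 1 - C(n-c, k) / C(n, k)."""
    if n - c < k:
        return 1.0  # fewer failures than k draws: some draw must pass
    return 1.0 - comb(n - c, k) / comb(n, k)

# with one sample per problem, pass@1 reduces to the fraction solved
results = [1, 0, 1, 1]  # hypothetical pass/fail per problem
score = sum(pass_at_k(1, c, 1) for c in results) / len(results)
```

with k=1 and a single sample the estimator is just c/n, which is why pass@1 runs are cheap enough to do on a laptop across 21 models.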

3. 1,222 people lost their ai mid-task and fell below the control group

sources:

what happened: researchers from ucla, mit, oxford, and carnegie mellon gave 1,222 participants ai help on cognitive tasks and then cut it off partway through. after roughly 10 minutes of assisted problem-solving, the group that lost access performed worse than the control group that never had ai at all. the post claims people did not just get more answers wrong; they stopped trying. the effect reportedly showed up across math and reading and across three experiments.

why this matters: convenience compounds fast, but so does skill atrophy. good ai workflow design has to preserve human continuity for the moment the tool disappears, not just optimize immediate output.

left on the table