the panic adjustments: meta ships a model that can't code, NYT names the code flood, norton builds an antivirus for your AI

░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
░                                                 ░
░   ┌─────────────────────────────────────────┐   ░
░   │                                         │   ░
░   │   ████  PANIC  ████                     │   ░
░   │        │                                │   ░
░   │   ┌────┼────┬────────┬──────────┐       │   ░
░   │   │    │    │        │          │       │   ░
░   │   ▼    ▼    ▼        ▼          ▼       │   ░
░   │  meta  NYT  norton   bots      edge     │   ░
░   │  can't code  agent   8x       phone     │   ░
░   │  codes flood watch   faster   runtime   │   ░
░   │   │    │    │        │          │       │   ░
░   │   └────┴────┴────────┴──────────┘       │   ░
░   │            │                            │   ░
░   │            ▼                            │   ░
░   │   ░░░ the world noticed ░░░             │   ░
░   │   ░░░ and is adjusting  ░░░             │   ░
░   │                                         │   ░
░   └─────────────────────────────────────────┘   ░
░                                                 ░
░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░

today

meta shipped its first model from the superintelligence lab and it still can’t code. the new york times told the world what every developer already knew: agents write code faster than anyone can review it. norton launched an antivirus — for your AI agent. automated traffic is growing 8x faster than human traffic. google shipped production-ready LLM inference for your phone. a single github repo promising “an entire AI agency” hit 10K stars in a week. the pattern: the world is adjusting to the fact that agents are real. the adjustments are mostly panic.


■ signal 1 — meta muse spark: $14B superintelligence lab ships a model that can’t out-code claude

strength: ■■■■■ → multiple sources

meta unveiled muse spark, the first model from meta superintelligence labs (led by alexandr wang, hired 9 months ago). “small and fast by design” — strong at reasoning, math, science, multimodal perception. powers the meta AI assistant in apps and glasses. sub-agents run in parallel. CNBC notes it “lags rivals on coding ability.” NYT, Reuters, Fortune all covered.

meta built an expensive superintelligence team and shipped a consumer assistant that reads nutrition labels.

→ self.md take: meta’s strategy diverged from the coding-agent race. anthropic and openai chase the developer. meta chases the consumer. muse spark is embedded in every surface you already use — apps, glasses, shopping. the self.md thesis (own your AI on your disk) is the polar opposite of meta’s thesis (we embed AI in your life). both architectures are valid. only one gives you the data. meta’s $14B bet says “personal AI” means “we personalize AI for you.” self.md says “personal AI” means “you own the AI.” the fact that these are now competing trillion-dollar interpretations of the same two words tells you how high the stakes got while nobody was looking.


■ signal 2 — NYT front page: “the big bang — AI has created a code overload”

strength: ■■■■■ → source

the new york times ran a major feature: one financial services company went from 25,000 to 250,000 lines of code per month after adopting cursor — creating a backlog of 1 million lines needing review. meta CTO bosworth in an internal memo: “projects that once required hundreds of engineers can now be done by tens.” companies are cutting thousands of jobs (Pinterest, Block, Atlassian). “there are not enough application security engineers on the planet to satisfy what just American companies need.”

the number: 10x code output. zero increase in review capacity.

→ self.md take: this is armin ronacher’s “final bottleneck” thesis (radar apr 4) going mainstream. the NYT told normies what every developer knows: the flood is here and nobody’s building the dam. for personal AI: if you’re running agents that generate code autonomously, you’re part of this story. the emerging class divide crystallizes — senior engineers who can review agent output are scarce and expensive. junior engineers who can prompt agents are abundant and cheap. the bottleneck is verification, not generation. always was. the question for anyone building their own stack: who reviews your agent’s code? if the answer is “nobody,” you’re trusting a system that the NYT just told 10 million readers is producing unreviewed output at industrial scale.
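if you do run agents that generate code, the cheapest dam is a policy gate: agent-authored changes don’t merge without a human sign-off. a minimal sketch — the author names, line budget, and verdict strings are all assumptions, not anyone’s shipping tooling:

```python
# toy review gate for agent-generated commits.
# policy (hypothetical): agent commits need human sign-off to merge,
# and oversized agent commits are blocked outright.

AGENT_AUTHORS = {"cursor-agent", "claude-code"}  # assumed agent identities
LINE_BUDGET = 200                                # assumed per-commit budget

def review_gate(author: str, lines_changed: int, signed_off: bool) -> str:
    """Return 'allow', 'block', or 'needs-review' for one commit."""
    if author not in AGENT_AUTHORS:
        return "allow"             # human commits pass through unchanged
    if signed_off:
        return "allow"             # a human reviewed this agent commit
    if lines_changed > LINE_BUDGET:
        return "block"             # too big to ever merge unreviewed
    return "needs-review"          # small enough to queue for review

print(review_gate("alice", 5000, False))        # allow
print(review_gate("cursor-agent", 120, False))  # needs-review
print(review_gate("cursor-agent", 5000, False)) # block
```

twenty lines of policy won’t review the code for you, but it makes “who reviews your agent’s code?” answerable: nobody merges on “nobody.”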


■ signal 3 — norton ships an antivirus for your AI agent

strength: ■■■■■ → source

norton (Gen Digital) launched norton AI agent protection in norton 360 — beta, apr 9. first consumer security product designed to monitor autonomous AI agents. safe actions proceed uninterrupted. confirmed threats blocked. suspicious actions paused for user review. gen’s threat labs found “approximately hundreds of malicious skills in public agent registries.”

the product: an antivirus that watches your AI instead of watching for viruses.

→ self.md take: norton shipping “agent protection” means the insurance-company tier of tech now treats AI agents as a consumer threat vector. they found malicious skills in public registries — the same registries your agent might be pulling from. this is the seatbelt moment: agents crossed from “developer toy” to “consumer risk.” the personal AI question: do you trust norton to watch your agent, or do you build your own oversight? the answer reveals your architecture. if you run local, you can inspect every skill yourself. if you run managed agents in someone else’s cloud, you need a watchdog. norton is betting most people choose the cloud. self.md is betting some people choose the disk.
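“inspect every skill yourself” can start as a dumb grep before anything loads. a rough sketch — the pattern list and the `skills/` layout are assumptions, and regex flags are a tripwire for human review, not a malware verdict:

```python
import re
from pathlib import Path

# toy "skill audit": flag downloaded skill files that touch capabilities
# worth a human look (shell, network, env) before the agent loads them.
# patterns are illustrative assumptions, not a security standard.

SUSPECT = {
    "shell exec":  re.compile(r"subprocess|os\.system"),
    "raw network": re.compile(r"socket\.|urllib|requests\."),
    "env access":  re.compile(r"os\.environ"),
}

def audit_skill(source: str) -> list[str]:
    """Return the capability flags found in one skill's source."""
    return [name for name, pat in SUSPECT.items() if pat.search(source)]

def audit_dir(root: Path) -> dict[str, list[str]]:
    """Audit every .py file under root; only flagged files are reported."""
    return {
        str(f): flags
        for f in root.rglob("*.py")
        if (flags := audit_skill(f.read_text()))
    }

print(audit_skill("import requests\nrequests.get(url)"))  # ['raw network']
```

the point isn’t that regex catches attackers. it’s that running local means the audit step is yours to run at all.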


■ signal 4 — automated traffic growing 8x faster than human traffic on the internet

strength: ■■■■ → source

HUMAN Security’s 2026 benchmark report: automated internet traffic grew 23.5% year-over-year in 2025, eight times faster than human traffic at 3.1%. imperva/thales corroborates: 37% of all internet traffic is now malicious bots. the report argues the industry needs to move beyond “bot mitigation” to a “trust layer” that distinguishes beneficial AI agents from malicious ones.

the number: 8x. your agents are part of the 23.5% growth.

→ self.md take: “the internet is mostly bots” stopped being hyperbole. every web_fetch, every API call, every scrape your agent runs — you’re contributing to the 23.5% growth. the implications are structural: websites will demand proof of humanity (or proof of “good bot”), rate limits will tighten, CAPTCHAs will multiply. the internet your agent navigates is about to get much harder to navigate. the “trust layer” HUMAN Security describes is where the next battle happens — not “block all bots” but “verify which bots are trustworthy.” your agent will need an identity. not an API key. an actual verifiable identity. the personal AI OS will eventually need a passport.
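the shape of “an actual verifiable identity” is signed requests: the agent proves who it is on every call. a toy sketch with a shared secret — real trust layers would use asymmetric keys (in the spirit of HTTP message signatures), and every header name here is made up:

```python
import hmac
import hashlib
import time

# toy agent identity: sign each outbound request so a server can verify
# which agent sent it. shared-secret HMAC keeps the sketch stdlib-only;
# a real trust layer would use per-agent asymmetric keys.

SECRET = b"agent-registration-secret"  # assumed: provisioned out of band

def sign_request(agent_id: str, method: str, url: str, secret: bytes) -> dict:
    """Produce identity headers for one outbound request."""
    ts = str(int(time.time()))
    payload = "\n".join([agent_id, method, url, ts]).encode()
    sig = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return {"X-Agent-Id": agent_id, "X-Agent-Ts": ts, "X-Agent-Sig": sig}

def verify_request(headers: dict, method: str, url: str, secret: bytes) -> bool:
    """Server side: recompute the signature and compare in constant time."""
    payload = "\n".join([headers["X-Agent-Id"], method, url,
                         headers["X-Agent-Ts"]]).encode()
    expect = hmac.new(secret, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expect, headers["X-Agent-Sig"])

h = sign_request("self-md-agent", "GET", "https://example.com/feed", SECRET)
print(verify_request(h, "GET", "https://example.com/feed", SECRET))  # True
```

the passport metaphor holds: the signature travels with the request, and the site decides whether your bot is one of the good ones.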


■ signal 5 — google LiteRT-LM: production LLM inference hits your phone

strength: ■■■■ → source

google shipped LiteRT-LM: open-source, production-ready inference framework for deploying LLMs on edge devices. built on LiteRT, the runtime trusted by millions of android developers. gemma 4 E2B already quantized for mobile. supports android, iOS, edge hardware. not a research prototype — production-grade in architecture and naming.

the shift: from “run a model on your laptop” to “run a model on your phone, for real.”

→ self.md take: gemma 4 on a phone was a party trick a week ago. LiteRT-LM makes it infrastructure. when google ships a production inference runtime with newest models pre-quantized for edge, “local AI” moves from enthusiast to mainstream. this is the last mile for personal AI: your agent runs on the device in your pocket, offline, zero cloud dependency. the phone becomes the personal AI server. combined with gemma 4’s frontier-class reasoning (radar apr 3), the stack is complete: model (gemma 4) + runtime (LiteRT-LM) + device (your phone) = personal AI you carry everywhere. google just handed the open-source community the production plumbing that was missing.
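why quantization is the whole story here, in back-of-envelope form — these are rough weight-memory numbers from parameter count times bits per weight, not LiteRT-LM measurements:

```python
# back-of-envelope: weight footprint of an on-device model.
# weight memory ~= params * bits_per_weight / 8. rough numbers only;
# real runtimes also need KV cache and activation memory on top.

def weight_memory_gb(params_billion: float, bits_per_weight: int) -> float:
    """Approximate weight footprint in GB (1 GB = 1e9 bytes)."""
    return params_billion * 1e9 * bits_per_weight / 8 / 1e9

print(round(weight_memory_gb(2, 16), 2))  # 4.0 -> fp16: tight on a phone
print(round(weight_memory_gb(2, 4), 2))   # 1.0 -> int4: comfortable
```

a ~2B-parameter model at 4-bit lands around a gigabyte of weights. that’s why “pre-quantized for edge” is the load-bearing phrase in google’s announcement.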


■ signal 6 — “the agency” repo: 10K stars in 7 days, an entire AI agency in one deploy

strength: ■■■■ → source

a github repo called “the agency” crossed 10,000 stars in seven days. premise: deploy a full AI team — engineers, designers, growth marketers, product managers — with SEO workflows and claude integration. documentation written for beginners. went viral on social media. the promise: replicate the operational core of a digital agency in a single repo.

the nerve: one repo claims to replace an entire agency. 10,000 people starred it in a week.

→ self.md take: “the agency” isn’t interesting because it works (unclear). it’s interesting because 10,000 people want it to work. the appetite for “turnkey AI agency infrastructure” shows a market past the “will AI replace agencies?” question and into the “which repo do I use?” phase. this is the wordpress moment for AI services: tools commoditize so fast the value shifts from “can you build it?” to “can you operate it?” for personal AI: if an AI agency fits in a repo, so does an AI personal assistant, an AI research department, an AI content studio. the abstraction layer keeps collapsing. the question isn’t “can one person run an agency?” anymore. it’s “which one-person agencies will have taste?”


one-liner takes


meta

today’s radar has a single spine: the world is adjusting to agents being real, and the adjustments look like panic. meta’s adjustment: ship a consumer model, skip coding. NYT’s adjustment: name the crisis. norton’s adjustment: sell protection. HUMAN Security’s adjustment: measure the flood. google’s adjustment: make edge inference production-ready. the internet’s adjustment: 10K stars on a repo that promises to replace an agency.

the personal AI adjustment is quieter. it’s the people who read these signals and go build their own review process, their own oversight, their own edge runtime. not because they’re paranoid. because the alternative is trusting someone else’s panic response.


sources