infrastructure, sovereignty, and a $2B validation
░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
░ ░
░ ┌───────────────────────────────────────┐ ░
░ │ │ ░
░ │ qmd ─────┐ │ ░
░ │ Dawarich ├──→ [ SOVEREIGNTY ] │ ░
░ │ AltStack ┘ │ ░
░ │ │ ░
░ │ M5 ──────┐ │ ░
░ │ LMCache ├──→ [ LOCAL AI ] │ ░
░ │ ┘ │ ░
░ │ │ ░
░ │ Cursor ──────→ [ $2B PROOF ] │ ░
░ │ │ ░
░ │ the stack is assembling. │ ░
░ │ │ ░
░ └───────────────────────────────────────┘ ░
░ ░
░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░░
the week
someone at Shopify shipped a grep for your second brain.
Dawarich hit 1.0 — Google Timeline, but yours.
AltStack indexed 450+ self-hosted tools with Docker Compose configs.
Apple dropped M5 chips with 4× faster LLM processing.
LMCache shipped KV cache optimization for local inference.
Cursor hit a $2 billion annualized revenue run rate, doubling in 3 months.
personal AI infrastructure graduated from concept to product category.
■ signal 1 — qmd: mini search engine for your second brain
what: CLI search for docs, knowledge bases, meeting notes. tracks current state-of-the-art retrieval approaches while staying fully local. no cloud, no indexing service. just: point it at markdown, search, get results.
built by tobi of Shopify. 523 stars overnight.
why it matters: if your life is a repo, you need grep for ideas. qmd is the missing layer between “I wrote this down somewhere” and “I can find it now.” most PKM tools bloat into feature factories. qmd stays unix: one binary, one job.
the pattern: search as infrastructure, not as a product.
strength: ■■■■□
source: GitHub trending/typescript (523 stars)
URL: https://github.com/tobi/qmd
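the core idea is small enough to sketch. below, a naive keyword ranker over a folder of markdown: purely illustrative, not qmd's actual implementation (qmd's internals aren't described here).

```python
# naive sketch of "grep for your second brain" -- NOT qmd's code.
# scores files by raw substring counts of the query terms.
from pathlib import Path


def search_notes(root: str, query: str, top_k: int = 5) -> list[str]:
    """Return up to top_k markdown paths ranked by query-term frequency."""
    terms = query.lower().split()
    hits = []
    for path in Path(root).rglob("*.md"):
        text = path.read_text(encoding="utf-8", errors="ignore").lower()
        score = sum(text.count(term) for term in terms)
        if score > 0:
            hits.append((score, str(path)))
    hits.sort(reverse=True)
    return [p for _, p in hits[:top_k]]
```

real tools layer on tokenization, BM25-style ranking, and an on-disk index, but the contract is the same: point it at markdown, get ranked paths back.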
■ signal 2 — Dawarich 1.0: self-hosted Google Timeline
what: open-source alternative to Google’s location history. self-hosted. visualizes your movements across months/years. import from Google Takeout, Overland, GPX. map view, stats, privacy.
after two years in development, it hit 1.0 this week. 350+ upvotes, 104 comments on r/selfhosted.
why it matters: Google owns your location history. Dawarich gives it back. if sovereignty means controlling your data, location is the most intimate dataset you generate. Dawarich is the plumbing for personal spatial memory that doesn’t phone home.
strength: ■■■■□
source: Reddit r/selfhosted (350 upvotes)
URL: https://reddit.com/r/selfhosted/comments/1rjp850/
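GPX, one of Dawarich's import formats, is plain XML, so the import step reduces to pulling (lat, lon, time) out of trackpoints. a minimal Python sketch of that parse, not Dawarich's code:

```python
# minimal GPX 1.1 trackpoint parse -- illustrative, not Dawarich's importer.
import xml.etree.ElementTree as ET

GPX_NS = "{http://www.topografix.com/GPX/1/1}"  # GPX 1.1 namespace


def parse_trackpoints(gpx_xml: str) -> list[tuple]:
    """Extract (lat, lon, iso_time) tuples from a GPX 1.1 document."""
    root = ET.fromstring(gpx_xml)
    points = []
    for trkpt in root.iter(f"{GPX_NS}trkpt"):
        time_el = trkpt.find(f"{GPX_NS}time")
        points.append((
            float(trkpt.attrib["lat"]),
            float(trkpt.attrib["lon"]),
            time_el.text if time_el is not None else None,
        ))
    return points
```

everything else (map tiles, stats, Takeout conversion) is built on top of streams of points like these.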
■ signal 3 — AltStack: 450+ self-hosted alternatives, Docker Compose ready
what: directory of self-hostable tools with ready-to-use configs. 450+ tools across 28 categories. 56 have copy-paste Docker Compose setups. side-by-side comparisons. savings calculator. best-of rankings.
619 upvotes, 101 comments. open-source.
why it matters: the self-hosted movement has a discovery problem: too many tools, too little signal. AltStack is the map. for every SaaS subscription, a Docker Compose alternative. if your personal AI OS runs on your infrastructure, AltStack is the package index.
strength: ■■■■■
source: Reddit r/selfhosted (619 upvotes)
URL: https://reddit.com/r/selfhosted/comments/1ririm0/
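the savings-calculator idea is just arithmetic: compare your SaaS subscriptions against a flat hosting bill. a toy version, with invented numbers, not AltStack's actual calculator:

```python
# toy self-hosting savings calculator. all figures are made up.
def yearly_savings(saas_monthly: dict[str, float], server_monthly: float) -> float:
    """Yearly SaaS spend minus the yearly cost of one self-hosted server."""
    return 12 * (sum(saas_monthly.values()) - server_monthly)


# replacing three hypothetical subscriptions with a $15/mo box:
# yearly_savings({"photos": 10.0, "notes": 8.0, "storage": 12.0}, 15.0) -> 180.0
```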
■ signal 4 — M5 Pro/Max: Apple silicon goes 4× faster on LLMs
what: Apple unveiled M5 Pro and M5 Max chips. headline claim: up to 4× faster LLM prompt processing vs M4 Pro/Max.
559 upvotes, 170 comments on r/LocalLLaMA. people testing Qwen 3.5, LLaMA variants. reports of 40+ tok/s on 70B models.
why it matters: cloud AI = rented intelligence. local AI = owned intelligence. M5 is the hardware that makes sovereignty practical. if your agent stack runs on your laptop, 4× faster inference means 4× more viable workflows. this is the ARM race for personal AI.
strength: ■■■■■
source: Reddit r/LocalLLaMA (559 upvotes)
URL: https://reddit.com/r/LocalLLaMA/comments/1rjqsv6/
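the claim is about prompt processing, so the practical win is time-to-first-token on long contexts. back-of-the-envelope: the 4× figure is Apple's claim, but the baseline throughput below is a hypothetical placeholder, not a benchmark.

```python
# back-of-the-envelope time-to-first-token. the 4x speedup is Apple's
# claim; the prefill throughputs are hypothetical placeholders.
def time_to_first_token(prompt_tokens: int, prefill_tok_per_s: float) -> float:
    """Seconds of prefill before the model emits its first output token."""
    return prompt_tokens / prefill_tok_per_s


M4_PREFILL = 250.0             # hypothetical baseline, tokens/sec
M5_PREFILL = 4 * M4_PREFILL    # applying the claimed 4x

# on a 32k-token context, 128s of prefill would drop to 32s
```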
■ signal 5 — LMCache: the fastest KV cache layer for local LLMs
what: open-source KV caching system that speeds up local LLM inference. stores and reuses the attention key/value tensors for text the model has already processed, so repeated context skips prefill. benchmarked across multiple backends.
135 stars on GitHub trending/all today.
why it matters: running LLMs locally is slow until it isn’t. LMCache is the optimization layer that makes local inference competitive with cloud. if M5 is the CPU upgrade, LMCache is the software multiplier. the personal AI OS needs both.
the infrastructure thesis: sovereign AI is a stack, not a single tool.
strength: ■■■□□
source: GitHub trending/all (135 stars)
URL: https://github.com/LMCache/LMCache
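why KV caching helps at all: the attention keys/values for a given token prefix are deterministic, so any shared prefix can be reused instead of recomputed. a toy prefix cache showing just that core idea (LMCache itself works across requests, engines, and storage tiers; this is not its API):

```python
# toy prefix KV cache -- illustrates the idea, NOT LMCache's API.
# keys/values for a prefix are deterministic, so a cached prefix
# lets prefill compute KV only for the unseen tail.
def make_cached_prefill(compute_kv):
    """Wrap a per-token KV function with a longest-prefix cache."""
    cache: dict[tuple, list] = {}

    def prefill(tokens: list[str]) -> list:
        kvs, known = [], ()
        for i in range(len(tokens), 0, -1):   # find the longest cached prefix
            if tuple(tokens[:i]) in cache:
                known = tuple(tokens[:i])
                kvs = list(cache[known])
                break
        for tok in tokens[len(known):]:       # compute KV only for the tail
            kvs.append(compute_kv(tok))
            known = known + (tok,)
            cache[known] = list(kvs)
        return kvs

    return prefill
```

two prompts sharing a long system prompt pay for it once; the second request only computes its own suffix.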
■ signal 6 — Cursor hits $2B annual revenue run rate
what: Cursor, the AI coding assistant, reached $2 billion annualized revenue as of February 2026. doubled revenue in 3 months. 60% from corporate customers. valued at $29.3B in November.
124 upvotes, 55 comments on r/cursor.
why it matters: this is the market saying “agentic coding is not a demo, it’s infrastructure.” $2B run rate means tens of thousands of companies are betting their dev workflows on AI agents. when enterprise adopts a pattern this fast, the pattern is real.
Cursor is proof: the future of coding is collaborative, not solo. your coworker is Claude.
strength: ■■■■■
source: Reddit r/cursor (124 upvotes)
URL: https://reddit.com/r/cursor/comments/1rjqupl/
stats:
595 raw signals → 554 after dedup
6 signals selected
sources: GitHub (2), Reddit (4)
filter: personal AI OS, sovereignty, local infrastructure, market validation