George Hotz on Tinygrad and AI as Compiler
George Hotz has been hacking since he was a teenager—jailbreaking the iPhone, reverse engineering the PS3—but his current obsession might be his most ambitious: building an entire AI stack in under 20,000 lines of code.
The Tiny Philosophy
Tinygrad sits between PyTorch (ergonomics) and micrograd (simplicity). The project’s credo is brutal: “Every line must earn its keep. Prefer readability over cleverness. We believe that if carefully designed, 10 lines can have the impact of 1000.”
At 31,000+ GitHub stars and only 18,935 lines (excluding tests), tinygrad isn’t a toy. It outperforms PyTorch on many workloads. The goal is complete hardware sovereignty—training state-of-the-art models without depending on NVIDIA’s software stack.
Hotz frames this as following “the Elon process for software”: make the requirements less dumb, then delete everything you can.
AI Coding as Compilation
Where most tech founders hype AI coding tools, Hotz holds a contrarian view he articulated in a September 2025 blog post:
“The best model of a programming AI is a compiler. You give it a prompt, which is ‘the code’, and it outputs a compiled version of that code.”
His criticisms are specific:
- English prompts aren’t precise for novel work
- AI workflows are highly non-deterministic
- Prompt changes have non-local effects on output
The kicker: “That anyone uses LLMs to code is a testament to just how bad tooling and languages are.”
But Hotz isn’t an AI skeptic. He praised Claude Opus 4.5 as “the first model I feel that can use computers at all” and his tinygrad repo includes both CLAUDE.md and AGENTS.md files—detailed guides for AI coding assistants working on the codebase.
Structured AI Collaboration
The tinygrad CLAUDE.md runs nearly 10,000 characters. It’s a masterclass in setting up AI-assisted development:
- Architecture overview: explains the UOp pipeline, key concepts, and how everything connects.
- Workflow rules: “NEVER commit without explicit user approval. NEVER amend commits.”
- Lessons learned: documents gotchas like “UOps are cached by their contents—creating a UOp with identical (op, dtype, src, arg) returns the same object.”
- Testing commands: specific pytest invocations with environment variables like DEBUG=2, VIZ=1, SPEC=2.
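That caching gotcha is an instance of a classic technique, hash-consing: construction is routed through a cache keyed on the node's contents, so structurally identical nodes are the same object and can be compared with `is`. Here is a minimal sketch of the pattern (hypothetical names, not tinygrad's actual implementation):

```python
from dataclasses import dataclass
from functools import lru_cache

# Sketch only: an immutable node identified entirely by its contents.
@dataclass(frozen=True)
class UOpSketch:
    op: str
    dtype: str
    src: tuple = ()
    arg: object = None

# Route construction through a content-keyed cache (hash-consing):
# identical (op, dtype, src, arg) returns the cached, identical object.
@lru_cache(maxsize=None)
def uop(op, dtype, src=(), arg=None):
    return UOpSketch(op, dtype, src, arg)

a = uop("ADD", "int32")
b = uop("ADD", "int32")
assert a is b  # same contents -> same object, not merely equal
```

The practical consequence the guide warns about: you cannot mutate one "copy" of a node independently, because there is only one object per distinct content.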
The AGENTS.md opens with: “Hello agent. You are one of the most talented programmers of your generation.” Then immediately grounds expectations: “tinygrad is a tensor library focused on beauty and minimalism.”
This is the nuance missing from the hype cycle—using AI tools seriously while remaining clear-eyed about what they are.
The Deconstructed Company
Tiny Corp (the company behind tinygrad) mirrors the codebase philosophy. Hotz describes it as “a deconstructed company”:
- Six people
- Almost nothing private—just Discord and GitHub
- $2M revenue from selling hardware (tinybox)
- AMD contract negotiated publicly on Twitter
- Hiring happens through repo contributions
The mission: “commoditize the petaflop.”
Practical Takeaways
On AI-assisted development:
- Treat AI as a compiler with an English frontend
- Create detailed context files (CLAUDE.md, AGENTS.md) for AI tools
- Be explicit about workflow rules and constraints
- Document architectural decisions and gotchas
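Putting those takeaways together, a context file can be as simple as a few headed sections. A hypothetical skeleton (not tinygrad's actual CLAUDE.md, whose contents are described above):

```markdown
# CLAUDE.md (hypothetical skeleton)

## Architecture
One-paragraph map of the pipeline and how the pieces connect.

## Workflow rules
- NEVER commit without explicit user approval.
- NEVER amend commits.

## Lessons learned
- Gotchas and invariants the code silently assumes.

## Testing
- Exact commands to run, including environment variables.
```

The structure mirrors what the article describes: architecture first so the assistant has a map, then hard rules, then accumulated gotchas and the exact commands to verify changes.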
On minimalism:
- 98% of software lines are workarounds for other abstractions
- Focus on the actual goal, not on interfacing with other code
- “Only a fool begins by taping out a chip”—software sovereignty comes first
On building:
- Transparent operations reduce coordination overhead
- Hire through demonstrated contribution
- Build better tools instead of hyping worse ones