Steve Yegge on Chat-Oriented Programming

Steve Yegge has spent three decades watching technology hype cycles come and go. He worked at Amazon when web services were just a demo on a laptop. He was at Google when they invented transformers but did nothing with them. He’s seen enough to know when something is real.

LLMs, he believes, are the biggest shift since the World Wide Web itself.

From Skeptic to True Believer

Yegge joined Sourcegraph as Head of Engineering in late 2022, right when ChatGPT launched. Even at Sourcegraph — a company building AI coding tools — two-thirds of engineers polled were skeptical or “meh” about LLMs for coding.

Yegge wasn’t meh. He was watching a trillion-dollar volcano erupt.

“If you’re not pant-peeingly excited and worried about this yet, well… you should be.”

His internal polling revealed what he suspected: programmers who’d been burned by previous AI hype weren’t paying attention to what the new models could actually do. He decided to show rather than tell.

Chat Oriented Programming (CHOP)

In his 2023 essay “Cheating is All You Need,” Yegge demonstrated ChatGPT writing working Emacs Lisp code from a sloppy English prompt. The code ran correctly on the first try.

But the real shift came in mid-2024 with GPT-4o. Suddenly the models could handle editing 1000-line source files without hallucinating, refactoring unexpectedly, or leaving placeholder comments.

Yegge coined a term for this new workflow: Chat Oriented Programming (CHOP).

The pattern is simple:

  1. Describe what you want to the LLM
  2. Get a draft that’s roughly 80% complete
  3. Review and tweak the remaining 20% by hand
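The loop above can be sketched in a few lines of Python. This is purely illustrative: `llm_draft` is a hypothetical stand-in for whatever model API you call, and `review` represents the human hand-finishing step; neither is a real library function.

```python
from typing import Callable

def chop(task: str,
         llm_draft: Callable[[str], str],
         review: Callable[[str], str]) -> str:
    """Chat Oriented Programming in miniature:
    1. describe the task to the model,
    2. take the ~80%-complete draft it returns,
    3. review and tweak the remaining ~20% by hand."""
    draft = llm_draft(task)   # steps 1-2: prompt the model, get a draft
    final = review(draft)     # step 3: human review and cleanup
    return final

# Stubbed example (no real model; both callables are fakes):
fake_llm = lambda task: f"# TODO: polish\ndef solution():\n    pass  # {task}"
hand_fix = lambda code: (code
                         .replace("pass", "return 42")
                         .replace("# TODO: polish\n", ""))

result = chop("return the answer", fake_llm, hand_fix)
print(result)
```

The point of the shape, not the stubs: the model produces the bulk of the text, and the human's job shifts from authoring to reviewing.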

If only 20% of the work is left for the human, that’s a 5x productivity gain (1/0.2) from arithmetic alone. And the models keep getting better.

The Trust Argument is Dead

Yegge has little patience for programmers who say they “can’t trust” LLM-generated code:

“Can you trust code you yeeted over from Stack Overflow? NO! Can you trust code you copied from somewhere else in your code base? NO! Can you trust code you just now wrote carefully by hand, yourself? NOOOO!”

His point: software engineering exists because you can never trust code. That’s why we have reviewers, linters, tests, staging environments, and runbooks. The whole discipline assumes everything is broken until proven otherwise.

Adding one more source of untrustworthy code — a very fast, very cheap one — doesn’t change the fundamental equation.

The Death of Junior Developers

In June 2024, Yegge published “The death of the junior developer,” arguing that AI threatens entry-level positions across industries.

He shared a story about a 50-person law firm where senior partners realized they might not need junior associates anymore. LLMs produce research, briefs, and contracts that are “perfect” — with the catch that seniors must review everything. Same output, one human instead of two.

The same pattern applies to coding. Yegge and several colleagues now describe themselves as “reviewers” or “coaches” rather than programmers. They make the LLM do the work and review the output.

“Chat-first is the default, and writing by hand is our fallback plan. My quantum friend and I are both finding much less need for that fallback recently.”

Practical Implementation

Yegge’s current workflow at Sourcegraph:

Chat-first by default: Every coding task starts with describing it to an LLM. He uses Cody (Sourcegraph’s AI assistant) integrated into his IDE.

Emacs + IntelliJ hybrid: He switches between IntelliJ for Kotlin/Java work with AI completions, and Emacs for raw text manipulation when the IDE can’t keep up with fast typing.

Prompt craft as skill: Creating good prompts is becoming as important as writing good code. Gene Kim, his co-author on the “Vibe Coding” book, spends 45+ minutes crafting prompts for complex writing tasks.

Full context awareness: Modern coding assistants understand your entire codebase, build targets, and environment — not just the file you’re editing.

The Bigger Picture

Yegge’s career arc has tracked the evolution of developer tools, and his latest project takes the trend to its logical extreme.

“Gas Town,” his recent project, is what he calls a “vibe coding orchestrator”: built entirely through prompting, without Yegge ever reading the source code himself.

Key Takeaways

Stop waiting for permission: The models are good enough now. Engineers who adopt chat-first coding gain immediate productivity advantages.

Review everything: LLM output requires the same scrutiny as any other code. This isn’t new — it’s what engineering has always demanded.

Senior skills matter more: The ability to evaluate, correct, and guide AI-generated work becomes the core competency. Junior roles that existed purely to produce first drafts are at risk.

Context is everything: The best AI coding tools understand your full codebase. Local autocomplete is table stakes; real power comes from project-wide awareness.