Daksh Gupta's Independent Code Review Thesis

Daksh Gupta is the CEO and co-founder of Greptile, an AI code review agent that reviews 500+ million lines of code monthly for teams including Brex, Substack, and PostHog. He started the company after graduating from Georgia Tech in 2023, went through Y Combinator W24, and raised $25M from Benchmark Capital in 2025.

Gupta’s central thesis: the tool that generates your code should never be the same tool that reviews it. Just as auditors must stay independent of consulting work for the same client, AI code reviewers must stay independent of code generators if they are to catch the bugs humans increasingly miss as vibe coding becomes the default.

The Independent Auditor Argument

Gupta draws a direct parallel to the Enron scandal. Arthur Andersen served as both auditor and consultant for Enron, creating a conflict of interest that enabled fraud rather than preventing it. The Sarbanes-Oxley Act of 2002 mandated auditor independence for exactly this reason.

The same logic applies to AI code review:

| Problem | Why Independence Matters |
|---|---|
| Shared failure modes | Same company’s tools share architecture and retrieval patterns |
| Incentive misalignment | Business pressure to minimize critical feedback on own output |
| Correlated errors | Like AWS CloudWatch failing during AWS outages |

Greptile deliberately refuses to add code-generation features despite user requests. It stays review-only to maintain credibility as a neutral party.

AI Has Reduced Code Quality

This claim sounds counterintuitive, but Gupta argues that AI has reduced the average quality of code that good engineers write. Not because models produce worse code than humans, but because generation now outpaces review: engineers merge more code than they can carefully read, and the bugs AI introduces don't match the patterns human reviewers are trained to catch.

In testing, Claude Sonnet identified 32 of 209 difficult bugs; skilled human engineers found only 5-7.

Greptile v3: Agentic Code Review

The v3 architecture represents a shift from linear workflows to recursive exploration.

v2 approach:

Receive PR → Search once → Generate feedback → Done

v3 approach:

Receive PR → Search → Follow dependencies → Challenge hypothesis → Search again → Synthesize

When reviewing a PR that updates calculateInvoiceTotal(), the agent works through steps like these (a hypothetical code sketch follows the list):

  1. Search for related implementations
  2. Discover nested call in generateMonthlyStatement()
  3. Find that applyProration() uses outdated logic
  4. Check git history for context
  5. Generate targeted feedback
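
To make the multi-hop trail concrete, here is a hypothetical slice of code under review. The function names match the steps above, but the double-taxation bug and the surrounding code are invented for illustration, not taken from any real PR.

```typescript
// PR under review: calculateInvoiceTotal now returns a tax-inclusive total.
function calculateInvoiceTotal(lineItems: number[]): number {
  const subtotal = lineItems.reduce((sum, item) => sum + item, 0);
  return subtotal * 1.08; // NEW: sales tax is applied here
}

// One hop away: the monthly statement still routes the result through proration.
function generateMonthlyStatement(
  lineItems: number[],
  daysUsed: number,
  daysInMonth: number,
): number {
  const total = calculateInvoiceTotal(lineItems);
  return applyProration(total, daysUsed / daysInMonth);
}

// Two hops away: proration still assumes a pre-tax figure and adds tax itself,
// so tax is now applied twice. This is the outdated logic that a single pass
// over the diff alone would miss.
function applyProration(preTaxTotal: number, fraction: number): number {
  return preTaxTotal * fraction * 1.08;
}
```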

The agent runs in a loop with access to codebase search and learned rules. This allows multi-hop dependency tracking that linear approaches miss.
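
A minimal sketch of such a loop is below. Everything in it (the CodebaseSearch and ReviewModel interfaces, the action shape, the hop budget) is an assumption made for illustration; it shows the recursive search-decide-search pattern described above, not Greptile's actual implementation.

```typescript
interface SearchHit {
  file: string;
  snippet: string;
}

interface CodebaseSearch {
  // Semantic or symbol search over the repository (assumed interface).
  query(text: string): Promise<SearchHit[]>;
}

type AgentAction =
  | { kind: "search"; query: string }       // follow another dependency
  | { kind: "finish"; comments: string[] }; // synthesize final feedback

interface ReviewModel {
  // Given the diff, learned team rules, and everything gathered so far,
  // decide whether to keep exploring or to write the review.
  decide(diff: string, rules: string, context: SearchHit[]): Promise<AgentAction>;
}

async function reviewPullRequest(
  diff: string,
  rules: string,
  search: CodebaseSearch,
  model: ReviewModel,
  maxHops = 8, // cap exploration so the loop always terminates
): Promise<string[]> {
  const gathered: SearchHit[] = [];

  for (let hop = 0; hop < maxHops; hop++) {
    const action = await model.decide(diff, rules, gathered);
    if (action.kind === "finish") {
      return action.comments;
    }
    // A linear (v2-style) pipeline stops after one search; looping is what
    // lets the reviewer chase callees of callees, like applyProration
    // behind generateMonthlyStatement.
    gathered.push(...(await search.query(action.query)));
  }

  // Hop budget exhausted: ask for a final synthesis from the collected context.
  const final = await model.decide(diff, rules, gathered);
  return final.kind === "finish" ? final.comments : [];
}
```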

Results:

| Metric | Improvement |
|---|---|
| Upvote/downvote ratio | +256% (1.44 to 5.13) |
| Acceptance rate | +70.5% |
| Inference costs | -75% via better caching |

CLAUDE.md Integration

Greptile auto-detects configuration files like CLAUDE.md, .cursor/rules, and similar patterns. It pulls them into context to make PR feedback match team standards.

This matters for teams using Claude Code or Cursor. Your project instructions inform the reviewer, not just the generator.
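
As a rough sketch of what that detection step could look like, the snippet below gathers a few instruction files into a single context string. The file names come from the ones mentioned above (plus AGENTS.md as an extra illustrative candidate); the loading logic itself is an assumption for illustration, not Greptile's implementation.

```typescript
import { promises as fs } from "node:fs";
import { join } from "node:path";

// Candidate instruction files to look for in the repository root.
const CANDIDATE_FILES = ["CLAUDE.md", ".cursor/rules", "AGENTS.md"];

// Concatenate whichever instruction files exist so they can be prepended
// to the reviewer's context alongside the diff.
async function loadTeamStandards(repoRoot: string): Promise<string> {
  const sections: string[] = [];
  for (const rel of CANDIDATE_FILES) {
    const path = join(repoRoot, rel);
    try {
      const stat = await fs.stat(path);
      // .cursor/rules may be a directory of rule files in newer Cursor
      // setups; a real implementation would walk it, this sketch skips it.
      if (!stat.isFile()) continue;
      sections.push(`## ${rel}\n\n${await fs.readFile(path, "utf8")}`);
    } catch {
      continue; // candidate not present in this repo
    }
  }
  return sections.join("\n\n");
}
```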

Key Takeaways

| Principle | Implementation |
|---|---|
| Separate generation from review | Use different tools for writing and checking code |
| Trust AI for bug detection | Humans miss AI-generated bug patterns |
| Review can't keep pace with generation | Automate the review bottleneck |
| Recursive > linear | Let the agent follow dependencies |
| Context from project files | CLAUDE.md and similar configs inform review |

Next: Jesse Vincent’s Superpowers Framework

Topics: ai-coding code-review startup workflow