Chris Pedregal's Invisible AI Notepad

Chris Pedregal is a serial founder who has sold two companies to Google. He built Socratic, an AI homework tutor acquired in 2018, and Stack, a document scanner acquired by Google Drive in 2022. His current company Granola makes an AI notepad for meetings that has achieved 70%+ weekly user retention, rare for any consumer AI product.
His core insight: AI product moats come from UX execution, not model capabilities. Everyone runs on the same foundational models. What separates products people love from products that are just okay is how well you sweat the details.
Background
- CEO and co-founder of Granola (2022-present)
- Founded Socratic (acquired by Google, 2018)
- Founded Stack (acquired by Google Drive, 2022)
- Google Associate Product Manager (Gmail, Search, Maps, Google Now)
- Stanford CS graduate (2004-2007)
- Based in London
- Twitter | LinkedIn
How Granola Works
Granola captures meeting audio directly from your device. No bot joins your call. No awkward introductions.
Traditional AI note-taker:
┌─────────────────────────────────────┐
│ You + Participant + "Notetaker Bot" │
│ (visible to all)                    │
└─────────────────────────────────────┘
Granola:
┌─────────────────────────────────────┐
│ You + Participant                   │
│ (app runs silently on your Mac)     │
└─────────────────────────────────────┘
The workflow:
- Start meeting. Granola captures system audio and mic input
- Take sparse notes. Jot bullet points as you normally would
- End meeting. AI enhances your notes using the transcript
- Share. One click to export or query the transcript
Your rough bullets become complete meeting notes in seconds. The transcript stays searchable but hidden. Audio is never stored.
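The enhancement step can be pictured as a simple prompt-assembly function: the user's sparse bullets and the full transcript are combined into one request to the model. This is an illustrative sketch only; the function name and prompt wording are assumptions, not Granola's actual implementation.

```python
def build_enhancement_prompt(sparse_notes: list[str], transcript: str) -> str:
    """Combine the user's rough bullets with the meeting transcript
    so the model expands the notes without inventing content."""
    bullets = "\n".join(f"- {note}" for note in sparse_notes)
    return (
        "You are expanding a user's rough meeting notes.\n"
        "Keep their structure and emphasis; fill in details "
        "only from the transcript below.\n\n"
        f"User's notes:\n{bullets}\n\n"
        f"Transcript:\n{transcript}"
    )

# Example: two jotted bullets plus a (shortened) transcript.
prompt = build_enhancement_prompt(
    ["pricing: annual discount?", "follow up w/ legal"],
    "Alice: We could offer 20% off annual plans. Bob: Legal needs to review.",
)
```

The key design point is that the user's own bullets lead the prompt: they signal what mattered in the meeting, and the transcript serves only as supporting evidence.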
Five Rules for AI Products
Pedregal outlined his framework in interviews with Every and Behind the Craft:
| Rule | Implementation |
|---|---|
| Don’t solve temporary problems | Ignored 30-minute meeting limits; knew context windows would expand |
| Go narrow, go deep | Built custom echo cancellation for AirPod switching edge cases |
| Context is king | Treat the LLM like an intern’s first day: smart, but needs framing |
| Marginal cost is opportunity | Use expensive cutting-edge models while big companies can’t afford to |
| Build products with soul | Ship a coherent vision, not a feature list |
On temporary problems:
“Predicting the future is now part of your job. Building complex chunking and reconciliation features would have been a waste of effort since newer models would handle longer meetings natively.”
On context as UX:
“Providing proper context to AI systems is actually a UX problem, not just a technical one.”
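One way to read "context is a UX problem": the app should gather background for the model automatically instead of asking the user to paste it in. The sketch below shows what an assembled context might look like; every field name here is a hypothetical assumption, not Granola's schema.

```python
from dataclasses import dataclass

@dataclass
class MeetingContext:
    calendar_title: str      # pulled from the calendar event, not typed by the user
    attendees: list[str]     # who is on the call
    prior_notes: str         # last meeting with the same people
    user_bullets: list[str]  # what the user chose to jot down

def to_prompt_context(ctx: MeetingContext) -> str:
    """Frame the model like an intern on their first day: smart,
    but it needs to be told who is in the room and what came before."""
    return "\n".join([
        f"Meeting: {ctx.calendar_title}",
        f"Attendees: {', '.join(ctx.attendees)}",
        f"Previous notes: {ctx.prior_notes}",
        "User's own notes:",
        *(f"- {b}" for b in ctx.user_bullets),
    ])

ctx = MeetingContext(
    calendar_title="Q3 pricing sync",
    attendees=["Alice", "Bob"],
    prior_notes="Agreed to revisit annual plans.",
    user_bullets=["pricing: annual discount?"],
)
context_block = to_prompt_context(ctx)
```

The UX work is in the dataclass, not the join: deciding which signals to collect silently (calendar, history) versus which to leave in the user's hands (the bullets).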
The Honda vs. Ferrari Insight
Pedregal argues that scale works against big companies in the current AI moment:
“At best, companies like Google can provide their users with a Honda-level product experience. You can give each of your users a Ferrari-level product experience.”
Because inference costs scale linearly with users, a startup serving 10,000 users can afford models that Google cannot deploy to 2 billion. This window closes as models get cheaper, but right now it creates space for quality differentiation.
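The arithmetic behind the window is simple linear scaling. The per-user cost below is a made-up figure purely for illustration; only the scaling logic comes from the argument above.

```python
# Hypothetical per-user monthly inference cost for a frontier model.
COST_PER_USER_PER_MONTH = 2.00

startup_users = 10_000
google_scale_users = 2_000_000_000

# Inference cost scales linearly with users, unlike most software costs.
startup_bill = startup_users * COST_PER_USER_PER_MONTH       # $20,000/month
google_bill = google_scale_users * COST_PER_USER_PER_MONTH   # $4 billion/month

print(f"Startup: ${startup_bill:,.0f}/month")
print(f"At Google scale: ${google_bill:,.0f}/month")
```

A $20,000 monthly model bill is a line item for a funded startup; $4 billion a month is untenable even for Google, which is why the giant ships the Honda.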
Design Philosophy
Simple beats feature-rich. Granola looks like Apple Notes. No special UI, no dashboard, no meeting scheduler integration. Open it when you want. Close it when you don’t.
Immerse in feedback, design from principles. The team takes daily user calls and keeps screens in the office showing real-time user feedback. But when designing, they work from first principles rather than from feature request lists.
50% feature cut unlocked growth. Early Granola had more features. Cutting half of them made the core experience feel inevitable.
No audio storage. Audio is cached during meetings for transcription, then deleted. With no stored recordings, there is less privacy exposure and no awkward consent conversation.
Technical Architecture
| Component | Approach |
|---|---|
| Audio capture | System audio + microphone via native Mac APIs |
| Transcription | Real-time via third-party provider |
| Enhancement | GPT-4 class models for note expansion |
| Storage | Local notes + cloud sync (optional) |
| Speaker ID | “Me” and “Them” only (no diarization yet) |
The lack of speaker diarization is intentional for now. Real-time transcription models don’t support live speaker identification reliably. Pedregal expects this to be solved by model improvements, not custom engineering.
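The "Me"/"Them" split needs no diarization model at all: the microphone channel is you, and the system-audio channel is everyone else. A minimal sketch of that attribution, assuming hypothetical channel names:

```python
def label_segment(channel: str) -> str:
    """Attribute a transcript segment by its audio source:
    the local mic is "Me"; system audio (the remote side) is "Them"."""
    return "Me" if channel == "mic" else "Them"

# Illustrative segments tagged with their source channel.
segments = [("mic", "Let's start."), ("system", "Sounds good.")]
labeled = [(label_segment(ch), text) for ch, text in segments]
```

Distinguishing individual remote speakers within the "Them" channel is the part that would require real diarization, which is the piece Pedregal expects model progress to deliver.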
Key Takeaways
| Principle | Implementation |
|---|---|
| UX over models | Everyone runs the same LLMs; execution matters |
| Anticipate model progress | Don’t build workarounds for temporary limits |
| Edge cases define quality | AirPod switching, multi-channel audio, silence detection |
| Invisible AI | Best AI tools disappear into existing workflows |
| Small team advantage | Use expensive models while giants can’t |
Links
- Granola
- Twitter: @cjpedregal
- Interview: How to Build a Truly Useful AI Product
- Interview: The 5 Hidden Rules Behind Successful AI Products
- Podcast: Building Granola