Ethan Mollick's Four Rules for AI in Knowledge Work

Ethan Mollick is an associate professor at the Wharton School who studies how AI changes work, entrepreneurship, and education. He writes the newsletter One Useful Thing with over 350,000 subscribers and was named one of TIME’s 100 Most Influential People in AI for 2024.
Mollick (author of Co-Intelligence) and colleagues studied 758 consultants at BCG and found a critical pattern: AI isn't universally helpful. Success depends on whether a task is inside or outside the "Jagged Frontier." See The Jagged Frontier for a detailed breakdown of this research.
The Four Rules of Co-Intelligence
1. Always Invite AI to the Table
Start every task by trying AI first. The Jagged Frontier is unpredictable—tasks you assume AI can’t handle might be inside its capability boundary.
In practice:
- Open an AI tool before starting research
- Draft with AI, even if you rewrite everything
- Use AI for first-pass brainstorming on every project
2. Be the Human in the Loop
Never deploy AI output without review. In the BCG study, consultants who used AI on tasks outside its frontier were 19 percentage points less likely to produce correct solutions.
In practice:
- Verify every fact AI provides
- Rewrite AI drafts in your voice
- Test AI-generated code before deployment
- Compare AI summaries against source material
3. Treat AI Like a Person (But Tell It What Kind)
Give AI a role and expertise level. Role-playing produces better reasoning and output quality than generic prompts.
In practice:

```
You are an experienced product manager who has shipped
20+ features at B2B SaaS companies.

Review this PRD and identify gaps in the user research
section. Be specific about what questions remain unanswered.
```

```
You are a skeptical peer reviewer. Challenge the logic
in this argument and identify unsupported claims.
```
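The role-plus-task pattern above can be sketched as a tiny helper. This is an illustrative sketch, not anything from Mollick's text: `with_role` and its arguments are hypothetical names, and the persona strings are the article's own examples.

```python
# Hypothetical helper: prepend a persona to a task so every prompt
# follows rule 3 ("tell the AI what kind of person it is").

def with_role(role: str, task: str) -> str:
    """Combine a persona description and a task into one prompt string."""
    return f"You are {role}.\n\n{task}"

prompt = with_role(
    "a skeptical peer reviewer",
    "Challenge the logic in this argument and identify unsupported claims.",
)
print(prompt)
```

The value of the helper is consistency: every prompt starts with an explicit persona instead of a generic request.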
4. Assume This Is the Worst AI You’ll Ever Use
Build workflows assuming capabilities will expand. The Jagged Frontier shifts—tasks where AI fails now may succeed in 6 months.
In practice:
- Revisit tasks where AI failed 3 months ago
- Don’t write off entire categories (image generation, coding)
- Focus on learning prompting patterns, not memorizing model limits
The Jagged Frontier
AI capability boundaries are irregular. The BCG study measured this across 758 consultants:
| Metric | Inside Frontier | Outside Frontier |
|---|---|---|
| Tasks completed | +12.2% | — |
| Speed | +25.1% faster | — |
| Quality | +40% higher | Worse outcomes |
| Correctness | — | −19 pts |
You can’t know whether a task is inside or outside the frontier without testing. The boundary doesn’t follow intuitive patterns.
Inside the frontier: Marketing copy, standard code generation, summarizing meeting notes, brainstorming product features
Outside the frontier: Complex math proofs, novel strategic insights, real-time data tasks, decisions where failure is catastrophic
Two Working Modes: Centaur vs Cyborg
Centaur Approach
Clear division of labor. AI handles generation, human handles verification.
Example: AI generates draft → human rewrites for voice/accuracy → AI proofreads → human approves
Best for: High-stakes content (legal, medical), domain expertise tasks, workflows where errors cost money
Cyborg Approach
Deep integration. Human and AI iterate in real-time.
Example: Human starts outline → AI expands section 2 → human edits while AI drafts section 3 → human reworks intro based on new conclusion
Best for: Exploratory work, creative projects, rapid prototyping, learning new domains
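The Centaur division of labor can be made concrete as a pipeline with explicit human checkpoints. Everything below is a stand-in sketch: `ai_draft` and `ai_proofread` represent model calls, `human_edit` and `human_approve` represent the human steps, and none of these names come from Mollick or any real API.

```python
# Centaur workflow sketch: AI generates, human verifies, AI polishes,
# human signs off. Each function is a hypothetical placeholder.

def ai_draft(brief: str) -> str:
    # Stand-in for a model call that produces a first draft.
    return f"[AI draft based on: {brief}]"

def human_edit(draft: str) -> str:
    # Human checkpoint: rewrite for voice and accuracy.
    return draft.replace("[AI draft", "[Human-revised draft")

def ai_proofread(text: str) -> str:
    # Stand-in for a model call that checks grammar and consistency.
    return text

def human_approve(text: str) -> bool:
    # Final human checkpoint: nothing ships without human review.
    return "Human-revised" in text

draft = ai_draft("product launch email")
final = ai_proofread(human_edit(draft))
print(human_approve(final))
```

The design point is that the human steps are explicit stages in the pipeline, not optional afterthoughts, which is what makes this mode suitable for high-stakes content.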
Practical Prompting Techniques
Conversational Prompting
Explain context, ask questions, iterate based on output.
```
I'm writing a product launch email for a B2B audience.
Our product is project management software for remote teams.
Main benefit: async standups that don't require everyone
online at once.

Can you draft 3 subject line options? Make them specific,
not generic "introducing X" lines.
```

Follow-up prompt:

```
Option 2 is closest, but "asynchronous" is too technical.
Rewrite using simpler language for marketing managers.
```
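Mechanically, conversational prompting means keeping the whole exchange in one message history so each follow-up sees the earlier context. The sketch below uses the `{"role": ..., "content": ...}` shape common to chat APIs; the actual model call is omitted, and the assistant reply shown is an assumption for illustration.

```python
# Conversational prompting as a growing message list: the follow-up
# refines the earlier request instead of starting a new conversation.

messages = [
    {"role": "user", "content": (
        "I'm writing a product launch email for a B2B audience. "
        "Can you draft 3 subject line options?"
    )},
    # A real model reply would be appended here as {"role": "assistant", ...}.
    {"role": "user", "content": (
        "Option 2 is closest, but 'asynchronous' is too technical. "
        "Rewrite using simpler language for marketing managers."
    )},
]

# Sending `messages` (not just the last line) is what lets the model
# interpret "Option 2" correctly.
print(len(messages))
```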
Structured Prompting
For complex tasks, use this template:

```
Role: [Who is the AI?]
Task: [What specifically should it do?]
Steps: [How should it approach this?]
Constraints: [What to avoid?]
Personalization: [Adapt to what?]
```
Example:

```
Role: You are a senior data analyst. Your specialty is SaaS cohort analysis.

Task: Analyze this cohort retention data and identify the
biggest drop-off point.

Steps:
1. Calculate retention rates for each time period
2. Identify where the steepest decline occurs
3. Propose 3 hypotheses for why users churned at that point

Constraints:
- Don't make up data points
- Focus on actionable insights, not general observations

Personalization: Our product is used by marketing teams at
companies with 50-200 employees. Keep recommendations relevant
to this ICP.
```
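If you reuse the template often, it can be filled programmatically. This is a minimal sketch under my own assumptions: the `structured_prompt` function and its field names are hypothetical, simply mirroring the article's Role/Task/Steps/Constraints/Personalization sections.

```python
# Hypothetical builder for the structured-prompting template.

def structured_prompt(role, task, steps, constraints, personalization):
    """Assemble the five-part template into a single prompt string."""
    lines = [f"Role: {role}", f"Task: {task}", "Steps:"]
    lines += [f"{i}. {step}" for i, step in enumerate(steps, start=1)]
    lines.append("Constraints:")
    lines += [f"- {c}" for c in constraints]
    lines.append(f"Personalization: {personalization}")
    return "\n".join(lines)

print(structured_prompt(
    role="You are a senior data analyst specializing in SaaS cohort analysis.",
    task="Identify the biggest drop-off point in this retention data.",
    steps=["Calculate retention per period",
           "Find the steepest decline",
           "Propose 3 churn hypotheses"],
    constraints=["Don't make up data points",
                 "Focus on actionable insights"],
    personalization="Marketing teams at 50-200 person companies.",
))
```

Templating like this also makes review easier: you can see at a glance whether a prompt is missing constraints or personalization.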
When to Use AI
Best use cases: First drafts, brainstorming lists, code generation (standard functions), summarizing meeting notes, rewriting for different audiences, explaining concepts, email templates, competitive analysis, idea expansion, format conversion, tone adjustment, research synthesis, scenario planning, feedback generation, task breakdown
Key finding from the BCG study: average performers saw the largest gains (20-80% productivity improvements); top performers gained less, since they were already near the quality ceiling.
When NOT to Use AI
Don’t use AI when:
- Learning is the goal: AI prevents deep understanding
- High accuracy is non-negotiable: Legal contracts, medical advice, financial calculations (hallucination risk too high)
- Failure modes are unknown: You can’t evaluate correctness on new task types
- Real-time data required: Training cutoffs mean outdated information
- One error has severe consequences: Safety-critical systems, regulatory compliance
The core question: Can you recognize when AI is wrong? If no, don’t use it.
Implementation Strategy
Week 1: Pick 5 routine tasks. Try AI on each. Note which are inside/outside the frontier for your work.
Week 2: For inside-frontier tasks, choose Centaur or Cyborg mode, create prompt templates, set review checkpoints.
Week 3: Compare AI-assisted output to your previous work. Measure speed, quality, and error rate.
Ongoing: Monthly, revisit tasks where AI failed. Capabilities shift constantly.
Key Takeaways
- Test AI on every task type—boundaries are unpredictable
- Always review AI output; you are responsible for it
- Role-playing (giving AI a persona) improves output quality
- Centaur mode for high-stakes work, Cyborg for exploration
- Capabilities shift—revisit failed tasks monthly
- Don’t use AI when learning is the goal
Next: Building Your Personal AI Operating System