The Jagged Frontier

AI isn’t uniformly good or bad. It’s jagged.

Ethan Mollick and his coauthors coined the term “jagged frontier” in “Navigating the Jagged Technological Frontier,” a 2023 Harvard Business School working paper produced with Boston Consulting Group. They tested 758 consultants, about 7% of BCG’s workforce, on realistic consulting tasks. The results surprised everyone.

The Study Results

Consultants using GPT-4:

  - completed 12.2% more tasks on average
  - finished tasks 25.1% faster
  - produced work rated more than 40% higher in quality

But here’s where it gets interesting. On one task deliberately designed to exploit AI blind spots, consultants without AI outperformed those with it. Unaided consultants got the right answer 84% of the time; consultants with AI dropped to 60-70%.

The frontier isn’t a clean line. It’s jagged.

Why Jagged?

Picture a fortress wall with towers jutting outward and recesses cutting inward. Some tasks that seem equally hard sit on opposite sides of that wall.

Inside the frontier (AI excels):

  - drafting and rewriting prose
  - brainstorming and idea generation
  - synthesizing familiar material into summaries

Outside the frontier (AI struggles):

  - precise counting, arithmetic, and strict formatting
  - analysis that combines several data sources in unfamiliar ways
  - claims that must be verified rather than merely plausible

The wall is invisible. You only discover it through use.

The Skill Leveling Effect

Here’s the most disruptive finding: AI raises everyone to near-expert level.

The lowest-performing consultants saw a 43% boost in performance. Top performers improved too, but far less dramatically (around 17%). This mirrors what happened when steam shovels replaced manual digging: your skill with a shovel became irrelevant.

For knowledge work, this means:

  - baseline output quality stops being a differentiator
  - junior staff with AI can match the output of unaided seniors
  - judgment, verification, and taste become the scarce skills

Two Ways to Work: Centaurs vs Cyborgs

Mollick identified two successful patterns for navigating the frontier:

Centaurs maintain a clear boundary between human and AI work. They strategically delegate. Human torso, horse body. Distinct separation.

Human: Decide on statistical approach
AI: Generate the graphs
Human: Interpret results
AI: Format the report

Cyborgs blend human and AI throughout. No clear handoff point. They start sentences for AI to complete. They revise AI output, then ask for more. Constant interweaving.

Human: Start paragraph about...
AI: ...completes it
Human: Adjusts tone, asks for alternatives
AI: Provides variations
Human: Combines pieces, continues...

Both approaches beat pure human or pure AI work. The failure mode is “falling asleep at the wheel”: accepting AI output without verification.

Mapping Your Frontier

The jagged frontier differs by domain. What works for consulting may not match your field. You need to map your own.

Run experiments (a minimal logging sketch follows the steps):

  1. Try the task with AI
  2. Try without
  3. Compare quality, speed, correctness
  4. Note where AI surprised you (both ways)
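To make the comparison in steps 1-3 honest, record trials as you run them instead of trusting memory. Here is a minimal sketch in Python; the Trial fields, the log_trial helper, and the frontier_log.csv filename are illustrative choices, not anything from the study:

import csv
from dataclasses import dataclass, asdict
from pathlib import Path

@dataclass
class Trial:
    task: str          # e.g. "first draft email"
    used_ai: bool      # run the same task with and without AI
    minutes: float     # wall-clock time to "done"
    quality: int       # your own 1-5 rating, assigned afterward
    surprise: str      # where AI beat or missed your expectation

def log_trial(trial: Trial, path: str = "frontier_log.csv") -> None:
    """Append one trial so AI and no-AI runs can be compared later."""
    file = Path(path)
    is_new = not file.exists()
    with file.open("a", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=list(asdict(trial)))
        if is_new:
            writer.writeheader()
        writer.writerow(asdict(trial))

log_trial(Trial("complex debugging", used_ai=True, minutes=45,
                quality=2, surprise="invented a plausible but fake API"))

A few weeks of entries gives you the raw material for the map below.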

Build a personal map:

Task                    | AI Performance | Notes
------------------------|----------------|------------------
First draft emails      | Excellent      | Saves 80% time
Complex debugging       | Mediocre       | Hallucinates APIs
Research synthesis      | Good           | Verify all claims
Brainstorming           | Excellent      | More creative than me
Precise formatting      | Poor           | Loses count, drifts

Update this map as models improve. The frontier expands.
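If you keep the map as data rather than a static table, updates are cheap and stale entries are easy to spot. One way to do it, sketched below; the ratings, dates, and the 90-day re-test threshold are all placeholder assumptions:

from datetime import date

# The table above as data, with test dates so stale entries stand out.
frontier_map = {
    "first draft emails": {"rating": "excellent", "tested": date(2024, 1, 10),
                           "notes": "saves ~80% of drafting time"},
    "complex debugging":  {"rating": "mediocre",  "tested": date(2024, 1, 12),
                           "notes": "hallucinates APIs; verify every call"},
    "precise formatting": {"rating": "poor",      "tested": date(2024, 2, 3),
                           "notes": "loses count, drifts from the spec"},
}

def needs_retest(entry: dict, max_age_days: int = 90) -> bool:
    """Flag entries old enough that a newer model may have moved the frontier."""
    return (date.today() - entry["tested"]).days > max_age_days

for task, entry in frontier_map.items():
    if needs_retest(entry):
        print(f"re-test: {task} (last checked {entry['tested']})")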

What You Can Steal

  1. Stop asking “Can AI do this?” Ask “Where exactly does AI fail at this?” The frontier isn’t binary.

  2. Use the three-layer workflow to catch frontier crossings: Spec (human) → Implement (AI) → Verify (human). A minimal sketch follows this list.

  3. Watch for skill leveling in your team. Junior members with AI may match seniors without it.

  4. Choose your integration style. Centaur if you want clear accountability. Cyborg if you want maximum throughput.

  5. Map aggressively. Every surprising AI failure reveals frontier position. Every surprising success does too.
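Point 2’s workflow can be made concrete as a loop that refuses to accept unverified output. This is only a sketch under obvious assumptions: three_layer, fake_model, and the two checks are illustrative stand-ins, not a real verification suite.

from typing import Callable

def three_layer(spec: str,
                implement: Callable[[str], str],
                checks: list[Callable[[str], bool]],
                max_rounds: int = 3) -> str:
    """Spec (human) -> Implement (AI) -> Verify (human-written checks)."""
    for _ in range(max_rounds):
        draft = implement(spec)                        # AI does the work
        if all(check(draft) for check in checks):      # human-defined gate
            return draft                               # verified, accept
        spec += "\nThe previous draft failed verification; fix it and retry."
    raise RuntimeError("no draft passed verification; a human takes over")

# Illustrative stand-ins: swap in your real model call and your real checks.
def fake_model(spec: str) -> str:
    return "Draft summary of the findings. Source: study PDF."

checks = [lambda d: len(d.split()) < 150,      # stays short
          lambda d: "source:" in d.lower()]    # cites something

print(three_layer("Summarize the study in under 150 words, citing a source.",
                  fake_model, checks))

The human work lives in the spec and in the checks; the AI only ever fills the middle layer.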

The researchers called it “dancing on the jagged frontier.” That’s exactly what effective AI use looks like: constant movement, constant probing, never assuming you know where the edge lies.

Next: Three-Layer Workflow for a practical framework to work within the frontier.