TL;DR: Research from Stanford, Harvard, and Carnegie Mellon reveals an uncomfortable pattern: novices consistently gain far more from GenAI than experts do. Low-skilled workers see roughly 35% productivity gains; high-skilled workers gain almost nothing. The culprits? Two psychological mechanisms: the Curse of Knowledge (experts can’t articulate what they know) and Process-Identity Fusion (experts reject outputs that don’t match “how things should be done”). The winners aren’t domain specialists who learned AI, nor AI experts suffering from craft attachment. They’re a new archetype: GenAI Generalist Managers, who focus on what rather than how and treat AI as a collaborator rather than a threat.
A Stanford/MIT study of 5,000 customer service agents found that low-skilled workers became 35% more productive with AI assistance. High-skilled workers gained almost nothing.
A Harvard study at Boston Consulting Group found that top performers didn’t just fail to benefit from AI—their output quality actually declined.
Carnegie Mellon confirmed the pattern across industries: “experts or highly skilled workers may see productivity decline” with AI assistance.
The OECD’s meta-analysis synthesized studies across countries and reached the same conclusion: “Less-experienced or lower-skilled individuals tend to see the largest productivity gains.”
If expertise equals advantage, then AI should amplify that advantage. Instead, the relationship is inverted. Your decades of experience have become the very thing holding you back.
This isn’t a temporary glitch. It’s a predictable consequence of how expertise actually works.
The Expert’s Paradox
Two psychological mechanisms explain why domain experts struggle with AI when conventional wisdom says they should benefit most.
The Curse of Knowledge
Once you know something deeply, you can’t easily imagine not knowing it. Experts have internalized their domain knowledge so thoroughly that it becomes tacit—embedded in intuition, pattern recognition, and automatic responses rather than conscious, articulable rules.
This creates a fundamental problem when working with AI. These systems require explicit instruction. They can’t infer tacit knowledge or read between the lines.
When an expert writes a prompt, they unconsciously assume shared context: domain conventions, quality standards, implicit constraints, organizational norms. The AI lacks all of this context and produces output the expert finds generic, obvious, or wrong.
A novice can’t assume shared context because they themselves lack tacit knowledge. Everything they want from AI must be articulated explicitly—which means their prompts are necessarily more complete.
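A concrete contrast makes the mechanism visible. Both prompts below target the same deliverable; the scenario and wording are invented for illustration, not drawn from the studies above:

```text
EXPERT (assumes shared context):
"Write up the Q3 churn numbers."

NOVICE (articulates everything):
"Write a one-page summary of Q3 customer churn. Define churn as accounts
that did not renew within 30 days of contract end. Compare the Q3 rate
to Q2, list the top three cancellation reasons from support tickets, and
close with two concrete recommendations. Keep the language plain for a
non-technical audience."
```

The expert’s prompt leans on conventions the model can’t see; the novice’s prompt carries its context with it.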
The Stanford researchers captured this directly: “High-skilled workers may have less to gain from AI assistance precisely because AI recommendations capture the knowledge already embodied in their own behaviors.”
The expert concludes the AI “doesn’t understand” their domain. But the problem isn’t comprehension. It’s communication.
Process-Identity Fusion
The second mechanism cuts deeper. Experts don’t merely prefer their established processes—they define themselves through those processes.
Professional identity is constructed through years of mastering a specific way of working: the techniques, the craft, the accumulated knowledge of how things should be done. This creates what I call rigid what-how coupling: the outcomes experts seek become inseparable from the specific processes they believe should produce those outcomes.
When AI generates an output functionally equivalent to what the expert would have produced—but through an entirely different process—the expert doesn’t perceive equivalence. They perceive illegitimacy.
The objection isn’t “this output is inferior.” It’s “this is not the same thing because it wasn’t produced the right way.”
Consider an experienced artist with a distinctive brushstroke vocabulary developed over years of training. When AI produces a portrait achieving similar visual results through computation, the artist rejects it—not because it fails quality standards, but because accepting AI-generated work as equivalent would devalue the very process through which they built their professional identity.
The BCG study provides the evidence. Top performers didn’t just fail to benefit; their quality declined. That suggests active resistance rather than passive failure: high performers were likely “correcting” AI outputs back toward their preferred process, reintroducing their established way of working at the cost of both speed and quality.
The Novice Advantage Explained
These two mechanisms explain the consistent pattern:
- Novices lack tacit knowledge, so they articulate requests explicitly in forms AI can process
- Novices lack process-based identity, so they evaluate AI outputs on functional merits
- Novices accept “good enough via a different path” because they have no attachment to any particular path
- Novices judge outputs by what they accomplish, not how they were produced
The Expert’s Paradox is a predictable consequence of expertise itself: the qualities that made experts valuable in the pre-AI era become liabilities when working with AI systems that require explicit instruction and produce outputs through unfamiliar processes.
The Two-Dimensional Success Framework
If domain expertise doesn’t predict success, what does?
Success requires competence along two dimensions—not one:
| | AI-Naive | AI-Fluent |
|---|---|---|
| Process-Attached | OLD MANAGERS: Failing on both dimensions; increasingly displaced | TECHNICAL SPECIALISTS: Know AI but suffer Process-Identity Fusion; reject outputs that violate craft |
| Outcome-Oriented | STRATEGIC LEADERS: Have vision but can’t translate to AI-assisted execution | GENAI GENERALIST MANAGERS: Combine outcome focus with AI fluency—the success archetype |
Why One Dimension Isn’t Enough
Being outcome-oriented is necessary but not sufficient. Strategic leaders often have exactly this quality—vision, cross-domain thinking, focus on results rather than processes. But many strategic leaders lack AI fluency. They don’t understand what AI can and can’t do. They can’t prompt effectively. They remain dependent on intermediaries to translate their vision.
A 2025 Gartner survey found that only 7–24% of executive team members are viewed as “AI savvy” by their own CEOs. Research in Harvard Business Review found that 76% of executives believe their employees are enthusiastic about AI, while only 31% actually are: leaders overestimate enthusiasm by more than a factor of two.
Being AI-fluent is also necessary but not sufficient. Technical specialists understand AI capabilities deeply, but they still suffer from Process-Identity Fusion. They know the technology but can’t separate their professional identity from their established craft.
Based on research from Stanford, MIT, Harvard Business School, Carnegie Mellon, OECD, and Gartner.