
AI Doesn't Flatten Skill. It Amplifies It.

Notes on cognitive surrender, taste, and the structures that turn AI into a real multiplier.

Apr 27, 2026 · 6 min read

Only 5% of enterprise generative AI pilots are working. The other 95% delivered zero measurable return on roughly $30–40 billion in US enterprise spend, per a 2025 MIT report. The 5% isn’t random. Organizations that shipped well before AI are shipping better with it. Organizations that struggled with quality, judgment, and decision-making before AI have gotten worse, in some cases measurably. The same split shows up inside teams. A 2025 METR randomized trial found experienced developers using AI assistance felt 20% faster while actually running 19% slower. Feeling and reality have come apart.

I think we’re watching cognitive surrender at scale. AI doesn’t change what an organization is good at. It amplifies it. Strong judgment compounds. Weak judgment compounds too, in the wrong direction.

A recent Wharton paper from Shaw and Nave draws a line between two ways people use AI. Cognitive offloading is when you delegate parts of a problem during deliberation. Cognitive surrender is when you accept generated reasoning without evaluating it. Surrender bypasses the deliberation. It looks like work. It produces output. It does not produce thinking.

I see a version of this scene almost weekly. Someone hands me a fourteen-page document the model wrote in five minutes. The framing is wrong. The assumptions haven’t been tested. The acceptance criteria don’t match what users actually need. But it looks like a document. The person delivering it looks productive. They’ve done the part of the work the model could do without them, and not much else.

We pay people for expertise, not output. The model produces output for a fraction of what a salary costs. The job is to apply expertise to what it produced: taste, judgment, context, the corrections only an expert would catch. Skip that step and we’ve paid a full salary for what the model did on its own. Bad context also produces bad output that gets re-run, re-prompted, or shipped and reverted. The companies struggling with AI outcomes are often struggling with AI judgment first.

Surrender takes two forms. In your own craft, it’s a choice. Outside it, it’s the default. Polished output looks competent in domains where you can’t see what’s missing. A PM generating engineering plans. An engineer generating product specs. A non-designer producing a UI. To the person who made it, it looks fine. To anyone with ten years inside that craft, the gaps are obvious. Taste is what tells you something’s wrong before you can articulate why. You don’t have it yet for crafts you haven’t lived inside.

The popular narrative is that AI flattens skill. The pattern points the other way. AI accelerates people who already have taste. It doesn’t manufacture it. The floor moved up. The ceiling moved up further. The right move inside an AI-accelerated team is depth, not breadth. I want PMs doing sharper product management so my engineers solve better problems. I don’t want a PM generating code that costs an engineer more time to review than to write. I want designers shipping interfaces that work for real users, faster. Each function compounds when AI gets pointed inside its own craft, not across someone else’s. The work gets harder, more specific, and more satisfying.

I keep watching this happen across every function I work with, not just engineering. The principle isn’t only about software. Every function has its own taste, judgment, and context. Sales knows the actual buying motion. Customer service knows where the product breaks. Operations knows the constraints engineering will rediscover the hard way. Surrender is possible in every one of them, and so is real amplification. The move is the same in each: AI-enable the function inside its craft, don’t outsource the craft to AI.

At the org level, surrender becomes a tax on the strong. They don’t just produce more in this environment. They spend their time cleaning up plausible-but-shallow output from people who surrendered. They become force multipliers for everyone except themselves. That’s how the best people end up more burnt out than ever. Organizations that don’t actively guard against it lower their own standards by accident. Talent density matters more than it did. The gap between top and average widens, inside companies and across them.

On my team we’ve been chasing this shape: tools that make judgment more efficient instead of replacing it. Build context up across phases instead of restarting at each handoff. Make tradeoffs visible. Put the right framing in front of the right person at the right gate. Validation prompts should be adversarial: “what’s wrong with this,” not “is this good.” The cross-functional version is where it really compounds. I’ve been calling this the context snowball. As work moves from product to design to engineering, context accumulates with each phase, so the next decision is sharper, the spec is more accurate, and the output stops looking like a confident hallucination. The wrong instinct is to remove the human from the loop. The right one is to shape the loop so the human’s judgment compounds.

[Diagram — the context snowball: product (user need, success criteria) → design (product shape, interaction taste) → engineering (design constraints, edge cases); context accumulates with each phase.]

Each function deposits its expertise into a shared context that grows as it rolls forward. By the time work reaches engineering, the spec carries everyone's judgment, not just one person's.
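The snowball can be pictured as a simple accumulating structure. Here's a minimal Python sketch; the phase names, fields, and values are hypothetical illustrations, not a real implementation:

```python
from dataclasses import dataclass, field

@dataclass
class ContextSnowball:
    """Accumulates each function's judgment as work rolls forward.
    Illustrative sketch only; phases and fields are assumptions."""
    entries: list = field(default_factory=list)

    def deposit(self, phase: str, judgment: dict) -> "ContextSnowball":
        # Each phase adds its context instead of restarting from scratch.
        self.entries.append({"phase": phase, **judgment})
        return self

    def brief(self) -> str:
        # The downstream phase reads everything deposited so far.
        return "\n".join(
            f"[{e['phase']}] "
            + ", ".join(f"{k}={v}" for k, v in e.items() if k != "phase")
            for e in self.entries
        )

# Hypothetical example: by the time engineering reads the brief,
# it carries product's and design's judgment, not a cold spec.
snowball = (
    ContextSnowball()
    .deposit("product", {"user_need": "bulk export", "success": "p95 < 2s"})
    .deposit("design", {"shape": "async job + email link", "edge": "10M-row files"})
)
print(snowball.brief())
```

The point of the sketch is the handoff discipline, not the data structure: each phase appends rather than replaces, so no judgment is lost between functions.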

AI success is a people problem and a collaboration problem. Quality rests on the taste, context, and experience of the operators, and on whether they can compound that judgment together across functions. The organizations that win the AI era won’t be the ones that replaced their people. They’ll be the ones that hired for taste, expected people to own what they ship regardless of what produced the first draft, and let each function go deep in its own craft, with tools and structures around them that compound collaboration instead of removing it.

To use AI well, you have to work well together. AI raises the stakes on that. It doesn’t lower them.

The model is a multiplier. Multiply zero and you still get zero. The work is what you bring to it. The rest is surrender.