In July, researchers at MIT published a study that became the tentpole talking point for the AI skepticism camp: 95% of businesses that tried using AI found zero value in it. Then in November, Upwork released a study showing that AI agents from OpenAI, Google, and Anthropic failed to complete many straightforward workplace tasks.

The AI doomers loved this. “See? It doesn’t work.” The AI boosters dismissed it. “The methodology was flawed.” Both reactions miss the point.

The technology works. The problem is how people deploy it.

The Wrong Starting Point

I’ve watched dozens of AI projects fail, at companies I’ve worked at and at companies I’ve watched from the outside. The failure pattern is almost always the same.

Someone in leadership reads an article about AI. They call a meeting. They say: “We need to be using AI.” A team gets formed. That team asks: “What should we build with AI?” They brainstorm. They pick something that sounds impressive. They build it. Nobody uses it.

The failure happened in the second sentence. “We need to be using AI” is the wrong starting point. The right starting point is: “We have a problem.”

The 5% that succeed share one trait: they started with a painful, expensive, well-understood business process and asked whether AI could make it cheaper or faster. Not cooler. Not more innovative. Cheaper or faster.

Three Patterns That Actually Work

Pattern 1: Automate the boring middle. Every business process has exciting parts (decision-making, creative work, customer interaction) and boring parts (data entry, formatting, reconciliation, report generation). The boring middle is where AI thrives because the cost of a mistake is low, the volume is high, and humans hate doing it.

I’ve seen this at every company I’ve worked at. At Wayfair, massive amounts of engineering time went to operational tasks that weren’t core product work. At Zipcar, fleet operations involved staggering volumes of manual data processing. At Vestmark, advisor time gets consumed by operational overhead. In every case, the boring middle is a goldmine for automation.
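The shape of boring-middle automation can be sketched in a few lines: let the model handle the high-volume cases and route anything it isn’t sure about to a person. This is a toy illustration, not code from any of the companies above — the keyword rules stand in for a real classifier, and the categories and threshold are invented.

```python
# "Boring middle" automation sketch: auto-categorize records, route
# low-confidence cases to a human queue. classify() is a keyword-based
# stand-in for a real ML model; all categories are hypothetical.

def classify(description):
    """Toy stand-in for a classifier: returns (category, confidence)."""
    rules = {"taxi": ("travel", 0.95), "hotel": ("travel", 0.90),
             "laptop": ("equipment", 0.92)}
    for keyword, result in rules.items():
        if keyword in description.lower():
            return result
    return ("unknown", 0.2)

def triage(records, threshold=0.8):
    """Split records into auto-processed and human-review queues."""
    auto, human = [], []
    for rec in records:
        category, confidence = classify(rec)
        if confidence >= threshold:
            auto.append((rec, category))
        else:
            human.append(rec)  # cheap to review; expensive to get wrong silently
    return auto, human

auto, human = triage(["Taxi to airport", "Team dinner", "New laptop"])
```

The threshold is the whole design: in the boring middle, a false “needs review” costs a minute of someone’s time, so you can set it conservatively and still capture most of the volume.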

Pattern 2: Augment the expert, don’t replace the expertise. The best AI features I’ve built don’t try to replace the human. They make the human faster. Surface relevant information when it’s needed. Flag anomalies that a person might miss. Draft a first version that someone can review and refine.

This only works if you understand the expert’s workflow deeply. You can’t build good augmentation without spending time watching how people actually work. Not how they say they work in a meeting. How they actually work at their desk.
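“Flag anomalies that a person might miss” is the easiest of these to make concrete. A minimal sketch, assuming a simple z-score rule (the threshold is an illustrative choice, not a recommendation): the code surfaces outliers for the expert to look at, and deliberately takes no action itself.

```python
# Augmentation sketch: flag unusual values for an expert rather than
# acting on them. The z-score threshold of 2.0 is an arbitrary
# illustrative choice.

import statistics

def flag_anomalies(values, z_threshold=2.0):
    """Return indices of values the expert should look at first."""
    mean = statistics.mean(values)
    stdev = statistics.pstdev(values)  # population stdev; 0 if all equal
    if stdev == 0:
        return []
    return [i for i, v in enumerate(values)
            if abs(v - mean) / stdev > z_threshold]

readings = [10, 11, 9, 10, 42, 10]
flagged = flag_anomalies(readings)
```

Note what the function doesn’t do: it doesn’t correct the value, suppress it, or alert a customer. The expert stays the decision-maker; the tool just reorders their attention.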

Pattern 3: Start with the error budget. Before building anything, ask: what happens when the AI gets it wrong? If the answer is “nothing much, a human catches it,” you have a great candidate for automation. If the answer is “a customer loses money” or “we violate a regulation,” you need much more careful design, and you need to map the failure modes before you write a line of code.
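The error-budget question can be written down as a triage rule. The cost tiers and mode names below are my own illustrative taxonomy, not a standard one — the point is that the answer to “what happens when the AI is wrong?” should mechanically determine how much oversight the deployment gets.

```python
# Error-budget triage sketch. The tier names and deployment modes are
# illustrative assumptions, not a standard taxonomy.

def deployment_mode(failure_cost):
    """Map the cost of a wrong AI output to the oversight it needs."""
    if failure_cost == "negligible":       # a human catches it downstream
        return "automate"
    if failure_cost == "recoverable":      # annoying but fixable
        return "automate with spot checks"
    if failure_cost == "customer-facing":  # money or trust at stake
        return "human reviews every output"
    # Regulatory or irreversible failures: design work comes first.
    return "map failure modes before writing code"
```

Run through a candidate process’s worst plausible failure before sprint planning, not after the demo.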

Why the Studies Are Directionally Useful

The MIT study’s methodology was narrow. 95% is a shocking number that probably overstates the problem. But the directional finding, that most enterprises are failing to extract value from AI, matches what I see across the industry.

The reason isn’t that AI doesn’t work. It’s that most organizations approach AI deployment with a technology-first mindset instead of a problem-first mindset. They buy tools before they define problems. They hire AI teams before they understand their own workflows. They measure success by model accuracy instead of business impact.

The Upwork study is similarly useful. AI agents failing at “straightforward workplace tasks” tells us that tasks which seem straightforward to humans often involve implicit context, judgment calls, and domain knowledge that agents don’t have. The fix isn’t better models (though those help). The fix is better task decomposition, better context management, and better understanding of where the human needs to stay in the loop.
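Task decomposition can be as simple as a list: break the “straightforward” task into steps and mark which ones carry the implicit context or judgment calls an agent lacks. The step names and assignments below are hypothetical examples, not drawn from the Upwork study.

```python
# Task decomposition sketch: a "straightforward" task split into steps,
# with explicit human/AI ownership. All step names are hypothetical.

WORKFLOW = [
    {"step": "extract fields from invoice",   "owner": "ai"},
    {"step": "match against purchase order",  "owner": "ai"},
    {"step": "resolve ambiguous vendor names", "owner": "human"},  # implicit context
    {"step": "approve payment",               "owner": "human"},  # judgment call
]

def next_human_checkpoint(workflow):
    """Find the first step where a person must stay in the loop."""
    for i, step in enumerate(workflow):
        if step["owner"] == "human":
            return i, step["step"]
    return None
```

Writing the decomposition down forces the conversation the failed projects skipped: which steps are actually automatable, and where does the human checkpoint sit?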

The Formula

If I had to reduce successful AI deployment to a formula:

Start with a problem that costs real money. Understand the workflow at the task level. Identify the tasks where AI adds value and the tasks where humans remain essential. Build the AI into the workflow, not as a separate tool. Measure business outcomes, not model metrics.
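The first step — start with a problem that costs real money — implies a ranking. A hedged sketch of that prioritization, with invented weights and example numbers: score each candidate process by the money at stake, discounted by the risk that errors eat the savings.

```python
# Prioritization sketch for the formula above. The scoring model and
# all example numbers are invented for illustration.

def score(process):
    """Annual labor cost saved if automated, discounted by error risk."""
    savings = process["hours_per_week"] * 52 * process["hourly_cost"]
    return savings * (1 - process["error_risk"])

candidates = [
    {"name": "report generation", "hours_per_week": 20,
     "hourly_cost": 60,  "error_risk": 0.1},
    {"name": "contract review",   "hours_per_week": 5,
     "hourly_cost": 150, "error_risk": 0.7},
]

ranked = sorted(candidates, key=score, reverse=True)
```

Even this crude model encodes the thesis: a boring, high-volume, low-risk process beats an impressive-sounding, high-risk one on business impact.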

It’s straightforward. It’s also less exciting than announcing an “AI transformation initiative” and buying a platform license. But the straightforward approach is the one that works.

The 95% who failed didn’t fail because AI is overhyped. They failed because they started with the hype instead of the problem.