Somewhere right now, a company is launching an AI pilot. There’s a presentation deck, there’s excitement, and there’s a vague promise to “transform the business”. In about six months, that pilot might just disappear. Why?
Well, this happens constantly. MIT’s study on enterprise AI found that roughly 95% of generative AI pilots fail to deliver measurable financial returns. Not “underperform” – fail. The researchers analysed 300 public AI deployments and interviewed 350 employees across industries, and the pattern was consistent: companies launch AI projects with enthusiasm, then watch them stall before reaching production.
The problem usually has nothing to do with the technology itself. AI integration works fine when set up properly. The issue is the way companies try to use it: the planning, the rollout, how they manage change and whether they’re honest about what it actually takes to make it work.
If you’re in media, publishing or fintech (industries where AI integration is now table stakes), understanding these patterns might save you a lot of money and frustration.
The Pilot Trap
The most common AI integration mistake is treating pilots as progress. Companies run small experiments, see promising results in controlled conditions, then assume the hard part is over – but it rarely is. Getting from “this works in a lab” to “this runs smoothly every single day” takes far more work than most teams expect.
According to S&P Global’s report on AI in banking, only 5% of integrated AI pilots have been scaled into actual workflows. The gap between demonstration and deployment is where AI initiatives die. The reasons cited include models that can’t adapt to real-world complexity, organisational resistance and (perhaps most importantly) nobody clearly owning the outcome.
When everyone is responsible for an AI integration project, no one actually is. Without someone clearly owning it, pilots just become endless experiments that eat up budget but never deliver results.
INMA, which helps news publishers adopt new technology, found a related problem: companies often ask employees what they think about AI tools only after they’ve already bought and installed them. That turns the whole thing into a sales pitch instead of actual change management. People reject tools they weren’t asked about. Obvious when you think about it but ignored way too often.
Building the Platform Before Proving It Works
Another expensive mistake: trying to build a company-wide AI platform before you’ve proven AI works for one specific thing.
The urge to build a big reusable platform feels smart from a tech perspective, but it’s a disaster for actually getting results. Companies want to build frameworks that work everywhere, shared tools, fancy infrastructure. Technically, that makes sense. But from a business perspective, it spreads the impact so thin that nobody can see any real benefit.
Start narrow. Automate one boring process, speed up one approval workflow, prove it works and then expand. That creates real wins that justify spending more money and make people believe AI actually helps.
Publishing companies learned this lesson the hard way. The Chicago Sun-Times published a summer reading guide in May 2025 that listed 10 completely made-up books created by AI. The CEO later called it turning the paper into “the poster child of ‘What could go wrong with AI?’” These failures happen when companies use AI for important stuff before they’ve figured out basic quality checks.
The Data Question
Poor data quality accounts for 43% of AI project failures, and the problem is worse in industries with fragmented systems. Media companies typically run content across multiple CMSes, ad platforms, analytics tools and audience databases. Fintech companies deal with legacy banking infrastructure, regulatory systems and third-party integrations. Getting clean, consistent data out of these environments requires work that nobody finds exciting, and often it just doesn’t get done.
The World Economic Forum put it this way: “Imagine your customer relationship manager and enterprise resource planning system both contain the same contact. In one system, they are a customer; in the other, a supplier. The email addresses match but one record includes a middle initial and the other doesn’t. Which record is correct? Which system is the source of truth? And which version does your AI act on?”
This is what enterprise AI looks like in practice. Success means fixing your data problems before you worry about fancy AI models. The smartest AI in the world will give you garbage results if you feed it messy, incomplete, or wrong data.
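To make that concrete, here’s a minimal sketch of the kind of reconciliation check that has to run before an AI is allowed to act on a record like the one in the WEF example. The field names, the CRM/ERP exports and the “freshest record wins” rule are all hypothetical – the point is that somebody has to write these rules down explicitly before the model ever sees the data.

```python
from dataclasses import dataclass
from datetime import date
from typing import Optional

# Hypothetical contact records exported from two systems that disagree.
@dataclass
class ContactRecord:
    source: str            # e.g. "crm" or "erp"
    email: str
    full_name: str
    role: str              # "customer" or "supplier"
    last_updated: date

def reconcile(a: ContactRecord, b: ContactRecord) -> tuple[Optional[ContactRecord], list[str]]:
    """Return a single record the AI is allowed to act on, plus a list of
    conflicts. If the conflict is material (the systems disagree on the
    relationship), return no record at all and force a human decision."""
    conflicts = []
    if a.email.lower() != b.email.lower():
        conflicts.append(f"email mismatch: {a.email} vs {b.email}")
    if a.role != b.role:
        conflicts.append(f"role conflict: {a.source}={a.role}, {b.source}={b.role}")
    if a.full_name != b.full_name:
        conflicts.append(f"name variant: '{a.full_name}' vs '{b.full_name}'")

    # A role conflict means nobody has decided which system is the source
    # of truth, so the record is quarantined rather than handed to the AI.
    if any(c.startswith("role conflict") for c in conflicts):
        return None, conflicts

    # Otherwise apply an explicit, agreed rule: the freshest record wins.
    winner = a if a.last_updated >= b.last_updated else b
    return winner, conflicts

crm = ContactRecord("crm", "j.smith@example.com", "Jane Smith", "customer", date(2025, 3, 1))
erp = ContactRecord("erp", "j.smith@example.com", "Jane A. Smith", "supplier", date(2024, 11, 15))

record, issues = reconcile(crm, erp)
print("Usable record:", record)
print("Needs a human decision:", issues)
```

Notice that the interesting decision – which system is the source of truth when the two disagree – can’t be automated away. The code just forces the question to the surface instead of letting the AI guess.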
Letting AI Run Wild Without Supervision
The Federal Reserve Bank of Richmond published research showing that banks with higher AI intensity incur greater operational losses than those using less AI. The relationship was driven by three areas: external fraud, problems with customers and system failures.
This doesn’t mean AI is bad for banks. It means AI without proper oversight creates new problems and makes existing ones worse. The research found these issues were especially bad at banks with weak oversight.
Publishing shows the same pattern. When you let AI handle important editing work without enough human review, it makes terrible mistakes: deleting whole paragraphs, changing punctuation in ways that flip the meaning, or simply making things up.
World Economic Forum research found that 66% of employees trust AI output without checking it, and 56% say they’ve made mistakes at work because of it. Does that mean we should all avoid AI in our work? No – it means building verification into the process from the very start.
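Here’s what “verification built in” can look like in practice: a small gate that holds suspicious AI edits for a human instead of passing them through automatically. The checks and the threshold below are invented for illustration – a real newsroom or bank would define its own – but the shape is the same: the AI proposes, a rule decides whether a person has to look.

```python
import difflib

def requires_human_review(original: str, ai_edit: str,
                          max_changed_ratio: float = 0.2) -> list[str]:
    """Return reasons why an AI edit should wait for human sign-off.
    The checks and the 20% threshold are illustrative, not a recommendation."""
    reasons = []

    # Catch wholesale deletions or rewrites: how much of the original survived?
    matcher = difflib.SequenceMatcher(None, original, ai_edit)
    changed = sum(i2 - i1 for tag, i1, i2, _, _ in matcher.get_opcodes()
                  if tag in ("delete", "replace"))
    if original and changed / len(original) > max_changed_ratio:
        reasons.append("a large share of the original text was removed or rewritten")

    # A crude guard against meaning-flipping edits: negation appearing or vanishing.
    for token in (" not ", " no ", "n't"):
        if (token in original) != (token in ai_edit):
            reasons.append(f"negation changed around {token!r}")

    return reasons

original = "The board did not approve the merger."
ai_edit = "The board approved the merger."

flags = requires_human_review(original, ai_edit)
if flags:
    print("Hold for human review:", flags)
else:
    print("Safe to pass through automatically.")
```

The second check is deliberately crude. The point is that even a blunt rule catches the meaning-flipping edits described above before they reach readers.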
What Works, Then?
So how do you actually get real value from AI?
Start with a specific problem, not with the technology. Begin with “we waste 10 hours a week doing this thing”, and pick problems where you know exactly what AI will do, who it helps and how you’ll measure whether it worked (there’s a back-of-the-envelope sketch of that just below).
Say “yes” to: “Our support team spends half their time answering the same five questions over and over”.
Say “no” to: “We need an AI strategy”.
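That “how you’ll measure it” part is worth turning into actual numbers before the pilot starts. Every figure in the sketch below is a made-up placeholder – what matters is that the baseline and the success criterion are written down up front.

```python
# A baseline worth writing down before any pilot starts. Every number here
# is a hypothetical placeholder; the point is that the target is explicit.
agents = 6
hours_per_agent_per_week = 40
share_spent_on_repeat_questions = 0.5    # "half their time"
hourly_cost = 35                          # fully loaded, in your currency

baseline_hours = agents * hours_per_agent_per_week * share_spent_on_repeat_questions
baseline_cost_per_year = baseline_hours * hourly_cost * 52

# Success criterion agreed up front, measured the same way after the pilot:
target_hours_saved = baseline_hours * 0.40

print(f"Baseline: {baseline_hours:.0f} hours/week, ~{baseline_cost_per_year:,.0f} per year")
print(f"The pilot succeeds only if it saves >= {target_hours_saved:.0f} hours/week")
```

If you can’t fill in numbers like these, the problem isn’t specific enough yet.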
Assign clear ownership. Someone needs to be responsible for both setting it up and making sure it works. MIT’s research found this was one of the biggest differences between projects that succeeded and ones that failed.
Partner rather than build from scratch. Purchasing AI tools from specialized vendors succeeds about 67% of the time, while internal builds succeed only one-third as often. This matters especially in regulated industries like finance, where many companies are building their own systems even though buying works better. The urge to build everything yourself usually comes from wanting control or thinking you’re special (you are though). But unless AI is your main business, you’re better off buying from experts who’ve already figured out the hard stuff.
Invest in people, not just technology. AI projects succeed or fail based on whether your company is ready for them, not on how advanced the technology is. Many failures come down to bad planning and poor change management. That means training people, managing the change, explaining what’s changing and why, and involving the people who’ll use the tools in deciding how they work. Many companies spend 90% of their AI budget on technology and 10% on helping people adapt. Try to flip that ratio.
Take your time. Your productivity might actually drop at first when you start using AI. Major changes to how work gets done take time to pay off. Follow a careful pace: prove it works in one area over 3-6 months, write down what you learned, then expand step by step instead of trying to change everything at once.
The Bottom Line
Failures follow predictable patterns: trying to do too much at once, moving too fast, ignoring data problems, skipping human oversight and treating test projects as if they’re finished products.
The 95% failure rate sounds scary, but it’s actually helpful – you don’t have to make these expensive mistakes yourself. The patterns are clear and the solutions are known. Approach AI with patience, clear objectives and realistic expectations. The technology works. Is the organisation around it ready?