AI in Engineering
Jon Filson
October 3, 2025
6 min read

MIT Nanda Report: How manufacturers and engineers can adopt AI in a way that works

Only 5% of firms succeed with AI, an MIT report shows. Who wins? For manufacturers and engineers, success comes from embedding AI where it learns and compounds value.

When MIT’s State of AI in Business 2025 report was published recently, it made a claim that instantly shook the business world: despite $30–40 billion invested in generative AI, 95% of organizations are seeing no measurable return.

The idea at the core of the report: AI adoption effort is high, but transformation is low. The report calls this gulf the “GenAI Divide.”

But the real lesson we saw? It’s working for some. Our ears perked up when we read that startups like CoLab are doing well in this ecosystem, especially when they partner with larger companies:

“Some large companies’ pilots and younger startups are really excelling with generative AI … it’s because they pick one pain point, execute well, and partner smartly with companies who use their tools.”
— Aditya Challapally, lead author of the report, in Fortune magazine

For CoLab, this report and the debate around it highlight a truth we see every day: AI is not failing, but it is failing to deliver when it’s treated as a surface-level add-on rather than embedded into the core of operations.

In this article we want to examine the report: what it found, the fallout, and what lessons it has to offer. While heartened by what we read, we aren’t responding with a hot take. We wanted to gauge the reaction and listen to what credible thought leaders had to say first, so we could offer a considered response.

What the MIT Nanda report found

The research team behind the MIT report reviewed over 300 public AI implementations, interviewed 52 organizations, and surveyed 153 senior leaders. The findings reveal:

  • High adoption, low achievement: Over 80% of firms have piloted tools like ChatGPT or Copilot, but only 5% of custom AI pilots ever reach production.
  • The wrong priorities: Budgets skew heavily toward sales and marketing, where ROI is visible but shallow, while back-office automation—where ROI is stronger—remains underfunded.
  • The learning gap: Most AI tools fail because they don’t learn, adapt, or remember. As the report puts it: “Most GenAI systems do not retain feedback, adapt to context, or improve over time.”
  • Shadow AI: Employees are informally adopting consumer tools like ChatGPT, often without IT approval, creating security and IP risks.
  • Partnerships matter: External partnerships succeed twice as often as internal builds.

The report’s conclusion is blunt: “The core barrier to scaling is not infrastructure, regulation, or talent. It is learning.”

Source: "The GenAI Divide: State of AI in Business 2025," MIT NANDA, July 2025.

Support, criticism and context for the MIT Nanda report

The findings drew immediate attention, but support wasn’t unanimous. Some applauded the report:

“As researchers who study AI and teach about AI transformation and technology, we believe that many leaders are making the same mistake they made a decade earlier with digital transformation: encouraging experimentation, which is good, but falling into the trap of letting experimentation run wild, which is counterproductive.” 
— Harvard Business Review, Beware the AI Experimentation Trap

But critics questioned the methodology. Futuriom argued:

“We aren't AI cheerleader purists—there are certainly many problematic areas of AI as well as investment patterns that warrant bubble fears—but the MIT NANDA report paints an irresponsible and unfounded picture of what's happening in Enterprise AI.” 
— Futuriom, Why We Don't Believe MIT NANDA's Weird AI Study

Wharton’s Kevin Werbach noted:

“The fact that this report fails to demonstrate that most generative AI deployments fail does not, of course, mean they are successful. There are good reasons to wonder whether generative AI is creating returns to justify the massive level of investment. But it's not going to be a black and white matter.”

This mix of reactions points to where we really are in the AI adoption process: we’re in the “messy middle.” As Dr. Alexander Korogodsky of the University of Miami observed on LinkedIn:

“This isn’t failure, it’s the messy middle of adoption. The real divide is between those who adapt workflows for AI, and those who expect plug-and-play miracles.”

Where we align at CoLab on AI adoption

At CoLab, we see both sides as partially correct. The MIT report may overstate failure rates, and like many critics we would love to see the full underlying data (its absence is a common criticism of the report).

But its core insight resonates: most companies are not embedding AI where it counts. We see this every day, and the broader picture supports the point: no one is suggesting that 50 per cent of companies have successfully adopted AI, or 40 per cent, or 25, or even 10. Despite massive interest, something is holding industry back, not just in manufacturing but across the spectrum.

Like the MIT Nanda report, we see consistent patterns across businesses:

  • Adoption attempts are too sweeping and lack focus.
  • Employees are left to experiment on their own, creating inefficiencies and risks.
  • AI is siloed in functions like sales or marketing, rather than integrated into core workflows.
  • Quick wins are prioritized over long-term transformation.

When does it work? Success comes from embedding AI into the core of a business’s processes, not treating it as an opportunistic add-on or a string of casual experiments.

At the same time, companies often cannot overhaul their entire ecosystem to introduce AI; a change of that size is typically too daunting and difficult.

The MIT report supports this perspective: the future is not just about copilots or static tools, it argues, but about interconnected systems that learn, remember, and coordinate across business functions.

Ultimately, successful AI adoptions do not disrupt how companies work; they enhance existing processes. AI components should be built into the business’s core function and spread from there, rather than arriving through massive, sweeping change or scattered one-off attempts. We have observed that companies have the most success with this “insert at the core, then expand” approach.

Key lessons for manufacturers, engineers and product development

The MIT study makes clear that back-office and workflow-specific automation deliver the strongest ROI. This matches what we’ve seen (and why we have built out CoLab’s offerings as we have).

Right now, tools succeed when they directly address repetitive, error-prone, or resource-intensive processes at the core of the business. At the simplest level, AI typically helps companies run better rather than chase quick financial gains; the wins come from reduced costs and substantial development and efficiency savings.

At CoLab, we have applied these principles, and our case studies validate MIT’s themes:

  • Design review as an entry point: AI can apply standards, identify repeated errors, and “remember” like a skilled engineer. This is a high-value, low-disruption workflow, the kind the MIT report shows is most effective.
  • AutoReview: Our system aligns with the report’s call for tools that retain feedback and adapt over time. AutoReview insists on a human decision behind every AI action: once it catches an issue, you have to give the AI’s feedback a thumbs up or down, and it must cite the specific standard it is flagging against. This lets anyone see the evidence before making a decision and reinforces continuous learning, on both the AI and the human side of engineering (a simplified sketch of this flow follows this list).
  • The Design Engagement System: By capturing input across suppliers, engineers, and marketers, our DES embeds AI into the heart of collaboration. This parallels the report’s insight that external partnerships and integrated systems double the success rate of adoption. Because our system integrates with existing workflows and PLM, it avoids the kind of massive change that stops many larger companies from even attempting AI adoption.
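
To make the human-in-the-loop pattern above concrete, here is a minimal illustrative sketch in Python. It is not CoLab’s implementation; every class, field, and standard reference below is hypothetical. It shows the two invariants described above: an AI finding must cite a standard before a human ever sees it, and no finding stands without an explicit human verdict, with every verdict retained as feedback.

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class Finding:
    """An AI-flagged design issue. It must carry the standard it was
    checked against, so reviewers can see the evidence."""
    description: str
    cited_standard: str          # hypothetical, e.g. "ASME Y14.5-2018, para 7.2"
    verdict: str | None = None   # "accepted" / "rejected"; set only by a human
    decided_at: datetime | None = None


class ReviewLog:
    """Retains every human verdict so the system can learn which findings
    were useful -- the "retain feedback" loop the MIT report calls for."""

    def __init__(self) -> None:
        self.history: list[Finding] = []

    def flag(self, description: str, cited_standard: str) -> Finding:
        # The AI must show its evidence before a human ever sees the issue.
        if not cited_standard:
            raise ValueError("AI findings must cite a specific standard")
        return Finding(description, cited_standard)

    def decide(self, finding: Finding, accepted: bool) -> None:
        # No AI action stands without an explicit human thumbs up/down.
        finding.verdict = "accepted" if accepted else "rejected"
        finding.decided_at = datetime.now(timezone.utc)
        self.history.append(finding)

    def acceptance_rate(self) -> float:
        # A simple feedback signal: how often humans agree with the AI.
        accepted = sum(1 for f in self.history if f.verdict == "accepted")
        return accepted / len(self.history) if self.history else 0.0
```

The details are beside the point; the invariants are what matter: the AI cannot act alone, and every decision feeds a retained history that both the tool and the team can learn from.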

These aren’t speculative findings for us. We have working examples of the report’s conclusions applied to engineering.

Fallout and the opportunity

The report has clearly rattled executives. As we’ve heard from clients: “So how do we become part of the 5%?”

The answer lies in the lessons highlighted by the report and validated by practice:

  • Build less, buy smart: Internal builds fail twice as often as vendor partnerships.
  • Decentralize adoption: Empower line managers and power users, not just central AI labs. AI needs to be woven into how your company already works.
  • Embed AI at the core: Focus on workflows that matter most, not just sales pilots or one-off experiments.
  • Look ahead: The future is not just about copilots or static tools, but interconnected systems that learn, remember, and coordinate autonomously across business functions.

Next steps for AI adoption at scale

For engineering leaders, the path is clear. AI is not failing; it is failing when misapplied. The MIT report is not the final word, but it is a wake-up call. The companies that stop chasing quick wins and start embedding AI where it learns, integrates, and compounds value at the core of the business will be the ones that lead next.

Start embedding AI where it learns, integrates, and compounds value over time. This is the lesson. The GenAI Divide identified by the MIT Nanda report is not going to last forever, but the organizations that find a way across it soonest will define the next era of business.

For more on the MIT Nanda study: Building an AI Strategy That Works

Jon Filson
Director of Content Marketing
Jon Filson is an industry analyst and writer for CoLab. Email him at jonfilson@colabsoftware.com.