
AI in Engineering

McKinsey State of AI 2025: What It Means for Engineering Leaders

Only 5.5% of companies get real ROI from AI. McKinsey's 2025 data shows why and what it means for engineering leaders in product development and manufacturing.
MJ Smith
CMO
Last updated: January 7, 2026 · 4 minute read

Most engineering organizations are already using AI, but almost none are getting meaningful ROI from it. Why?

McKinsey's 2025 State of AI report, drawing on responses from 1,993 companies, puts a number on the gap, with only 5.5% of organizations seeing real financial returns from their AI investments. For engineering leaders in product development and manufacturing, the data is clarifying. The teams pulling ahead aren't just using better tools. They are also running fundamentally different workflows.

Cohort Analysis: AI High Performers

McKinsey’s data reveals an emerging AI performance divide.

By now, “We’re using AI” has become a near-universal claim. Nearly 80% of organizations surveyed by McKinsey report regular use of generative AI in at least one function. But of the 1,993 survey participants, only 109 reported both “significant value” from AI and more than 5% of their organization’s EBIT attributable to its use. That’s just 5.5% of respondents, a figure remarkably consistent with MIT’s finding that only 5% of AI pilots generate measurable P&L impact.

What can we learn from the top 5%? They have operating discipline, following a clear set of best practices that separate them from the pack. Let’s explore the specifics.


High Performers Are All-In on AI, Investing 20% or More of Their Entire Digital Budget

The cohort of AI high performers is 3.6x more likely than others to say they intend to use AI to drive transformative change over the next three years. This data point demonstrates that performance begins with intent. Companies that commit to transformative change do things differently.

For one, senior leadership is more involved. Indeed, the data shows that AI high performers are 3x more likely to report strong senior leadership ownership and engagement. According to McKinsey, this extends both to setting the strategy and role modeling the use of AI. It’s up to leadership to set the foundation by answering fundamental questions like: What’s the AI vision? Should we set up a central governance org for AI? And what’s our organizational policy on human-in-the-loop quality control for AI outputs?

But it’s not just leadership involvement that shifts when organizations target transformative change. There are budget implications too. One of the most striking data points in the report: more than one-third of AI high performers spend more than 20% of their digital budgets on AI, making them 5x more likely than other respondents to place a big bet on AI.

Because CoLab works with dozens of executive teams at large manufacturing organizations, we’re sometimes asked, “How much should I budget for an AI initiative?” This data point offers a clear benchmark:

Companies looking for transformative change, P&L impact, and top-5% performance should plan to invest 20% or more of their total digital budget in AI. For a company with a $50M digital budget, that means at least $10M directed at AI. What would that look like for your company?


The Agentic Gap: High Performers Embrace Agentic Workflows, Not Just ChatGPT

The McKinsey data tells us that while 79% of organizations say they’re “using generative AI,” fewer than 10% report that they’re scaling AI agents in any function. In product development specifically, 73% of respondents are not using AI agents at all. 

This finding is especially interesting when examined alongside a second finding from the report: high performers are nearly 3x more likely to have fundamentally redesigned workflows as part of their AI efforts.

In fact, betting on agentic AI and being able to transform workflows go hand in hand. There’s a fundamental difference between widespread use of ChatGPT and a structured rollout of agentic workflows. While ChatGPT can speed up individual tasks, an employee must drive each step by hand, entering a prompt for every response: large language models (LLMs) do not complete multistep workflows on their own. By contrast, AI agents operate within production environments, running sophisticated prompts in the background to complete more complicated tasks, and they benefit from more persistent memory and better access to organizational data and information.
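The chat-versus-agent distinction above can be sketched in a few lines of Python. This is an illustrative toy, not CoLab’s or any vendor’s implementation: `call_model` is a deterministic stand-in for a real LLM API, and the tool and standard names are invented for the example.

```python
# Toy sketch of chat-style vs. agentic LLM use.
# call_model, TOOLS, and STD-104 are all hypothetical placeholders.

def call_model(prompt: str) -> str:
    """Stand-in for an LLM call; a real agent would hit a model API here."""
    if "tolerance" not in prompt:
        return "TOOL:lookup_standard"  # model asks for a tool
    return "ANSWER:Wall thickness is below the 2.0 mm tolerance in STD-104."

TOOLS = {
    # Agents can act on organizational data; a chat window cannot.
    "lookup_standard": lambda: "STD-104 requires a minimum wall thickness tolerance of 2.0 mm.",
}

def chat_once(prompt: str) -> str:
    """Chat-style use: one prompt in, one response out. The human drives every step."""
    return call_model(prompt)

def run_agent(task: str, max_steps: int = 5) -> str:
    """Agentic use: the loop itself chains model and tool calls,
    carrying context (memory) forward until the task is done."""
    memory = [task]
    for _ in range(max_steps):
        reply = call_model(" ".join(memory))
        if reply.startswith("TOOL:"):
            tool_name = reply.split(":", 1)[1]
            memory.append(TOOLS[tool_name]())  # act, observe, continue
        else:
            return reply.removeprefix("ANSWER:")
    return "Gave up after max_steps."

print(run_agent("Check this bracket design against our standards."))
```

The point of the sketch: `chat_once` stalls the moment the model needs outside information, while `run_agent` resolves the tool call and finishes the multistep task without a human in the loop.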

In summary, the companies getting the most out of AI are reimagining workflows. But you can’t redesign workflows if humans must orchestrate each individual step, and that’s what makes agentic AI so powerful. It remains underadopted, likely because implementing AI agents takes far more effort and thought than giving employees access to ChatGPT. The high performers are tackling the people, process, and technology hurdles head on, and they’re reaping the benefits downstream.


How Are Companies Using Agentic AI in Advanced Manufacturing?

One of the most interesting sections of the report looks at the segment of companies that are “scaling” or have “fully scaled” AI agents within one or more functions. Within this cohort, McKinsey examines whether different industries are adopting agentic AI for different use cases.

Let’s look at the advanced manufacturing cohort, a group of 118 respondents that includes advanced electronics, aerospace, automotive and assembly, and semiconductors. These are the most popular use cases for advanced manufacturing companies that are scaling or have scaled AI agents:

  • #1: Software engineering (10%)
  • #2: IT (9%)
  • #3: Product development (6%)
  • #4: Knowledge management (5%)
  • #5: Sales and marketing (5%)

It’s no surprise to see software engineering at the top of the list, even though advanced manufacturing companies employ fewer software engineers than software companies do.

At CoLab, we’re most excited about AI agents for product development and knowledge management. Advanced manufacturing companies have vast volumes of technical know-how accumulated over decades, but much of it lives in employees’ heads rather than being documented and organized. For example, CoLab’s own 2025 survey found that while most engineering leaders say applying organizational design standards and guidelines is critical, only 45% of design standards are documented, up to date, and consistently referenced in design reviews.

This is the gap CoLab was built to close. When engineers run design reviews in CoLab, expert feedback is automatically captured alongside the 3D geometry it references, not lost in meeting notes or email threads. That structured knowledge becomes the foundation for AI agents like AutoReview, which checks new designs against your standards and guidelines automatically, and AI Lessons Learned, which surfaces relevant insights from past programs at the moment they're most useful. The knowledge your senior engineers have accumulated over decades stops living in their heads and starts working for the whole team.

As the high-performer cohort McKinsey studied reveals, it takes the right organizational approach to generate real returns. CoLab is already working with dozens of engineering leadership teams on their agentic AI strategy for global engineering.

The teams getting ROI from AI aren't just using better tools; they are running better workflows. See how AutoReview can help your team catch critical design issues earlier and scale their collective knowledge.


About the author

MJ Smith

A former product manager for industrial equipment, MJ now leads marketing at CoLab.