Successfully Mitigating AI Management Risks to Scale AI Globally
Thomas Hutzschenreuter, Tim Lämmermann, Alexander Sake, Helmuth Ludwig
This study presents an in-depth case study of the industrial AI pioneer Siemens AG to understand how companies can effectively scale artificial intelligence systems. It identifies five critical technology management risks associated with both generative and predictive AI and provides practical recommendations for mitigating them to create company-wide business impact.
Problem
Many companies struggle to effectively scale modern AI systems, with over 70% of implementation projects failing to create a measurable business impact. These failures stem from machine learning's unique characteristics, which amplify existing technology management challenges and introduce entirely new ones that firms are often unprepared to handle.
Outcome
- Missing or falsely evaluated potential AI use case opportunities.
- Algorithmic training and data quality issues.
- Task-specific system complexities.
- Mismanagement of system stakeholders.
- Threats from provider and system dependencies.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I'm your host, Anna Ivy Summers. Today, we're diving into one of the biggest challenges facing businesses: how to move artificial intelligence from a small-scale experiment to a global, value-creating engine.
Host: We're exploring a new study titled "Successfully Mitigating AI Management Risks to Scale AI Globally." It's an in-depth look at the industrial pioneer Siemens AG to understand how companies can effectively scale AI systems, identifying the critical risks and providing practical recommendations. To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: It's great to be here, Anna.
Host: Alex, the study opens with a pretty stark statistic: over 70% of AI projects fail to create a measurable business impact. Why is it so difficult for companies to get this right?
Expert: It's a huge problem. The study points out that modern AI, which is based on machine learning, is fundamentally different from traditional software. It's not programmed with rigid rules; it learns from data in a probabilistic way. This amplifies old technology management challenges and creates entirely new ones that most firms are simply unprepared to handle.
Host: So to understand how to succeed, the researchers took a closer look at a company that is succeeding. What was their approach?
Expert: They conducted an in-depth case study of Siemens. Siemens is an ideal subject because they're a global industrial leader that has been working with AI for over 50 years—from early expert systems in the 70s to the predictive and generative AI we see today. This long journey provides a rich, real-world playbook of what works and what doesn't when you're trying to scale.
Host: By studying a success story, we can learn what to do right. So, what were the main risks the study uncovered?
Expert: The researchers identified five critical risk categories. The first is missing or falsely evaluating potential AI opportunities. The field moves so fast that it’s hard to even know what's possible, let alone which ideas will actually create value.
Host: Okay, so just finding the right project is the first hurdle. What's next?
Expert: The second risk is all about data. Specifically, algorithmic training and data quality issues. Every business leader has heard the phrase "garbage in, garbage out," and for AI, this is make-or-break. The study emphasizes that high-quality data is a strategic resource, but it's often siloed away in different departments, incomplete, or biased.
Host: That makes sense. What's the third risk?
Expert: Task-specific system complexities. AI doesn't operate in a vacuum. It has to be integrated into existing, often messy, technological landscapes—hardware, cloud servers, enterprise software. Even a small change in the real world, like new lighting in a factory, can degrade an AI's performance if it isn't retrained.
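The factory-lighting example is what practitioners usually call data or model drift. As a purely illustrative aside, and not something taken from the study, here is a minimal Python sketch of how such drift can be flagged so retraining is triggered before performance quietly degrades; the feature, sample data, and threshold are all assumptions.

```python
# Illustrative sketch only: flag when live data has drifted away from the data
# a model was trained on, so a retraining job can be scheduled.
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a reference sample and a live sample."""
    edges = np.histogram_bin_edges(expected, bins=bins)
    e_frac = np.histogram(expected, bins=edges)[0] / len(expected) + 1e-6
    a_frac = np.histogram(actual, bins=edges)[0] / len(actual) + 1e-6
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
train_brightness = rng.normal(100, 10, 5_000)  # image brightness at install time
live_brightness = rng.normal(120, 10, 5_000)   # same line after new lighting

drift = psi(train_brightness, live_brightness)
if drift > 0.2:  # common rule-of-thumb threshold, used here as an assumption
    print(f"PSI={drift:.2f}: significant drift, schedule retraining")
else:
    print(f"PSI={drift:.2f}: distribution looks stable")
```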
Host: So it’s about the tech integration. What about the human side?
Expert: That's exactly the fourth risk: mismanagement of system stakeholders. This is about people. To succeed, you need buy-in from everyone—engineers, sales teams, customers, and even regulators. If people don't trust the AI or see it as a threatening "black box," the project is doomed to fail, no matter how good the technology is.
Host: And the final risk?
Expert: The fifth risk is threats from provider and system dependencies. This is essentially getting locked into a single external vendor for a critical AI model or service. It limits your flexibility, can be incredibly costly, and puts you at the mercy of another company's roadmap.
Host: Those are five very real business risks. So, Alex, for our listeners—the business leaders and managers—what are the key takeaways? How can they actually mitigate these risks?
Expert: The study provides some excellent, practical recommendations. To avoid missing opportunities, they suggest a "hub-and-spoke" model. Have a central AI team, but also empower decentralized teams in different business units to scout for use cases that solve their specific problems.
Host: So, democratize the innovation process. What about the data problem?
Expert: You have to treat data as a strategic asset. The key is to implement company-wide data-sharing principles to break down those silos. Siemens is creating a centralized data warehouse so their experts can find and use the data they need. And critically, they focus on owning and protecting their most valuable data sources.
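To make the data-sharing idea concrete, here is a minimal, purely illustrative sketch of a shared data catalog in Python. The class names, fields, and example dataset are assumptions for this example, not a description of Siemens' actual warehouse.

```python
# Illustrative sketch only: a tiny in-memory "data catalog" embodying company-wide
# data-sharing principles -- every dataset has an accountable owner, a quality flag,
# and is discoverable by any team.
from dataclasses import dataclass, field

@dataclass
class DatasetEntry:
    name: str
    owner: str             # accountable business unit
    location: str          # e.g. a table or object-store path
    quality_checked: bool  # has the entry passed basic quality gates?
    tags: list[str] = field(default_factory=list)

class DataCatalog:
    def __init__(self) -> None:
        self._entries: dict[str, DatasetEntry] = {}

    def register(self, entry: DatasetEntry) -> None:
        # Sharing principle: nothing enters the catalog without a named owner.
        if not entry.owner:
            raise ValueError("every dataset needs an accountable owner")
        self._entries[entry.name] = entry

    def search(self, tag: str) -> list[DatasetEntry]:
        return [e for e in self._entries.values() if tag in e.tags]

catalog = DataCatalog()
catalog.register(DatasetEntry(
    name="turbine_vibration_2024",
    owner="Power Generation BU",
    location="s3://warehouse/turbine/vibration/2024/",  # hypothetical path
    quality_checked=True,
    tags=["predictive-maintenance", "sensor"],
))
print([e.name for e in catalog.search("sensor")])
```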
Host: And for managing the complexity of these systems?
Expert: The recommendation is to build for modularity. Siemens uses what they call a "model zoo"—a library of reusable AI components. This way, you can update or swap out parts of a system without having to rebuild it from scratch. It makes the whole architecture more agile and future-proof.
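The "model zoo" concept can be illustrated with a short sketch: reusable model components registered behind a common interface, so a pipeline can swap one for another without being rebuilt. The registry, component names, and toy prediction logic below are assumptions made for illustration, not Siemens' architecture.

```python
# Illustrative sketch of the "model zoo" idea: interchangeable model components
# registered by name behind a shared interface.
from typing import Callable, Protocol

class VisionModel(Protocol):
    def predict(self, image: list[list[float]]) -> str: ...

MODEL_ZOO: dict[str, Callable[[], VisionModel]] = {}

def register(name: str):
    def wrap(factory: Callable[[], VisionModel]):
        MODEL_ZOO[name] = factory
        return factory
    return wrap

@register("defect-detector-v1")
class DefectDetectorV1:
    def predict(self, image):
        return "ok" if sum(map(sum, image)) < 10 else "defect"

@register("defect-detector-v2")  # drop-in replacement, e.g. a retrained model
class DefectDetectorV2:
    def predict(self, image):
        return "defect" if max(map(max, image)) > 3 else "ok"

def inspection_pipeline(model_name: str, image):
    model = MODEL_ZOO[model_name]()  # swap models by changing one string
    return model.predict(image)

print(inspection_pipeline("defect-detector-v1", [[1.0, 2.0], [0.5, 0.5]]))
print(inspection_pipeline("defect-detector-v2", [[1.0, 2.0], [0.5, 0.5]]))
```

The design choice here is that the pipeline depends only on the registry key and the shared interface, which is what makes individual components replaceable without touching the rest of the system.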
Host: I like that idea of a 'model zoo'. Let's touch on the last two. How do you manage stakeholders and avoid being locked into a vendor?
Expert: For stakeholders, the advice is to integrate them into the development process step-by-step. Educate them through workshops and hands-on "playground" sessions to build trust. Siemens even cultivates internal "AI ambassadors" who champion the technology among their peers.
Expert: And to avoid dependency, the strategy is simple but powerful: dual-sourcing. For any critical AI project, partner with at least two comparable providers. This maintains competition, gives you leverage, and ensures you're never completely reliant on a single external company.
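Dual-sourcing also has a concrete engineering counterpart: routing requests through an abstraction that can call either of two comparable providers. The sketch below is hypothetical; the provider names, simulated outage, and client interface are placeholders, not real vendor APIs.

```python
# Illustrative sketch of dual-sourcing: every critical AI request goes through an
# abstraction that can fall back from one vendor to a comparable second one.
from dataclasses import dataclass

class ProviderError(Exception):
    pass

@dataclass
class Provider:
    name: str

    def generate(self, prompt: str) -> str:
        # Placeholder: a real client would call the vendor's API here.
        if self.name == "provider_b":
            raise ProviderError("provider_b unavailable")  # simulate an outage
        return f"[{self.name}] answer to: {prompt}"

class DualSourcedClient:
    def __init__(self, primary: Provider, secondary: Provider) -> None:
        self.primary, self.secondary = primary, secondary

    def generate(self, prompt: str) -> str:
        for provider in (self.primary, self.secondary):
            try:
                return provider.generate(prompt)
            except ProviderError:
                continue  # fall back to the other vendor
        raise ProviderError("both providers failed")

client = DualSourcedClient(Provider("provider_b"), Provider("provider_a"))
print(client.generate("Summarize today's maintenance reports."))
```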
Host: Fantastic advice, Alex. So to summarize for our listeners: successfully scaling AI means systematically scouting for the right opportunities, treating your data as a core strategic asset, building for modularity and change, bringing your people along on the journey, and actively avoiding vendor lock-in.
Host: Alex Ian Sutherland, thank you so much for breaking down this crucial research for us.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning in to A.I.S. Insights. Join us next time as we explore the future of work in the age of intelligent automation.
AI management, risk mitigation, scaling AI, generative AI, predictive AI, technology management, case study