Building an Artificial Intelligence Explanation Capability
Ida Someh, Barbara H. Wixom, Cynthia M. Beath, Angela Zutavern
This study introduces the concept of an "AI Explanation Capability" (AIX) that companies must develop to successfully implement artificial intelligence. Using case studies from the Australian Taxation Office and General Electric, the paper outlines a framework with four key dimensions (decision tracing, bias remediation, boundary setting, and value formulation) to help organizations address the inherent challenges of AI.
Problem
Businesses are increasingly adopting AI but struggle with its distinctive challenges, particularly the "black-box" nature of complex models. This opacity makes it difficult to trust AI, manage risks like algorithmic bias, prevent unintended negative consequences, and prove the technology's business value, ultimately hindering widespread and successful deployment.
Outcome
- AI projects present four unique challenges: Model Opacity (the inability to understand a model's inner workings), Model Drift (degrading performance over time), Mindless Actions (acting without context), and the Unproven Nature of AI (difficulty in demonstrating value).
- To overcome these challenges, organizations must build a new organizational competency called an AI Explanation Capability (AIX).
- The AIX capability comprises four dimensions: Decision Tracing (making models understandable), Bias Remediation (identifying and fixing unfairness), Boundary Setting (defining safe operating limits for AI), and Value Formulation (articulating and measuring the business value of AI) (see the sketch after this list).
- Building this capability requires a company-wide effort, involving domain experts and business leaders alongside data scientists to ensure AI is deployed safely, ethically, and effectively.
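To make the decision-tracing dimension concrete, the sketch below shows one common technique: fitting a small, interpretable surrogate model to mimic a black-box model's outputs and checking how often the two agree. The data, models, and feature names are hypothetical and assume scikit-learn; the study describes the capability at an organizational level and does not prescribe a specific tool.

```python
# Minimal sketch: decision tracing with an interpretable surrogate model.
# All data, models, and feature names here are illustrative assumptions.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

# Synthetic stand-in for a real decision problem (e.g., case triage).
X, y = make_classification(n_samples=2000, n_features=8, random_state=0)

# The "black box": a complex model whose internals are hard to inspect.
black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
bb_pred = black_box.predict(X)

# The surrogate: a shallow tree trained to mimic the black box's outputs,
# giving reviewers a traceable, rule-like account of its behavior.
surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_pred)

agreement = (surrogate.predict(X) == bb_pred).mean()
print(f"Surrogate agrees with the black box on {agreement:.1%} of cases")
print(export_text(surrogate, feature_names=[f"feature_{i}" for i in range(8)]))
```

A high agreement rate gives managers and regulators a rule-like account they can trace from inputs to outputs, which is the spirit of the surrogate-validation practice the transcript describes at the Australian Taxation Office.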
Host: Welcome to A.I.S. Insights, powered by Living Knowledge, where we translate complex research into actionable business strategy. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a critical question for any company implementing artificial intelligence. Our guide is a fascinating study from MIS Quarterly Executive titled "Building an Artificial Intelligence Explanation Capability."
Host: It introduces the idea that to succeed with AI, companies need a new core competency: the ability to explain how and why their AI makes the decisions it does. Here to break it down for us is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Businesses are pouring billions into AI, but many projects never see the light of day. What's the core problem this study identifies?
Expert: The core problem is trust. Business leaders are struggling with the "black box" nature of modern AI. When you have an algorithm making crucial decisions about loans, hiring, or tax compliance, and you can't explain its logic, you have a massive risk management problem.
Expert: The study points to real-world examples, like systems showing bias in parole decisions or incorrectly calculating government benefits. This opacity makes it incredibly difficult to manage risks, prevent negative consequences, and frankly, prove to executives that the AI is even creating business value.
Host: So the black box is holding back real-world adoption. How did the researchers approach this problem?
Expert: Instead of just staying in the lab, they went into the field. The study is built on deep case studies of two major organizations: the Australian Taxation Office, or ATO, and General Electric. They examined how these companies were actually deploying AI and overcoming these exact challenges.
Host: And what did they find? What were the key takeaways from seeing AI in action at that scale?
Expert: They found that AI presents four distinct challenges. First is 'Model Opacity,' which is that black box problem we just discussed. Second is 'Model Drift,' the tendency for an AI's performance to get worse over time as the real world changes.
Expert: Third is 'Mindless Actions': an AI will follow its programming, even if the context changes and its actions no longer make sense. And finally, the 'Unproven Nature of AI,' which is the difficulty in clearly connecting an AI project to bottom-line results.
Host: That's a powerful list of hurdles. So how do successful organizations get over them?
Expert: By deliberately building what the study calls an "AI Explanation Capability," or AIX. It's not a piece of software; it's an organizational skill. And it has four key dimensions.
Host: Okay, let's walk through them. What's the first one?
Expert: The first is 'Decision Tracing.' This is the ability to connect the dots from the data an AI receives to the output it produces. It's about making the model understandable, not just to data scientists, but to business managers and regulators.
Host: The second?
Expert: 'Bias Remediation.' This is about actively hunting for and fixing unfairness in your models. It involves careful data selection, systematic auditing, and ensuring the AI is representative of the populations it serves.
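As a concrete companion to the auditing step just described, here is a minimal sketch of one widely used check: comparing positive-decision rates across groups, sometimes called a demographic parity audit. The data, group labels, and tolerance are synthetic assumptions, not figures from the study.

```python
# Minimal sketch: a demographic parity audit on synthetic decisions.
# Group labels, scores, and the 5% tolerance are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
group = rng.choice(["A", "B"], size=5000)               # protected attribute
score = rng.uniform(size=5000) + (group == "A") * 0.05  # model scores, slightly skewed
approved = score > 0.5                                  # the model's decision

rates = {g: approved[group == g].mean() for g in ("A", "B")}
gap = abs(rates["A"] - rates["B"])

print(f"Approval rates by group: {rates}")
print(f"Demographic parity gap: {gap:.3f}")
if gap > 0.05:  # tolerance chosen for illustration only
    print("Gap exceeds tolerance: flag the model for remediation "
          "(data re-selection, reweighting, retraining).")
```

The remediation itself, such as re-selecting training data or retraining, would follow whenever the gap exceeds whatever tolerance the organization has agreed on.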
Host: That sounds critical for any customer-facing AI. What about the third dimension?
Expert: 'Boundary Setting.' This means defining the safe operating limits for the AI. It's about knowing when a human needs to step in. The AI isn't the final judge; it's a tool to support human experts, and you have to build the workflow around that principle.
Host: And the final dimension of this capability?
Expert: 'Value Formulation.' This is arguably the most important for business leaders. It's the ability to articulate, measure, and prove the business value of the AI. It's not enough for it to be clever; it has to be valuable.
Host: This is the core of the episode, Alex. Why does building this 'AIX' capability matter so much for businesses listening right now?
Expert: Because it reframes the challenge. Success with AI isn't just a technical problem; it's an organizational one. The study shows that technology is only half the battle.
Expert: Look at the Australian Taxation Office. They had to explain their AI to regulators. So, they used a simple, easy-to-understand model to validate the decisions of a more complex, "black box" neural network. This built trust because they could prove the advanced AI was behaving rationally.
Host: So they built a bridge from the old way to the new way. What about General Electric?
Expert: At GE, they were using AI to check contractor safety documents, a very high-stakes task. They built a system where their human safety experts could easily see the evidence the AI used for its assessment and could override it. They created a true human-in-the-loop system, effectively setting those boundaries we talked about.
Host: So the key takeaway for our listeners is that deploying AI requires building a support structure around it?
Expert: Exactly. It's about building a cross-functional team. You need your data scientists, but you also need your domain experts, your business leaders, and your legal team all working together to trace decisions, remediate bias, set boundaries, and prove value. AI cannot succeed in a silo.
Host: A powerful conclusion. Let's summarize. To unlock the value of AI and overcome its inherent risks, businesses can't just buy technology. They must build a new organizational muscle: an AI Explanation Capability.
Host: This means focusing on Decision Tracing, Bias Remediation, Boundary Setting, and Value Formulation. It's a holistic approach that puts people and processes at the center of AI deployment.
Host: Alex, thank you for making this complex topic so clear and actionable.
Expert: My pleasure, Anna.
Host: And thanks to all of you for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to bridge the gap between academia and business.
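The GE workflow discussed in the episode, where the model's evidence is surfaced and a human expert can confirm or override the assessment, maps onto a simple routing pattern. The sketch below is a hypothetical Python illustration of that pattern, with an invented confidence threshold and data structure; it is not the study's or GE's actual implementation.

```python
# Minimal sketch: boundary setting via a confidence threshold and human override.
# The threshold, data structure, and routing rule are illustrative assumptions.
from dataclasses import dataclass

CONFIDENCE_FLOOR = 0.85  # below this, the AI's assessment is not trusted on its own


@dataclass
class Assessment:
    document_id: str
    label: str           # e.g., "compliant" / "non-compliant"
    confidence: float
    evidence: list[str]  # passages the model relied on, shown to the reviewer


def route(assessment: Assessment) -> str:
    """Decide whether the AI's output can stand or must go to a human expert."""
    if assessment.confidence >= CONFIDENCE_FLOOR:
        return f"{assessment.document_id}: auto-accepted as {assessment.label}"
    # Outside the boundary: surface the evidence and hand off to a person,
    # who may confirm or override the model's label.
    return (f"{assessment.document_id}: sent to human review "
            f"(confidence {assessment.confidence:.2f}, evidence: {assessment.evidence})")


if __name__ == "__main__":
    print(route(Assessment("doc-001", "compliant", 0.97, ["section 4.2"])))
    print(route(Assessment("doc-002", "non-compliant", 0.61, ["missing signature page"])))
```

Everything above the threshold flows straight through, while everything below it remains a recommendation until a person signs off, which is one day-to-day form the boundary-setting dimension can take.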
AI explanation, explainable AI, AIX capability, model opacity, model drift, AI governance, bias remediation