Building an Artificial Intelligence Explanation Capability

Ida Someh, Barbara H. Wixom, Cynthia M. Beath, Angela Zutavern
This study introduces the concept of an "AI Explanation Capability" (AIX) that companies must develop to successfully implement artificial intelligence. Using case studies from the Australian Taxation Office and General Electric, the paper outlines a framework with four key dimensions (decision tracing, bias remediation, boundary setting, and value formulation) to help organizations address the inherent challenges of AI.

Problem

Businesses are increasingly adopting AI but struggle with its distinctive challenges, particularly the "black-box" nature of complex models. This opacity makes it difficult to trust AI, manage risks such as algorithmic bias, prevent unintended negative consequences, and demonstrate the technology's business value, ultimately hindering widespread and successful deployment.

Outcome

- AI projects present four unique challenges: Model Opacity (the inability to understand a model's inner workings), Model Drift (degrading performance over time), Mindless Actions (acting without awareness of context), and the Unproven Nature of AI (difficulty in demonstrating value).
- To overcome these challenges, organizations must build a new organizational competency called an AI Explanation Capability (AIX).
- The AIX capability comprises four dimensions: Decision Tracing (making models understandable), Bias Remediation (identifying and fixing unfairness), Boundary Setting (defining safe operating limits for AI), and Value Formulation (articulating and measuring the business value of AI).
- Building this capability requires a company-wide effort, involving domain experts and business leaders alongside data scientists to ensure AI is deployed safely, ethically, and effectively.

Keywords: AI explanation, explainable AI, AIX capability, model opacity, model drift, AI governance, bias remediation