Understanding the Implementation of Responsible Artificial Intelligence in Organizations: A Neo-Institutional Theory Perspective
David Horneber
This study conducts a literature review to understand why organizations struggle to effectively implement Responsible Artificial Intelligence (AI). Using a neo-institutional theory framework, the paper analyzes institutional pressures, common challenges, and the roles that AI practitioners play in either promoting or hindering the adoption of responsible AI practices.
Problem
Despite growing awareness of AI's ethical and social risks and the availability of responsible AI frameworks, many organizations fail to translate these principles into practice. This gap between stated policy and actual implementation means that the goals of making AI safe and ethical are often not met, creating significant risks for businesses and society while undermining trust.
Outcome
- A fundamental tension exists between the pressures to adopt Responsible AI (e.g., legal compliance, reputation) and inhibitors (e.g., market demand for functional AI, lack of accountability), leading to ineffective, symbolic implementation.
- Ineffectiveness often takes two forms: 'policy-practice decoupling' (policies are adopted for show but not implemented) and 'means-end decoupling' (practices are implemented but fail to achieve their intended ethical goals).
- AI practitioners play crucial roles as either 'institutional custodians' who resist change to preserve existing technical practices, or as 'institutional entrepreneurs' who champion the implementation of Responsible AI.
- The study concludes that a bottom-up approach by motivated practitioners is insufficient; effective implementation requires strong organizational support, clear structures, and proactive processes to bridge the gap between policy and successful outcomes.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge, where we translate complex research into actionable business intelligence. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating study titled "Understanding the Implementation of Responsible Artificial Intelligence in Organizations: A Neo-Institutional Theory Perspective."
Host: It explores why so many organizations seem to struggle with putting their responsible AI principles into actual practice, looking at the pressures, the challenges, and the key roles people play inside these companies.
Host: With me is our analyst, Alex Ian Sutherland, who has taken a deep dive into this study. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, we hear a lot about AI ethics and all these new responsible AI frameworks. But this study suggests there’s a massive gap between what companies *say* they'll do and what they *actually* do. What's the core problem here?
Expert: That's the central issue. The study finds that despite growing awareness of AI's risks, the principles often remain just that: principles on a webpage. This gap between policy and practice means the goals of making AI safe and ethical are not being met.
Expert: This creates huge risks, not just for society, but directly for the businesses themselves. It undermines customer trust and leaves them exposed to future legal and reputational damage.
Host: So how did the researchers approach such a complex organizational problem?
Expert: They conducted a comprehensive literature review, synthesizing the findings from dozens of real-world, empirical studies on the topic. Then, they analyzed this collective evidence through a specific lens called neo-institutional theory.
Host: That sounds a bit academic. Can you break that down for us?
Expert: Absolutely. In simple terms, it's a way of understanding how organizations respond to external pressures, from society and regulators, in order to appear legitimate. Sometimes, this means they adopt policies for show, even if their internal day-to-day work doesn't change.
Host: That makes sense. It’s about looking the part. So, using that lens, what were the most significant findings from the study?
Expert: There were three that really stood out. First, there's a fundamental tension at play. On one side, you have pressures pushing for responsible AI, like legal compliance and protecting the company's reputation. On the other, you have inhibitors, like market demand for AI that just *works*, regardless of ethics, and a lack of real accountability.
Host: And this tension leads to problems?
Expert: Exactly. It leads to something the study calls 'decoupling'. The most common form is 'policy-practice decoupling'. This is when a company adopts a great-sounding ethics policy, but the engineering teams on the ground never actually implement it.
Expert: The second, more subtle form is 'means-end decoupling'. This is when teams *do* implement a practice, like a bias check, but it's done in a superficial way that doesn't actually achieve the ethical goal. It's essentially just ticking a box.
Host: So there's a disconnect. What was the second key finding?
Expert: It’s about the people on the ground: the AI practitioners. The study found they fall into two distinct roles. They are either 'institutional custodians' or 'institutional entrepreneurs'.
Expert: 'Custodians' are those who resist change to protect existing practices. Think of a product manager who argues that ethical considerations slow down development and hurt performance. They maintain the status quo.
Expert: 'Entrepreneurs', on the other hand, are the champions. They are the ones who passionately advocate for responsible AI, often taking it on themselves without a formal mandate because they believe it's the right thing to do.
Host: Which leads us to the third point, which I imagine is that these champions can't do it alone?
Expert: Precisely. The study concludes that this bottom-up approach, relying on a few passionate individuals, is not enough. For responsible AI to be effective, it requires strong, top-down organizational support, clear structures, and proactive processes.
Host: This is the crucial part for our listeners. For a business leader, what are the practical takeaways here? Why does this matter?
Expert: First, leaders need to conduct an honest assessment. Are your responsible AI efforts real, or are they just symbolic? Creating a policy to look good, without giving your teams the time, resources, and authority to implement it, is setting them, and the company, up for failure.
Host: So it's about moving beyond lip service to avoid real business risk.
Expert: Exactly. Second, find and empower your 'institutional entrepreneurs'. The study shows these champions often face immense stress and burnout. So, formalize their roles. Give them authority, a budget, and a direct line to leadership. Don't let their goodwill be the only thing powering your ethics strategy.
Host: And the final takeaway?
Expert: Be proactive, not reactive. You can't bolt on ethics at the end. The study suggests building responsible AI structures that are both centralized and decentralized. A central team can provide resources and set standards, but you also need experts embedded *within* each development team to manage risks from the very beginning.
Host: That’s incredibly clear. So, to summarize: there's a major gap between AI policy and practice, driven by competing business pressures. This results in actions that are often just for show.
Host: And while passionate employees can drive change from the bottom up, they will ultimately fail without sincere, structural support from leadership.
Host: Alex, thank you so much for breaking down this complex but incredibly important study for us.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning in to A.I.S. Insights, powered by Living Knowledge.
Artificial Intelligence, Responsible AI, AI Ethics, Organizations, Neo-Institutional Theory