Translating AI Ethics Principles into Practice to Support Robotic Process Automation Implementation
Dörte Schulte-Derne, Ulrich Gnewuch
This study investigates how abstract AI ethics principles can be translated into concrete actions during technology implementation. Through a longitudinal case study at a German energy service provider, the authors observed the large-scale rollout of Robotic Process Automation (RPA) over 30 months. The research provides actionable recommendations for leaders to navigate the ethical challenges and employee concerns that arise from AI-driven automation.
Problem
Organizations implementing AI to automate processes often face uncertainty, fear, and resistance from employees. While high-level AI ethics principles exist to provide guidance, business leaders struggle to apply these abstract concepts in practice. This creates a significant gap between knowing *what* ethical goals to aim for and knowing *how* to achieve them during a real-world technology deployment.
Outcome
- Define clear roles for implementing and supervising AI systems, and ensure senior leaders accept overall responsibility for any negative consequences.
- Strive for a fair distribution of AI's benefits and costs among all employees, addressing tensions in a diverse workforce.
- Increase transparency by making the AI's work visible (e.g., allowing employees to observe a bot at a dedicated workstation) to turn fear into curiosity.
- Enable open communication among trusted peers, creating a 'safe space' for employees to discuss concerns without feeling judged.
- Help employees cope with fears by involving them in the implementation process, rather than removing all of their routine tasks at once.
- Involve employee representation bodies and data protection officers from the beginning of a new AI initiative to proactively address privacy and labor concerns.
Host: Welcome to A.I.S. Insights, the podcast where we connect big ideas with business practice. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating study from the MIS Quarterly Executive titled, "Translating AI Ethics Principles into Practice to Support Robotic Process Automation Implementation".
Host: It explores how abstract ethical ideas about AI can be turned into concrete actions when a company rolls out new technology. It follows a German energy provider over 30 months as they implemented large-scale automation, providing a real-world roadmap for leaders.
Host: With me is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Many business leaders listening have heard about AI ethics, but the study suggests there's a major disconnect. What's the core problem they identified?
Expert: The problem is a classic gap between knowing *what* to do and knowing *how* to do it. Companies have access to high-level principles like fairness, transparency, and responsibility. But when it's time to automate a department's workflow, managers are often left wondering, "What does 'fairness' actually look like on a Tuesday morning for my team?"
Expert: This uncertainty creates fear and resistance among employees. They worry about their jobs, their routines get disrupted, and they often see AI as a threat. The study looked at one company, an energy service provider referred to as ESP, that was facing this exact dilemma.
Host: So how did the researchers get inside this problem to understand it?
Expert: They used a longitudinal case study approach. For two and a half years, they were deeply embedded in the company. They conducted interviews, surveys, and on-site observations with everyone involved—from the back-office employees whose tasks were being automated, to the project managers, and even senior leaders and the employee works council.
Host: That deep-dive approach must have surfaced some powerful findings. What were the key takeaways?
Expert: Absolutely. The first finding was about responsibility. It can't be an abstract concept. At ESP, when the IT helpdesk was asked to create a user account for a bot, they initially refused, asking who would be personally responsible if it made a mistake.
Host: That's a very practical roadblock. How did the company solve it?
Expert: They had to define clear roles, creating a "bot supervisor" who was accountable for the bot's daily operations. But more importantly, they established that senior leadership, not just the tech team, had to accept ultimate responsibility for any negative outcomes.
Host: That makes sense. The study also mentions transparency. How do you make something like a software bot, which is essentially invisible, transparent to a nervous workforce?
Expert: This is one of my favorite findings. ESP set up a dedicated workstation in the middle of the office where anyone could walk by and watch the bot perform its tasks on screen. To prevent people from accidentally turning it off, they put a giant teddy bear in the chair, which they named "Robbie".
Host: A teddy bear?
Expert: Exactly. It was a simple, humanizing touch. It made the technology feel less like a mysterious, threatening force and more like a tool. It literally turned employee fear into curiosity.
Host: So it's about demystifying the technology. What about helping employees cope with the changes to their actual jobs?
Expert: The key was gradual involvement and open communication. Instead of top-down corporate announcements, they found that peer-to-peer conversations were far more effective. They created safe spaces where employees could talk to trusted colleagues who had already worked with the bots, ask honest questions, and voice their concerns without being judged.
Host: It sounds like the human element was central to this technology rollout. Alex, let’s get to the bottom line. For the business leaders listening, why does all of this matter? What are the key takeaways for them?
Expert: I think there are three critical takeaways. First, AI ethics is not a theoretical exercise; it's a core part of project risk management. Ignoring employee concerns doesn't make them go away—it just leads to resistance and potential project failure.
Expert: Second, make the invisible visible. Whether it's a teddy bear on a chair or a live dashboard, find creative ways to show employees what the AI is actually doing. A little transparency goes a long way in building trust.
Expert: And finally, involve your stakeholders from day one. That means bringing your employee representatives, your data protection officers, and your legal teams into the conversation early. In the study, the data protection officer stopped a "task mining" initiative due to privacy concerns, saving the company time and resources on a project that was a non-starter.
Host: So, it's about being proactive with responsibility, transparency, and communication.
Expert: Precisely. It’s about treating the implementation not just as a technical challenge, but as a human one.
Host: A fantastic summary of a very practical study. The message is clear: to succeed with AI automation, you have to translate ethical principles into thoughtful, tangible actions that build trust with your people.
Host: Alex Ian Sutherland, thank you for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thanks to all of you for listening to A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more valuable lessons from the intersection of business and technology.
AI ethics, Robotic Process Automation (RPA), change management, technology implementation, case study, employee resistance, ethical guidelines