Understanding the Ethics of Generative AI: Established and New Ethical Principles

Joakim Laine, Matti Minkkinen, Matti Mäntymäki
This study conducts a comprehensive review of academic literature to synthesize the ethical principles of generative artificial intelligence (GenAI) and large language models (LLMs). It explores how established AI ethics are presented in the context of GenAI and identifies what new ethical principles have surfaced due to the unique capabilities of this technology.

Problem: The rapid development and widespread adoption of powerful GenAI tools like ChatGPT have introduced new ethical challenges that are not fully covered by existing AI ethics frameworks. This creates a critical gap, as the specific ethical principles required for the responsible development and deployment of GenAI systems remain relatively unclear.

Outcome:
- Established AI ethics principles (e.g., fairness, privacy, responsibility) are still relevant, but their importance and interpretation are shifting in the context of GenAI.
- Six new ethical principles specific to GenAI are identified: respect for intellectual property, truthfulness, robustness, recognition of malicious uses, sociocultural responsibility, and human-centric design.
- Principles such as non-maleficence, privacy, and environmental sustainability have gained heightened importance due to the general-purpose, large-scale nature of GenAI systems.
- The paper proposes 'meta-principles' for managing ethical complexities, including ranking principles, mapping contradictions between them, and implementing continuous monitoring.
Keywords: Generative AI, AI Ethics, Large Language Models, AI Governance, Ethical Principles, AI Auditing
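
As a rough illustration of the meta-principles proposed above (ranking principles, mapping contradictions, continuous monitoring), the following Python sketch shows one way such a ranking and tension map could be represented. The principle names, ranks, and tension pairs are illustrative assumptions, not taken from the paper.

```python
# Hypothetical sketch of the meta-principles idea: rank principles, record
# known tensions between them, and re-check them regularly. All names,
# ranks, and tension pairs below are illustrative only.

PRINCIPLE_RANK = {          # lower number = higher priority (assumed ordering)
    "non-maleficence": 1,
    "truthfulness": 2,
    "privacy": 3,
    "respect for intellectual property": 4,
    "environmental sustainability": 5,
}

# Pairs of principles that are assumed to pull in opposite directions.
TENSIONS = {
    frozenset({"truthfulness", "privacy"}),   # e.g. citing sources vs. exposing personal data
    frozenset({"non-maleficence", "environmental sustainability"}),
}

def resolve(principles_at_stake: list[str]) -> list[str]:
    """Order the principles at stake by rank and flag any mapped tensions."""
    ordered = sorted(principles_at_stake, key=lambda p: PRINCIPLE_RANK.get(p, 99))
    for a in principles_at_stake:
        for b in principles_at_stake:
            if a < b and frozenset({a, b}) in TENSIONS:
                print(f"tension: {a} <-> {b}; higher-ranked principle wins pending review")
    return ordered

if __name__ == "__main__":
    print(resolve(["privacy", "truthfulness"]))
```
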
Watch Out, You are Live! Toward Understanding the Impact of AI on Privacy of Employees

Ashneet Kaur, Sudhanshu Maheshwari, Indranil Bose, Simarjeet Singh
This study conducts a systematic literature review to comprehensively explore the implications of Artificial Intelligence (AI) on employee privacy. It utilizes the privacy calculus framework to analyze the trade-offs organizations and employees face when integrating AI technologies in the workplace. The research evaluates how different types of AI technologies compromise or safeguard privacy and discusses their varying impacts.

Problem: The rapid and pervasive adoption of AI in the workplace has enhanced efficiency but also raised significant concerns regarding employee privacy. There is a research gap in holistically understanding the broad implications of advancing AI technologies on employee privacy, as previous studies often focus on narrow applications without a comprehensive theoretical framework.

Outcome:
- The integration of AI in the workplace presents a trade-off, offering benefits like objective performance evaluation while posing significant risks such as over-surveillance and erosion of trust.
- The study categorizes AI into four advancing types (descriptive, predictive, prescriptive, and autonomous), each progressively increasing the complexity of privacy challenges and altering the employee privacy calculus.
- As AI algorithms become more advanced and opaque, it becomes more difficult for employees to understand how their data is used, leading to feelings of powerlessness and potential resistance.
- The paper identifies a significant lack of empirical research specifically on AI's impact on employee privacy, as opposed to the more widely studied area of consumer privacy.
- To mitigate privacy risks, the study recommends practical strategies for organizations, including transparent communication about data practices, involving employees in AI system design, and implementing strong ethical AI frameworks.
Keywords: Artificial Intelligence, Employee Privacy, Privacy Calculus, Systematic Review, Workplace Surveillance, AI Ethics
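
The privacy calculus framing used in this entry is often described as a weighing of perceived benefits against perceived privacy risks. The sketch below shows that weighing as a minimal calculation across the study's four AI types; the numeric ratings and the simple benefit-minus-risk score are illustrative assumptions, not the authors' model.

```python
# Minimal, hypothetical privacy-calculus scoring: perceived benefits minus
# perceived risks for each AI type named in the study. All numbers are
# illustrative placeholders, not survey data.

from dataclasses import dataclass

@dataclass
class Assessment:
    ai_type: str              # descriptive, predictive, prescriptive, autonomous
    perceived_benefit: float  # e.g. 1 (low) .. 7 (high)
    perceived_risk: float     # e.g. 1 (low) .. 7 (high)

    def net_calculus(self) -> float:
        """Positive values suggest the benefit outweighs the perceived privacy cost."""
        return self.perceived_benefit - self.perceived_risk

assessments = [
    Assessment("descriptive", 5.0, 3.0),
    Assessment("predictive", 5.5, 4.5),
    Assessment("prescriptive", 5.5, 5.5),
    Assessment("autonomous", 6.0, 6.5),
]

for a in assessments:
    print(f"{a.ai_type:>12}: net calculus = {a.net_calculus():+.1f}")
```
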
IBM Watson Health Growth Strategy: Is Artificial Intelligence (AI) The Answer

Abhinav Shekhar, Rakesh Gupta, Sujeet Kumar Sharma
This study analyzes IBM's strategic dilemma with its Watson Health initiative, which aimed to monetize artificial intelligence for cancer detection and treatment recommendations. It explores whether IBM should continue its specialized focus on healthcare (a vertical strategy) or reposition Watson as a versatile, cross-industry AI platform (a horizontal strategy). The paper provides insights into the opportunities and challenges associated with unlocking the transformational power of AI in a business context.

Problem: Despite a multi-billion dollar investment and initial promise, IBM's Watson Health struggled with profitability, model accuracy, and scalability. The AI's recommendations were not consistently reliable or generalizable across different patient populations and healthcare systems, leading to poor adoption. This created a critical strategic crossroads for IBM: whether to continue investing heavily in the specialized healthcare vertical or to pivot towards a more scalable, general-purpose AI platform to drive future growth.

Outcome:
- Model Accuracy & Bias: Watson's performance was inconsistent, and its recommendations, trained primarily on US data, were not always applicable to international patient populations, revealing significant algorithmic bias.
- Lack of Explainability: The 'black box' nature of the AI made it difficult for clinicians to trust its recommendations, hindering adoption as they could not understand its reasoning process.
- Integration and Scaling Challenges: Integrating Watson into existing hospital workflows and electronic health records was costly and complex, creating significant barriers to widespread implementation.
- Strategic Dilemma: The challenges forced IBM to choose between continuing its high-investment vertical strategy in healthcare, pivoting to a more scalable horizontal cross-industry platform, or attempting a convergence of both approaches.
Keywords: Artificial Intelligence (AI), AI Strategy, Watson, Healthcare AI, Vertical AI, Horizontal AI, AI Ethics
Understanding the Implementation of Responsible Artificial Intelligence in Organizations: A Neo-Institutional Theory Perspective

David Horneber
This study conducts a literature review to understand why organizations struggle to effectively implement Responsible Artificial Intelligence (AI). Using a neo-institutional theory framework, the paper analyzes institutional pressures, common challenges, and the roles that AI practitioners play in either promoting or hindering the adoption of responsible AI practices.

Problem: Despite growing awareness of AI's ethical and social risks and the availability of responsible AI frameworks, many organizations fail to translate these principles into practice. This gap between stated policy and actual implementation means that the goals of making AI safe and ethical are often not met, creating significant risks for businesses and society while undermining trust.

Outcome:
- A fundamental tension exists between the pressures to adopt Responsible AI (e.g., legal compliance, reputation) and inhibitors (e.g., market demand for functional AI, lack of accountability), leading to ineffective, symbolic implementation.
- Ineffectiveness often takes two forms: 'policy-practice decoupling' (policies are adopted for show but not implemented) and 'means-end decoupling' (practices are implemented but fail to achieve their intended ethical goals).
- AI practitioners play crucial roles as either 'institutional custodians' who resist change to preserve existing technical practices, or as 'institutional entrepreneurs' who champion the implementation of Responsible AI.
- The study concludes that a bottom-up approach by motivated practitioners is insufficient; effective implementation requires strong organizational support, clear structures, and proactive processes to bridge the gap between policy and successful outcomes.
Keywords: Artificial Intelligence, Responsible AI, AI Ethics, Organizations, Neo-Institutional Theory
Building an Artificial Intelligence Explanation Capability

Ida Someh, Barbara H. Wixom, Cynthia M. Beath, Angela Zutavern
This study introduces the concept of an "AI Explanation Capability" (AIX) that companies must develop to successfully implement artificial intelligence. Using case studies from the Australian Taxation Office and General Electric, the paper outlines a framework with four key dimensions (decision tracing, bias remediation, boundary setting, and value formulation) to help organizations address the inherent challenges of AI.

Problem: Businesses are increasingly adopting AI but struggle with its distinctive challenges, particularly the "black-box" nature of complex models. This opacity makes it difficult to trust AI, manage risks like algorithmic bias, prevent unintended negative consequences, and prove the technology's business value, ultimately hindering widespread and successful deployment.

Outcome:
- AI projects present four unique challenges: Model Opacity (the inability to understand a model's inner workings), Model Drift (degrading performance over time), Mindless Actions (acting without context), and the Unproven Nature of AI (difficulty in demonstrating value).
- To overcome these challenges, organizations must build a new organizational competency called an AI Explanation Capability (AIX).
- The AIX capability is comprised of four dimensions: Decision Tracing (making models understandable), Bias Remediation (identifying and fixing unfairness), Boundary Setting (defining safe operating limits for AI), and Value Formulation (articulating and measuring the business value of AI).
- Building this capability requires a company-wide effort, involving domain experts and business leaders alongside data scientists to ensure AI is deployed safely, ethically, and effectively.
Keywords: AI explanation, explainable AI, AIX capability, model opacity, model drift, AI governance, bias remediation
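
Of the four AIX dimensions listed above, boundary setting is the most directly illustrable in code: a gate that lets a model act autonomously only inside pre-agreed operating limits. The sketch below is a hypothetical example of that idea; the thresholds, field names, and escalation rule are assumptions rather than anything described by the authors.

```python
# Hypothetical boundary-setting gate: the model's output is used only when
# the prediction falls inside agreed operating limits; otherwise the case is
# routed to a human. Thresholds and input ranges are illustrative.

MIN_CONFIDENCE = 0.85                    # below this, defer to a person
TRAINED_AMOUNT_RANGE = (0.0, 50_000.0)   # inputs outside the training range are out of bounds

def within_boundaries(prediction_confidence: float, claim_amount: float) -> bool:
    in_range = TRAINED_AMOUNT_RANGE[0] <= claim_amount <= TRAINED_AMOUNT_RANGE[1]
    return prediction_confidence >= MIN_CONFIDENCE and in_range

def decide(prediction: str, confidence: float, claim_amount: float) -> str:
    if within_boundaries(confidence, claim_amount):
        return f"auto: {prediction}"
    return "escalate: outside boundaries, send to human reviewer"

print(decide("approve", 0.92, 12_000))   # handled automatically
print(decide("approve", 0.92, 80_000))   # escalated: input outside trained range
```
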
A Narrative Exploration of the Immersive Workspace 2040

Alexander Richter, Shahper Richter, Nastaran Mohammadhossein
This study explores the future of work in the public sector by developing a speculative narrative, 'Immersive Workspace 2040.' Created through a structured methodology in collaboration with a New Zealand government ministry, the paper uses this narrative to make abstract technological trends tangible and analyze their deep structural implications.

Problem: Public sector organizations face significant challenges adapting to disruptive digital innovations like AI due to traditionally rigid workforce structures and planning models. This study addresses the need for government leaders to move beyond incremental improvements and develop a forward-looking vision to prepare their workforce for profound, nonlinear changes.

Outcome:
- A major transformation will be the shift from fixed jobs to a 'Dynamic Talent Orchestration System,' where AI orchestrates teams based on verifiable skills for specific projects, fundamentally changing career paths and HR systems.
- The study identifies a 'Human-AI Governance Paradox,' where technologies designed to augment human intellect can also erode human agency and authority, necessitating safeguards like tiered autonomy frameworks to ensure accountability remains with humans.
- Unlike the private sector's focus on efficiency, public sector AI must be designed for value alignment, embedding principles like equity, fairness, and transparency directly into its operational logic to maintain public trust.
Keywords: Future of Work, Immersive Workspace, Human-AI Collaboration, Public Sector Transformation, Narrative Foresight, AI Governance, Digital Transformation
Bias Measurement in Chat-optimized LLM Models for Spanish and English

Ligia Amparo Vergara Brunal, Diana Hristova, and Markus Schaal
This study develops and applies a method to evaluate social biases in advanced AI language models (LLMs) for both English and Spanish. Researchers tested three state-of-the-art models on two datasets designed to expose stereotypical thinking, comparing performance across languages and contexts.

Problem: As AI language models are increasingly used for critical decisions in areas like healthcare and human resources, there's a risk they could spread harmful social biases. While bias in English AI has been extensively studied, there is a significant lack of research on how these biases manifest in other widely spoken languages, such as Spanish.

Outcome:
- Models were generally worse at identifying and refusing to answer biased questions in Spanish compared to English.
- However, when the models did provide an answer to a biased prompt, their responses were often fairer (less stereotypical) in Spanish.
- Models provided fairer answers when the questions were direct and unambiguous, as opposed to indirect or vague.
Keywords: LLM, bias, multilingual, Spanish, AI ethics, fairness
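
The two headline measurements in this entry, how often a model refuses a biased prompt and how often its answers follow the stereotype, reduce to simple rates over labeled responses. The sketch below computes those rates per language on a tiny invented sample; it does not reproduce the paper's datasets, prompts, or scoring code.

```python
# Hypothetical scoring of labeled model responses per language. Each record
# says whether the model refused the biased prompt and, if it answered,
# whether the answer matched the stereotype. Records are invented.

from collections import defaultdict

responses = [
    {"lang": "en", "refused": True,  "stereotypical": None},
    {"lang": "en", "refused": False, "stereotypical": True},
    {"lang": "es", "refused": False, "stereotypical": False},
    {"lang": "es", "refused": False, "stereotypical": False},
]

by_lang = defaultdict(list)
for r in responses:
    by_lang[r["lang"]].append(r)

for lang, items in by_lang.items():
    refusal_rate = sum(r["refused"] for r in items) / len(items)
    answered = [r for r in items if not r["refused"]]
    stereotype_rate = (
        sum(r["stereotypical"] for r in answered) / len(answered) if answered else 0.0
    )
    print(f"{lang}: refusal rate {refusal_rate:.0%}, stereotypical-answer rate {stereotype_rate:.0%}")
```
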
Algorithmic Management: An MCDA-Based Comparison of Key Approaches

Arne Jeppe, Tim Brée, and Erik Karger
This study employs Multi-Criteria Decision Analysis (MCDA) to evaluate and compare four distinct approaches for governing algorithmic management systems: principle-based, rule-based, risk-based, and auditing-based. The research gathered preferences from 27 experts regarding each approach's effectiveness, feasibility, adaptability, and stakeholder acceptability to determine the most preferred strategy.

Problem: As organizations increasingly use algorithms to manage workers, they face the challenge of governing these systems to ensure fairness, transparency, and accountability. While several governance models have been proposed conceptually, there is a significant research gap regarding which approach is empirically preferred by experts and most practical for balancing innovation with responsible implementation.

Outcome:
- Experts consistently and strongly preferred a hybrid, risk-based approach for governing algorithmic management systems.
- This approach was perceived as the most effective in mitigating risks (like bias and privacy violations) while also demonstrating good adaptability to new technologies and high stakeholder acceptability.
- The findings suggest that a 'one-size-fits-all' strategy is ineffective; instead, a pragmatic approach that tailors the intensity of governance to the level of potential harm is most suitable.
- Purely rule-based approaches were seen as too rigid and slow to adapt, while purely principle-based approaches were considered difficult to enforce.
Keywords: Algorithmic Management, Multi-Criteria Decision Analysis (MCDA), Risk Management, Organizational Control, Governance, AI Ethics
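
A common way to run an MCDA comparison like the one described here is a weighted sum of criterion scores per alternative. The summary does not say which MCDA method the authors used, so the sketch below is a generic weighted-sum illustration with invented weights and scores for the four governance approaches and four criteria named above.

```python
# Generic weighted-sum MCDA over the four governance approaches and four
# criteria named in the entry. Weights and scores are illustrative, not the
# study's elicited expert preferences.

criteria_weights = {            # must sum to 1.0
    "effectiveness": 0.35,
    "feasibility": 0.25,
    "adaptability": 0.20,
    "stakeholder_acceptability": 0.20,
}

scores = {                      # 0-10 per criterion, invented
    "principle-based": {"effectiveness": 5, "feasibility": 7, "adaptability": 7, "stakeholder_acceptability": 6},
    "rule-based":      {"effectiveness": 7, "feasibility": 5, "adaptability": 3, "stakeholder_acceptability": 6},
    "risk-based":      {"effectiveness": 8, "feasibility": 7, "adaptability": 8, "stakeholder_acceptability": 8},
    "auditing-based":  {"effectiveness": 7, "feasibility": 6, "adaptability": 6, "stakeholder_acceptability": 7},
}

totals = {
    approach: sum(criteria_weights[c] * s for c, s in per_criterion.items())
    for approach, per_criterion in scores.items()
}

# Ranking the approaches by total mirrors the comparison step; the study's
# actual preferences were elicited from 27 experts.
for approach, total in sorted(totals.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{approach:>16}: {total:.2f}")
```
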
AI Agents as Governance Actors in Data Trusts – A Normative and Design Framework

Arnold F. Arz von Straussenburg, Jens J. Marga, Timon T. Aldenhoff, and Dennis M. Riehle
This study proposes a design theory to safely and ethically integrate Artificial Intelligence (AI) agents into the governance of data trusts. The paper introduces a normative framework that unifies fiduciary principles, institutional trust, and AI ethics. It puts forward four specific design principles to guide the development of AI systems that can act as responsible governance actors within these trusts, ensuring they protect beneficiaries' interests.

Problem: Data trusts are frameworks for responsible data management, but integrating powerful AI systems creates significant ethical and security challenges. AI can be opaque and may have goals that conflict with the interests of data owners, undermining the fairness and accountability that data trusts are designed to protect. This creates a critical need for a governance model that allows organizations to leverage AI's benefits without compromising their fundamental duties to data owners.

Outcome:
- The paper establishes a framework to guide the integration of AI into data trusts, ensuring AI actions align with ethical and fiduciary responsibilities.
- It introduces four key design principles for AI agents: 1) Fiduciary alignment to prioritize beneficiary interests, 2) Accountability through complete traceability and oversight, 3) Transparent explainability for all AI decisions, and 4) Autonomy-preserving oversight to maintain robust human supervision.
- The research demonstrates that AI can enhance efficiency in data governance without eroding stakeholder trust or ethical standards if implemented correctly.
- It provides actionable recommendations, such as automated audits and dynamic consent mechanisms, to ensure the responsible use of AI within data ecosystems for the common good.
Keywords: Data Trusts, Normative Framework, AI Governance, Fairness, AI Agents
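
Two of the actionable recommendations above, automated audits and dynamic consent, can be pictured as a thin wrapper around any agent action: check current consent first, then log the action to an append-only trail for later audit. The sketch below is a hypothetical illustration of that pattern; none of the names or structures come from the paper.

```python
# Hypothetical accountability wrapper for an AI agent acting inside a data
# trust: every action is allowed only with current consent and is written to
# an append-only audit trail for later review.

import datetime

consent_registry = {            # data subject -> purposes currently consented to
    "subject-001": {"service_improvement"},
    "subject-002": {"service_improvement", "research"},
}

audit_trail: list[dict] = []    # append-only log for automated audits

def agent_act(subject_id: str, purpose: str, action: str) -> bool:
    """Perform `action` only if the subject currently consents to `purpose`."""
    allowed = purpose in consent_registry.get(subject_id, set())
    audit_trail.append({
        "at": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "subject": subject_id,
        "purpose": purpose,
        "action": action,
        "allowed": allowed,
    })
    return allowed

print(agent_act("subject-001", "research", "train model on records"))   # False: no consent
print(agent_act("subject-002", "research", "train model on records"))   # True
print(f"audit entries recorded: {len(audit_trail)}")
```
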
Gender Bias in LLMs for Digital Innovation: Disparities and Fairness Concerns

Sumin Kim-Andres and Steffi Haag
This study investigates gender bias in large language models (LLMs) like ChatGPT within the context of digital innovation and entrepreneurship. Using two tasks—associating gendered terms with professions and simulating venture capital funding decisions—the researchers analyzed ChatGPT-4o's outputs to identify how societal gender biases are reflected and reinforced by AI.

Problem: As businesses increasingly integrate AI tools for tasks like brainstorming, hiring, and decision-making, there's a significant risk that these systems could perpetuate harmful gender stereotypes. This can create disadvantages for female entrepreneurs and innovators, potentially widening the existing gender gap in technology and business leadership.

Outcome:
- ChatGPT-4o associated male-denoting terms with digital innovation and tech-related professions significantly more often than female-denoting terms.
- In simulated venture capital scenarios, the AI model exhibited 'in-group bias,' predicting that both male and female venture capitalists would be more likely to fund entrepreneurs of their own gender.
- The study confirmed that LLMs can perpetuate gender bias through implicit cues like names alone, even when no explicit gender information is provided.
- The findings highlight the risk of AI reinforcing stereotypes in professional decision-making, which can limit opportunities for underrepresented groups in business and innovation.
Keywords: Gender Bias, Large Language Models, Fairness, Digital Innovation, Artificial Intelligence
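
The first task in this entry, checking how often a model pairs gendered terms with innovation and tech professions, comes down to prompting and tallying. The sketch below shows only the tally step on invented, already-collected model completions; it does not call any model API and the counts are not the study's data.

```python
# Hypothetical tally of gendered-term associations in collected model
# completions. Each record is (profession prompt, gendered term the model
# chose). The records are invented for illustration.

from collections import Counter

completions = [
    ("software engineer", "he"),
    ("software engineer", "he"),
    ("startup founder", "he"),
    ("startup founder", "she"),
    ("UX designer", "she"),
]

per_profession: dict[str, Counter] = {}
for profession, term in completions:
    per_profession.setdefault(profession, Counter())[term] += 1

for profession, counts in per_profession.items():
    total = sum(counts.values())
    male_share = counts.get("he", 0) / total
    print(f"{profession:>18}: male-denoting share {male_share:.0%} (n={total})")
```
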
A Survey on Citizens' Perceptions of Social Risks in Smart Cities

Elena Fantino, Sebastian Lins, and Ali Sunyaev
This study identifies 15 key social risks associated with the development of smart cities, such as privacy violations and increased surveillance. It then examines public perception of these risks through a quantitative survey of 310 participants in Germany and Italy. The research aims to understand how citizens view the balance between the benefits and potential harms of smart city technologies.

Problem: While the digital transformation of cities promises benefits like enhanced efficiency and quality of life, it often overlooks significant social risks. Issues like data privacy, cybersecurity threats, and growing social divides can undermine human security and well-being, yet citizens' perspectives on these dangers are frequently ignored in the planning and implementation process.

Outcome:
- Citizens rate both the probability and severity of social risks in smart cities as relatively high.
- Despite recognizing these significant risks, participants generally maintain a positive attitude towards the concept of smart cities, highlighting a duality in public perception.
- The risk perceived as most probable by citizens is 'profiling', while 'cybersecurity threats' are seen as having the most severe impact.
- Risk perception differs based on demographic factors like age and nationality; for instance, older participants and Italian citizens reported higher risk perceptions than their younger and German counterparts.
- The findings underscore the necessity of a participatory and ethical approach to smart city development that actively involves citizens to mitigate risks and ensure equitable benefits.
Keywords: smart cities, social risks, citizens' perception, AI ethics, social impact
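
The survey analysis summarized above amounts to averaging probability and severity ratings per risk and comparing subgroups. The sketch below shows that aggregation on a tiny invented sample; the two risk names match the entry, but every rating and the 1-5 scale are placeholders.

```python
# Hypothetical aggregation of citizen risk ratings (1-5 scales). Only two of
# the study's 15 risks are shown and all numbers are invented.

from statistics import mean

ratings = [
    {"country": "DE", "risk": "profiling",             "probability": 4, "severity": 3},
    {"country": "IT", "risk": "profiling",             "probability": 5, "severity": 4},
    {"country": "DE", "risk": "cybersecurity threats", "probability": 3, "severity": 5},
    {"country": "IT", "risk": "cybersecurity threats", "probability": 4, "severity": 5},
]

for risk in ("profiling", "cybersecurity threats"):
    subset = [r for r in ratings if r["risk"] == risk]
    print(
        f"{risk:>22}: mean probability {mean(r['probability'] for r in subset):.1f}, "
        f"mean severity {mean(r['severity'] for r in subset):.1f}"
    )

# Simple subgroup comparison by nationality, mirroring the demographic finding.
for country in ("DE", "IT"):
    subset = [r for r in ratings if r["country"] == country]
    print(f"{country}: mean perceived probability {mean(r['probability'] for r in subset):.1f}")
```
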
Ensembling vs. Delegating: Different Types of AI-Involved Decision-Making and Their Effects on Procedural Fairness Perceptions

Christopher Diebel, Akylzhan Kassymova, Mari-Klara Stein, Martin Adam, and Alexander Benlian
This study investigates how employees perceive the fairness of decisions that involve artificial intelligence (AI). Using an online experiment with 79 participants, researchers compared scenarios where a performance evaluation was conducted by a manager alone, fully delegated to an AI, or made by a manager and an AI working together as an 'ensemble'.

Problem: As companies increasingly use AI for important workplace decisions like hiring and performance reviews, it's crucial to understand how employees react. Prior research suggests that AI-driven decisions can be perceived as unfair, but it was unclear how different methods of AI integration (specifically, fully handing over a decision to AI versus a collaborative human-AI approach) affect employee perceptions of fairness and their trust in management.

Outcome:
- Decisions fully delegated to an AI are perceived as significantly less fair than decisions made solely by a human manager.
- This perceived unfairness in AI-delegated decisions leads to a lower level of trust in the manager who made the delegation.
- Importantly, these negative effects on fairness and trust do not occur when a human-AI 'ensemble' method is used, where both the manager and the AI are equally involved in the decision-making process.
Keywords: Decision-Making, AI Systems, Procedural Fairness, Ensemble, Delegation
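
The experiment in this entry compares mean fairness ratings across three conditions (human only, delegated to AI, human-AI ensemble). A standard way to test such differences is a one-way ANOVA; the sketch below uses scipy on invented ratings, so it reflects the general design rather than the authors' actual analysis or data.

```python
# Generic analysis sketch for a three-condition between-subjects design:
# compare perceived procedural fairness across human-only, AI-delegated, and
# human-AI ensemble decisions. Ratings below are invented.

from statistics import mean
from scipy.stats import f_oneway

human_only = [5.8, 6.1, 5.5, 6.0, 5.9]
ai_delegated = [4.2, 4.6, 3.9, 4.4, 4.1]
ensemble = [5.6, 5.9, 5.7, 5.4, 6.0]

for label, group in [("human only", human_only), ("AI delegated", ai_delegated), ("ensemble", ensemble)]:
    print(f"{label:>13}: mean fairness {mean(group):.2f}")

stat, p_value = f_oneway(human_only, ai_delegated, ensemble)
print(f"one-way ANOVA: F = {stat:.2f}, p = {p_value:.4f}")
```
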
How Boards of Directors Govern Artificial Intelligence

Benjamin van Giffen, Helmuth Ludwig
This study investigates how corporate boards of directors oversee and integrate Artificial Intelligence (AI) into their governance practices. Based on in-depth interviews with high-profile board members from diverse industries, the research identifies common challenges and provides examples of effective strategies for board-level AI governance.

Problem: Despite the transformative impact of AI on the business landscape, the majority of corporate boards struggle to understand its implications and their role in governing it. This creates a significant gap, as boards have a fiduciary responsibility to oversee strategy, risk, and investment related to critical technologies, yet AI is often not a mainstream boardroom topic.

Outcome:
- Identified four key groups of board-level AI governance issues: Strategy and Firm Competitiveness, Capital Allocation, AI Risks, and Technology Competence.
- Boards should ensure AI is integrated into the company's core business strategy by evaluating its impact on the competitive landscape and making it a key topic in annual strategy meetings.
- Effective capital allocation involves encouraging AI experimentation, securing investments in foundational AI capabilities, and strategically considering external partnerships and acquisitions.
- To manage risks, boards must engage with experts, integrate AI-specific risks into Enterprise Risk Management (ERM) frameworks, and address ethical, reputational, and legal challenges.
- Enhancing technology competence requires boards to develop their own AI literacy, review board and committee composition for relevant expertise, and include AI competency in executive succession planning.
Keywords: AI governance, board of directors, corporate governance, artificial intelligence, strategic management, risk management, technology competence
How HireVue Created "Glass Box" Transparency for its AI Application

Monideepa Tarafdar, Irina Rets, Lindsey Zuloaga, Nathan Mondragon
This paper presents a case study on HireVue, a company that provides an AI application for assessing job interviews. It describes the transparency-related challenges HireVue faced and explains how it addressed them by developing a "glass box" approach, which focuses on making the entire system of AI development and deployment understandable, rather than just the technical algorithm.

Problem: AI applications used for critical decisions, such as hiring, are often perceived as technical "black boxes." This lack of clarity creates significant challenges for businesses in trusting the technology, ensuring fairness, mitigating bias, and complying with regulations, which hinders the responsible adoption of AI in recruitment.

Outcome:
- The study introduces a "glass box" model for AI transparency, which shifts focus from the technical algorithm to the broader sociotechnical system, including design processes, client interactions, and organizational functions.
- HireVue implemented five types of transparency practices: pre-deployment client-focused, internal, post-deployment client-focused, knowledge-related, and audit-related.
- This multi-faceted approach helps build trust with clients, regulators, and applicants by providing clarity on the AI's application, limitations, and validation processes.
- The findings serve as a practical guide for other AI software companies on how to create effective and comprehensive transparency for their own applications, especially in high-stakes fields.
Keywords: AI transparency, algorithmic hiring, glass box model, ethical AI, recruitment technology, HireVue, case study
The Danish Business Authority's Approach to the Ongoing Evaluation of AI Systems

Oliver Krancher, Per Rådberg Nagbøl, Oliver Müller
This study examines the strategies employed by the Danish Business Authority (DBA), a pioneering public-sector adopter of AI, for the continuous evaluation of its AI systems. Through a case study of the DBA's practices and their custom X-RAI framework, the paper provides actionable recommendations for other organizations on how to manage AI systems responsibly after deployment.

Problem: AI systems can degrade in performance over time, a phenomenon known as model drift, leading to inaccurate or biased decisions. Many organizations lack established procedures for the ongoing monitoring and evaluation of AI systems post-deployment, creating risks of operational failures, financial losses, and non-compliance with regulations like the EU AI Act.

Outcome:
- Organizations need a multi-faceted approach to AI evaluation, as single strategies like human oversight or periodic audits are insufficient on their own.
- The study presents the DBA's three-stage evaluation process: pre-production planning, in-production monitoring, and formal post-implementation evaluations.
- A key strategy is 'enveloping' AI systems and their evaluations, which means setting clear, pre-defined boundaries for the system's use and how it will be monitored to prevent misuse and ensure accountability.
- The DBA uses an MLOps platform and an 'X-RAI' (Transparent, Explainable, Responsible, Accurate AI) framework to ensure traceability, automate deployments, and guide risk assessments.
- Formal evaluations should use deliberate sampling, including random and negative cases, and 'blind' reviews (where caseworkers assess a case without seeing the AI's prediction) to mitigate human and machine bias.
Keywords: AI evaluation, AI governance, model drift, responsible AI, MLOps, public sector AI, case study
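
Two of the DBA practices summarized above, in-production drift monitoring and deliberate sampling of cases for blind review, are straightforward to picture in code. The sketch below is a hypothetical illustration of both; the accuracy threshold, sample sizes, and random-plus-negative split are assumptions, not the X-RAI framework itself.

```python
# Hypothetical post-deployment checks in the spirit of the entry:
# (1) flag model drift when live accuracy falls well below the accuracy
#     measured at deployment, and
# (2) draw a deliberate review sample mixing random cases with negative
#     (rejected) cases for blind human re-assessment.

import random

BASELINE_ACCURACY = 0.91        # accuracy at deployment (assumed)
DRIFT_TOLERANCE = 0.05          # alert if live accuracy drops more than this

def drift_alert(live_accuracy: float) -> bool:
    return (BASELINE_ACCURACY - live_accuracy) > DRIFT_TOLERANCE

def review_sample(cases: list[dict], n_random: int = 3, n_negative: int = 2) -> list[dict]:
    """Random cases plus negative decisions, to be re-assessed without showing
    the reviewer the model's prediction (the 'blind' part happens in the UI).
    Duplicates across the two draws are possible in this toy version."""
    negatives = [c for c in cases if c["decision"] == "reject"]
    picked = random.sample(cases, min(n_random, len(cases)))
    picked += random.sample(negatives, min(n_negative, len(negatives)))
    return picked

cases = [{"id": i, "decision": "reject" if i % 4 == 0 else "approve"} for i in range(20)]
print("drift alert:", drift_alert(live_accuracy=0.84))
print("cases sent to blind review:", [c["id"] for c in review_sample(cases)])
```
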
How Stakeholders Operationalize Responsible AI in Data-Sensitive Contexts

Shivaang Sharma, Angela Aristidou
This study investigates the challenges of implementing responsible AI in complex, multi-stakeholder environments such as humanitarian crises. Researchers analyzed the deployment of six AI tools, identifying significant gaps in expectations and values among developers, aid agencies, and affected populations. Based on these findings, the paper introduces the concept of "AI Responsibility Rifts" (AIRRs) and proposes the SHARE framework to help organizations navigate these disagreements.

Problem: Traditional approaches to AI safety focus on objective, technical risks like hallucinations or data bias. This perspective is insufficient for data-sensitive contexts because it overlooks the subjective disagreements among diverse stakeholders about an AI tool's purpose, impact, and ethical boundaries. These unresolved conflicts, or "rifts," can hinder the adoption of valuable AI tools and lead to unintended negative consequences for vulnerable populations.

Outcome:
- The study introduces the concept of "AI Responsibility Rifts" (AIRRs), defined as misalignments in stakeholders' subjective expectations, values, and perceptions of an AI system's impact.
- It identifies five key areas where these rifts occur: Safety, Humanity, Accountability, Reliability, and Equity.
- The paper proposes the SHARE framework, a self-diagnostic questionnaire designed to help organizations identify and address these rifts among their stakeholders.
- It provides core recommendations and caveats for executives to close the gaps in each of the five rift areas, promoting a more inclusive and effective approach to responsible AI.
Keywords: Responsible AI, AI ethics, stakeholder management, humanitarian AI, AI governance, data-sensitive contexts, SHARE framework
How to Operationalize Responsible Use of Artificial Intelligence

Lorenn P. Ruster, Katherine A. Daniell
This study outlines a practical five-phase process for organizations to translate responsible AI principles into concrete business practices. Based on participatory action research with two startups, the paper provides a roadmap for crafting specific responsibility pledges and embedding them into organizational processes, moving beyond abstract ethical statements.

Problem: Many organizations are committed to the responsible use of AI but struggle with how to implement it practically, creating a significant "principle-to-practice gap". This confusion can lead to inaction or superficial efforts known as "ethics-washing," where companies appear ethical without making substantive changes. The study addresses the lack of clear, actionable guidance for businesses, especially smaller ones, on where to begin.

Outcome:
- Presents a five-phase process for operationalizing responsible AI: 1) Buy-in, 2) Intuition-building, 3) Pledge-crafting, 4) Pledge-communicating, and 5) Pledge-embedding.
- Argues that responsible AI should be approached as a systems problem, considering organizational mindsets, culture, and processes, not just technical fixes.
- Recommends that organizations create contextualized, action-oriented "pledges" rather than simply adopting generic AI principles.
- Finds that investing in responsible AI practices early, even in small projects, helps build organizational capability and transfers to future endeavors.
- Provides a framework for businesses to navigate communication challenges, balancing transparency with commercial interests to build user trust.
Keywords: Responsible AI, AI Ethics, Operationalization, Systems Thinking, AI Governance, Pledge-making, Startups
Translating AI Ethics Principles into Practice to Support Robotic Process Automation Implementation

Dörte Schulte-Derne, Ulrich Gnewuch
This study investigates how abstract AI ethics principles can be translated into concrete actions during technology implementation. Through a longitudinal case study at a German energy service provider, the authors observed the large-scale rollout of Robotic Process Automation (RPA) over 30 months. The research provides actionable recommendations for leaders to navigate the ethical challenges and employee concerns that arise from AI-driven automation.

Problem: Organizations implementing AI to automate processes often face uncertainty, fear, and resistance from employees. While high-level AI ethics principles exist to provide guidance, business leaders struggle to apply these abstract concepts in practice. This creates a significant gap between knowing *what* ethical goals to aim for and knowing *how* to achieve them during a real-world technology deployment.

Outcome:
- Define clear roles for implementing and supervising AI systems, and ensure senior leaders accept overall responsibility for any negative consequences.
- Strive for a fair distribution of AI's benefits and costs among all employees, addressing tensions in a diverse workforce.
- Increase transparency by making the AI's work visible (e.g., allowing employees to observe a bot at a dedicated workstation) to turn fear into curiosity.
- Enable open communication among trusted peers, creating a 'safe space' for employees to discuss concerns without feeling judged.
- Help employees cope with fears by involving them in the implementation process and avoiding the overwhelming removal of all routine tasks at once.
- Involve employee representation bodies and data protection officers from the beginning of a new AI initiative to proactively address privacy and labor concerns.
Keywords: AI ethics, Robotic Process Automation (RPA), change management, technology implementation, case study, employee resistance, ethical guidelines