How Stakeholders Operationalize Responsible AI in Data-Sensitive Contexts

Shivaang Sharma, Angela Aristidou
This study investigates the challenges of implementing responsible AI in complex, multi-stakeholder settings such as humanitarian crises. The researchers analyzed the deployment of six AI tools and identified significant gaps in expectations and values among developers, aid agencies, and affected populations. Based on these findings, the paper introduces the concept of "AI Responsibility Rifts" (AIRRs) and proposes the SHARE framework to help organizations navigate these disagreements.

Problem

Traditional approaches to AI safety focus on objective, technical risks such as hallucinations or data bias. This perspective is insufficient for data-sensitive contexts because it overlooks subjective disagreements among diverse stakeholders about an AI tool's purpose, impact, and ethical boundaries. Left unresolved, these conflicts, or "rifts," can hinder the adoption of valuable AI tools and lead to unintended harm to vulnerable populations.

Outcome

- The study introduces the concept of "AI Responsibility Rifts" (AIRRs), defined as misalignments in stakeholders' subjective expectations, values, and perceptions of an AI system's impact.
- It identifies five key areas where these rifts occur: Safety, Humanity, Accountability, Reliability, and Equity.
- The paper proposes the SHARE framework, a self-diagnostic questionnaire designed to help organizations identify and address these rifts among their stakeholders.
- It provides core recommendations and caveats for executives to close the gaps in each of the five rift areas, promoting a more inclusive and effective approach to responsible AI.

Keywords: Responsible AI, AI ethics, stakeholder management, humanitarian AI, AI governance, data-sensitive contexts, SHARE framework