Revisiting the Responsibility Gap in Human-AI Collaboration from an Affective Agency Perspective

Jonas Rieskamp, Annika Küster, Bünyamin Kalyoncuoglu, Paulina Frieda Saffer, and Milad Mirbabaie
This study investigates how responsibility is understood and assigned when artificial intelligence (AI) systems influence decision-making processes. Using qualitative interviews with experts across various sectors, the research explores how human oversight and emotional engagement (affective agency) shape accountability in human-AI collaboration.

Problem
As AI systems become more autonomous in fields from healthcare to finance, a 'responsibility gap' emerges: it becomes difficult to assign accountability for errors or outcomes, as responsibility is diffused among developers, users, and the AI itself, challenging traditional models of liability.

Outcome
- Using AI does not diminish human responsibility; instead, it often intensifies it, requiring users to critically evaluate and validate AI outputs.
- Most professionals view AI as a supportive tool or 'sparring partner' rather than an autonomous decision-maker, maintaining that humans must have the final authority.
- The uncertainty surrounding how AI works encourages users to be more cautious and critical, which helps bridge the responsibility gap rather than leading to blind trust.
- Responsibility remains anchored in human oversight, with users feeling accountable not only for the final decision but also for how the AI was used to reach it.
Keywords
Artificial Intelligence (AI), Responsibility Gap, Responsibility in Human-AI Collaboration, Decision-Making, Sociomateriality, Affective Agency