Revisiting the Responsibility Gap in Human-AI Collaboration from an Affective Agency Perspective
Jonas Rieskamp, Annika Küster, Bünyamin Kalyoncuoglu, Paulina Frieda Saffer, and Milad Mirbabaie
This study investigates how responsibility is understood and assigned when artificial intelligence (AI) systems influence decision-making processes. Using qualitative interviews with experts across various sectors, the research explores how human oversight and emotional engagement (affective agency) shape accountability in human-AI collaboration.
Problem
As AI systems become more autonomous in fields from healthcare to finance, a 'responsibility gap' emerges. It becomes difficult to assign accountability for errors or outcomes, as responsibility is diffused among developers, users, and the AI itself, challenging traditional models of liability.
Outcome
- Using AI does not diminish human responsibility; instead, it often intensifies it, requiring users to critically evaluate and validate AI outputs.
- Most professionals view AI as a supportive tool or 'sparring partner' rather than an autonomous decision-maker, maintaining that humans must have the final authority.
- The uncertainty surrounding how AI works encourages users to be more cautious and critical, which helps bridge the responsibility gap rather than leading to blind trust.
- Responsibility remains anchored in human oversight, with users feeling accountable not only for the final decision but also for how the AI was used to reach it.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. In a world where artificial intelligence is becoming a key player in corporate decision-making, who is truly responsible when things go wrong? Today we're diving into a fascinating new study titled "Revisiting the Responsibility Gap in Human-AI Collaboration from an Affective Agency Perspective."
Host: It investigates how responsibility is understood and assigned when AI systems influence our choices, and how human oversight and even our emotional engagement with technology can shape accountability. Here to break it all down for us is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, let's start with the core issue this study addresses: the 'responsibility gap'. It sounds important, but what does it mean in the real world for businesses?
Expert: It's one of the biggest challenges facing organizations today. As AI becomes more autonomous in fields from finance to healthcare, it gets incredibly difficult to pinpoint who is accountable for a bad outcome. Is it the developer who wrote the code? The manager who used the AI's recommendation? The company that deployed it? Responsibility gets diffused across so many people and systems that it can feel like no one is truly in charge.
Host: A 'many-hands' problem, as the researchers call it. It sounds like a legal and ethical minefield. So, how did the study approach this complex topic?
Expert: They went straight to the source. The researchers conducted in-depth interviews with twenty professionals across various sectors—automotive, healthcare, IT—people who are actively working with AI systems every day. They wanted to understand the real-world experiences and feelings of those on the front lines of this technological shift.
Host: So, based on those real-world conversations, what did they find? I think many assume that AI might reduce our sense of responsibility, letting us off the hook.
Expert: That's the common assumption, but the study found the exact opposite. Far from diminishing responsibility, using AI actually seems to intensify it. Professionals reported a greater awareness of the need to validate and interpret AI outputs. They know they can't just say, "The AI told me to do it." Their personal accountability actually grows.
Host: That's counterintuitive. So if the AI isn't the one in charge, how do these professionals view its role in their work?
Expert: Most see AI as a supportive tool, not an autonomous boss. A recurring image from the interviews was that of a 'sparring partner' or a 'second opinion'. It’s a powerful assistant for analyzing data or generating ideas, but the final authority, the final decision, always rests with the human user.
Host: And what about the 'black box' nature of some AI? The fact that we don't always know how it reaches its conclusions. Does that lead to people trusting it blindly?
Expert: No, and this was another surprising finding. That very uncertainty often encourages users to be more cautious and critical. The study found that because professionals understand the potential for AI errors and can't always see the logic behind an output, they are spurred to double-check the results. This critical mindset actually helps to bridge the responsibility gap, rather than widen it.
Host: This is incredibly insightful. So, Alex, let's get to the most important question for our audience. What are the key business takeaways here? What should a leader listening right now do with this information?
Expert: There are three critical takeaways. First, you cannot use AI as a scapegoat. The study makes it clear that responsibility remains anchored in human oversight. Leaders must build a culture where employees are expected and empowered to question, verify, and even override AI suggestions.
Host: Okay, so accountability culture is number one. What’s next?
Expert: Second, define roles with absolute clarity. Your teams need to understand the AI's function. Is it an analyst, an advisor, a co-pilot? The 'sparring partner' model seems to be a very effective framework. Make it clear that while the tool is powerful, the final judgment—and the responsibility that comes with it—belongs to your people.
Host: That makes sense. And the third takeaway?
Expert: Finally, rethink your AI training. It’s not just about teaching people which buttons to press. The real need is to develop critical thinking skills for a hybrid human-AI environment. The study suggests that employees need to be more aware of their own feelings—like over-trust or skepticism—towards the AI and use that awareness to make better judgments.
Host: So, to summarize: AI doesn't erase responsibility, it heightens it. We should treat it as a 'sparring partner', not a boss. And its very opacity can be a strength if it encourages a more critical, human-in-the-loop approach.
Expert: Exactly. It's about augmenting human intelligence, not replacing human accountability.
Host: Alex Ian Sutherland, thank you so much for these powerful insights.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we continue to explore the intersection of business and technology.