Discerning Truth: A Qualitative Comparative Analysis of Reliance on AI Advice in Deepfake Detection

Christiane Ernst
This study investigates how individuals rely on AI advice when attempting to detect deepfake videos. Using a judge-advisor system, participants first judged a video's authenticity on their own, then saw an AI tool's evaluation and could revise their decision. The research applied Qualitative Comparative Analysis (QCA) to explore how factors such as AI literacy, trust, and algorithm aversion shape the decision to rely on the AI's advice.

Problem
Recent advancements in AI have led to the creation of hyper-realistic deepfakes, making it increasingly difficult for people to distinguish between real and manipulated media. This poses serious threats, including the rapid spread of misinformation, reputational damage, and the potential destabilization of political systems. There is a need to understand how humans interact with AI detection tools to build more effective countermeasures.

Outcome
- A key finding is that participants only changed their initial decision when the AI tool indicated that a video was genuine, not when it flagged a deepfake.
- This suggests users are more likely to use AI tools to confirm authenticity rather than to reliably detect manipulation, raising concerns about unreflective acceptance of AI advice.
- Reliance on the AI's advice that a video was genuine was driven by specific combinations of conditions, occurring when individuals exhibited either high algorithm aversion, low trust, or high AI literacy.
Keywords: Deepfake, Reliance on AI Advice, Qualitative Comparative Analysis (QCA), Human-AI Collaboration