Overcoming Algorithm Aversion with Transparency: Can Transparent Predictions Change User Behavior?

Lasse Bohlen, Sven Kruschel, Julian Rosenberger, Patrick Zschech, and Mathias Kraus
This study investigates whether making a machine learning (ML) model's reasoning transparent can help overcome people's reluctance to rely on algorithms, a phenomenon known as 'algorithm aversion'. In a user study with 280 participants, the researchers examined how transparency interacts with the previously established method of allowing users to adjust an algorithm's predictions.

Problem
People often hesitate to rely on algorithms for decision-making, even when the algorithms outperform human judgment. While giving users control to adjust algorithmic outputs is known to reduce this aversion, it has been unclear whether making the algorithm's 'thinking process' transparent would also help, or perhaps even be more effective.

Outcome
- Giving users the ability to adjust an algorithm's predictions significantly reduces their reluctance to use it, confirming findings from previous research.
- In contrast, merely making the algorithm transparent by showing its decision logic had no statistically significant effect on users' willingness to choose the model.
- The ability to adjust the model's output (adjustability) thus appears to be a more powerful lever for encouraging algorithm adoption than transparency alone.
- The effects of transparency and adjustability were largely independent of each other rather than synergistic.
Keywords: Algorithm Aversion, Adjustability, Transparency, Interpretable Machine Learning, Replication Study