A Framework for Context-Specific Theorizing on Trust and Reliance in Collaborative Human-AI Decision-Making Environments

Niko Spatscheck
This study analyzes 59 empirical research papers to understand why findings on human trust in AI have been inconsistent. It synthesizes this research into a single framework that identifies the key factors influencing how people decide to trust and rely on AI systems for decision-making. The goal is to provide a more unified and context-aware understanding of the complex relationship between humans and AI.

Problem

Effective collaboration between humans and AI is often hindered because people either trust AI too much (overreliance) or too little (underreliance), leading to poor outcomes. Existing research offers conflicting explanations for this behavior, creating a knowledge gap for developers and organizations. This study addresses the problem that prior research has largely ignored the specific context (such as the user's expertise, the AI's design, and the nature of the task) that is crucial for explaining these inconsistencies.

Outcome

- The study develops a comprehensive framework that categorizes the factors influencing trust in and reliance on AI into three main groups: human-related (e.g., user expertise, cognitive biases), AI-related (e.g., performance, explainability), and decision-related (e.g., risk, complexity).
- It concludes that trust is not static but is dynamically shaped by the interplay of these contextual factors.
- This framework provides researchers and businesses with a practical tool to predict how users will interact with AI and to design systems that foster appropriate levels of trust, leading to improved collaborative performance.

Keywords: AI Systems, Trust, Reliance, Collaborative Decision-Making, Human-AI Collaboration, Contextual Factors, Conceptual Framework