A Multi-Level Strategy for Deepfake Content Moderation under EU Regulation

Luca Deck, Max-Paul Förster, Raimund Weidlich, and Niklas Kühl
This study reviews existing methods for marking, detecting, and labeling deepfakes to assess their effectiveness under new EU regulations. Based on a multivocal literature review, the paper finds that individual methods are insufficient. Consequently, it proposes a novel multi-level strategy that combines the strengths of existing approaches for more scalable and practical content moderation on online platforms.

Problem: The increasing availability of deepfake technology poses a significant risk to democratic societies by enabling the spread of political disinformation. While the European Union has enacted regulations to enforce transparency, there is a lack of effective industry standards for implementation. This makes it challenging for online platforms to moderate deepfake content at scale, as current individual methods fail to meet regulatory and practical requirements.

Outcome:
- Individual methods for marking, detecting, and labeling deepfakes are insufficient to meet EU regulatory and practical requirements alone.
- The study proposes a multi-level strategy that combines the strengths of various methods (e.g., technical detection, trusted sources) to create a more robust and effective moderation process.
- A simple scoring mechanism is introduced to ensure the strategy is scalable and practical for online platforms managing massive amounts of content (see the illustrative sketch after this list).
- The proposed framework is designed to be adaptable to new types of deepfake technology and allows for context-specific risk assessment, such as for political communication.
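The summary does not specify how the scoring mechanism works; the following is a minimal illustrative sketch, with invented signal names, weights, and thresholds, of how marking, detection, and trusted-source signals could be combined into a single moderation score with a tiered response.

```python
from dataclasses import dataclass

@dataclass
class ContentSignals:
    has_provenance_watermark: bool   # marking: provenance watermark or similar marker found
    detector_confidence: float       # detection: estimated probability of synthetic content (0..1)
    from_trusted_source: bool        # labeling: uploader is a verified/trusted account
    political_context: bool          # context flag for higher-risk content

def moderation_score(s: ContentSignals) -> float:
    """Combine signals into a 0..1 score; higher means more likely to need a deepfake label.
    Weights and thresholds are illustrative assumptions, not values from the paper."""
    score = 0.0
    if s.has_provenance_watermark:
        score += 0.5                 # explicit marking is the strongest evidence
    score += 0.4 * s.detector_confidence
    if s.from_trusted_source:
        score -= 0.2                 # trusted sources lower the risk estimate
    if s.political_context:
        score += 0.1                 # context-specific risk adjustment
    return max(0.0, min(1.0, score))

def action(score: float) -> str:
    # A tiered response keeps moderation scalable: only mid-range scores need human review.
    if score >= 0.7:
        return "auto-label as AI-generated"
    if score >= 0.4:
        return "queue for human review"
    return "no action"

print(action(moderation_score(ContentSignals(True, 0.8, False, True))))  # auto-label
```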
Keywords: Deepfakes, EU Regulation, Online Platforms, Content Moderation, Political Communication
Ensembling vs. Delegating: Different Types of AI-Involved Decision-Making and Their Effects on Procedural Fairness Perceptions

Christopher Diebel, Akylzhan Kassymova, Mari-Klara Stein, Martin Adam, and Alexander Benlian
This study investigates how employees perceive the fairness of decisions that involve artificial intelligence (AI). Using an online experiment with 79 participants, researchers compared scenarios where a performance evaluation was conducted by a manager alone, fully delegated to an AI, or made by a manager and an AI working together as an 'ensemble'.

Problem: As companies increasingly use AI for important workplace decisions like hiring and performance reviews, it's crucial to understand how employees react. Prior research suggests that AI-driven decisions can be perceived as unfair, but it was unclear how different methods of AI integration—specifically, fully handing over a decision to AI versus a collaborative human-AI approach—affect employee perceptions of fairness and their trust in management.

Outcome:
- Decisions fully delegated to an AI are perceived as significantly less fair than decisions made solely by a human manager.
- This perceived unfairness in AI-delegated decisions leads to a lower level of trust in the manager who made the delegation.
- Importantly, these negative effects on fairness and trust do not occur when a human-AI 'ensemble' method is used, where both the manager and the AI are equally involved in the decision-making process.
Keywords: Decision-Making, AI Systems, Procedural Fairness, Ensemble, Delegation
The Value of Blockchain-Verified Micro-Credentials in Hiring Decisions

Lyuba Stafyeyeva
This study investigates how blockchain verification and the type of credential-issuing institution (university vs. learning academy) influence employer perceptions of a job applicant's trustworthiness, expertise, and salary expectations. Using an experimental design with 200 participants, the research evaluated how different credential formats affected hiring assessments.

Problem: Verifying academic credentials is often slow, expensive, and prone to fraud, undermining trust in the system. While new micro-credentials (MCs) offer an alternative, their credibility is often unclear to employers, and it is unknown if technologies like blockchain can effectively solve this trust issue in real-world hiring scenarios.

Outcome:
- Blockchain verification did not significantly increase employers' perceptions of an applicant's trustworthiness or expertise.
- Employers showed no significant preference for credentials issued by traditional universities over those from alternative learning academies, suggesting a shift toward competency-based hiring.
- Applicants with blockchain-verified credentials were offered lower minimum starting salaries, indicating that while verification may reduce hiring risk for employers, it does not increase the candidate's perceived value.
- The results suggest that institutional prestige is becoming less important than verifiable skills in the hiring process.
Keywords: micro-credentials, blockchain, trust, verification, employer decision-making
Design Principles for SME-focused Maturity Models in Information Systems

Stefan Rösl, Daniel Schallmo, and Christian Schieder
This study addresses the limited practical application of maturity models (MMs) among small and medium-sized enterprises (SMEs). Through a structured analysis of 28 relevant academic articles, the researchers developed ten actionable design principles (DPs) to improve the usability and strategic impact of MMs for SMEs. These principles were subsequently validated by 18 recognized experts to ensure their practical relevance.

Problem: Maturity models are valuable tools for assessing organizational capabilities, but existing frameworks are often too complex, resource-intensive, and not tailored to the specific constraints of SMEs. This misalignment leads to low adoption rates, preventing smaller businesses from effectively using these models to guide their transformation and innovation efforts.

Outcome:
- The study developed and validated ten actionable design principles (DPs) for creating maturity models specifically tailored for Small and Medium-sized Enterprises (SMEs).
- These principles, confirmed by experts as highly useful, provide a structured foundation for researchers and designers to build MMs that are more accessible, relevant, and usable for SMEs.
- The research bridges the gap between MM theory and real-world applicability, enabling the development of tools that better support SMEs in strategic planning and capability improvement.
Keywords: Design Principles, Maturity Model, Capability Assessment, SME, Information Systems, SME-specific MMs
Evaluating Consumer Decision-Making Trade-Offs in Smart Service Systems in the Smart Home Domain

Björn Konopka and Manuel Wiesche
This study investigates the trade-offs consumers make when purchasing smart home devices. Using a choice-based conjoint analysis, the research evaluates the relative importance of eight attributes related to performance (e.g., reliability), privacy (e.g., data storage), and market factors (e.g., price and provider).

Problem: While smart home technology is increasingly popular, there is limited understanding of how consumers weigh different factors, particularly how they balance privacy concerns against product performance and cost. This study addresses this gap by quantifying which features consumers prioritize when making purchasing decisions for smart home systems.

Outcome:
- Reliability and the device provider are the most influential factors in consumer decision-making, significantly outweighing other attributes.
- Price and privacy-related attributes (such as data collection scope, purpose, and user controls) play a comparatively lesser role.
- Consumers strongly prefer products that are reliable and made by a trusted (in this case, domestic) provider.
- The findings indicate that consumers are willing to trade off privacy concerns for tangible benefits in performance and trust in the manufacturer.
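As a minimal sketch of how relative attribute importance is typically derived from a choice-based conjoint analysis: the importance of an attribute is the range of its part-worth utilities divided by the sum of ranges across all attributes. The part-worth values below are invented for illustration and cover only four of the eight attributes studied.

```python
# Invented part-worth utilities for a few smart home attributes (not the study's estimates).
partworths = {
    "reliability":  {"low": -1.2, "high": 1.2},
    "provider":     {"foreign": -0.9, "domestic": 0.9},
    "price":        {"high": -0.4, "low": 0.4},
    "data_storage": {"cloud": -0.3, "local": 0.3},
}

# Relative importance = utility range of the attribute / total utility range.
ranges = {attr: max(levels.values()) - min(levels.values()) for attr, levels in partworths.items()}
total = sum(ranges.values())

for attr, r in sorted(ranges.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{attr:12s} importance: {100 * r / total:5.1f}%")
```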
Keywords: Smart Service Systems, Smart Home, Conjoint, Consumer Preferences, Privacy
LLMs for Intelligent Automation - Insights from a Systematic Literature Review

David Sonnabend, Mahei Manhai Li, and Christoph Peters
This study conducts a systematic literature review to examine how Large Language Models (LLMs) can enhance Intelligent Automation (IA). The research aims to address the limitations of traditional Robotic Process Automation (RPA), such as its difficulties with unstructured data and changing workflows, by systematically investigating the integration of LLMs.

Problem: Traditional Robotic Process Automation (RPA) struggles with complex tasks involving unstructured data and dynamic workflows. While Large Language Models (LLMs) show promise in addressing these issues, there has been no systematic investigation into how they can specifically advance the field of Intelligent Automation (IA), creating a significant research gap.

Outcome:
- LLMs are primarily used to process complex inputs, such as unstructured text, within automation workflows.
- They are leveraged to generate automation workflows directly from natural language commands, simplifying the creation process (see the sketch after this list).
- LLMs are also used to guide goal-oriented Graphical User Interface (GUI) navigation, making automation more adaptable to interface changes.
- A key research gap was identified in the lack of systems that combine these different capabilities and enable continuous learning at runtime.
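As a minimal sketch of the workflow-generation idea: a hypothetical complete() function stands in for any LLM API, and the JSON step schema is an invented example, not one from the reviewed literature.

```python
import json

def complete(prompt: str) -> str:
    """Hypothetical stand-in for an LLM API call; replace with your provider's client."""
    raise NotImplementedError

WORKFLOW_PROMPT = """Convert the following instruction into a JSON automation workflow.
Return only JSON with a top-level "steps" list; each step has "action", "target", and "params".

Instruction: {instruction}
"""

def instruction_to_workflow(instruction: str) -> dict:
    raw = complete(WORKFLOW_PROMPT.format(instruction=instruction))
    workflow = json.loads(raw)     # parse the model output into a structured workflow
    assert "steps" in workflow     # minimal validation before handing it to an executor
    return workflow

# Example (requires a real LLM backend behind complete()):
# wf = instruction_to_workflow("Download yesterday's invoices and file them by customer name")
# for step in wf["steps"]:
#     print(step["action"], step["target"])
```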
Keywords: Large Language Models (LLMs), Intelligent Process Automation (IPA), Intelligent Automation (IA), Cognitive Automation (CA), Tool Learning, Systematic Literature Review, Robotic Process Automation (RPA)
Label Error Detection in Defect Classification using Area Under the Margin (AUM) Ranking on Tabular Data

Pavlos Rath-Manakidis, Kathrin Nauth, Henry Huick, Miriam Fee Unger, Felix Hoenig, Jens Poeppelbuss, and Laurenz Wiskott
This study introduces an efficient method using Area Under the Margin (AUM) ranking with gradient-boosted decision trees to detect labeling errors in tabular data. The approach is designed to improve data quality for machine learning models used in industrial quality control, specifically for flat steel defect classification. The method's effectiveness is validated on both public and real-world industrial datasets, demonstrating it can identify problematic labels in a single training run.

Problem: Automated surface inspection systems in manufacturing rely on machine learning models trained on large datasets. The performance of these models is highly dependent on the quality of the data labels, but errors frequently occur due to annotator mistakes or ambiguous defect definitions. Existing methods for finding these label errors are often computationally expensive and not optimized for the tabular data formats common in industrial applications.

Outcome:
- The proposed AUM method is as effective as more complex, computationally expensive techniques for detecting label errors but requires only a single model training run.
- The method successfully identifies both synthetically created and real-world label errors in industrial datasets related to steel defect classification.
- Integrating this method into quality control workflows significantly reduces the manual effort required to find and correct mislabeled data, improving the overall quality of training datasets and subsequent model performance.
- In a real-world test, the method flagged suspicious samples for expert review, where 42% were confirmed to be labeling errors.
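The paper's exact margin definition and model configuration are not reproduced here; the following minimal sketch, assuming scikit-learn's GradientBoostingClassifier and a probability-based margin as a stand-in, illustrates the core idea of ranking samples by Area Under the Margin computed across boosting stages in a single training run.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic tabular data with a few labels flipped to simulate annotation errors.
X, y = make_classification(n_samples=1000, n_features=20, n_informative=8,
                           n_classes=3, random_state=0)
rng = np.random.default_rng(0)
flipped = rng.choice(len(y), size=30, replace=False)
y_noisy = y.copy()
y_noisy[flipped] = (y_noisy[flipped] + 1) % 3

model = GradientBoostingClassifier(n_estimators=100, random_state=0)
model.fit(X, y_noisy)

# Margin per boosting stage: probability of the assigned label minus the best competing class.
margins = []
idx = np.arange(len(y_noisy))
for proba in model.staged_predict_proba(X):
    assigned = proba[idx, y_noisy]
    competing = proba.copy()
    competing[idx, y_noisy] = -np.inf
    margins.append(assigned - competing.max(axis=1))

aum = np.mean(margins, axis=0)          # Area Under the Margin per sample
suspects = np.argsort(aum)[:50]         # lowest AUM = most suspicious labels
recall = np.isin(flipped, suspects).mean()
print(f"{recall:.0%} of the injected label errors appear among the 50 lowest-AUM samples")
```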
Keywords: Label Error Detection, Automated Surface Inspection System (ASIS), Machine Learning, Gradient Boosting, Data-centric AI
Taking a Sociotechnical Perspective on Self-Sovereign Identity – A Systematic Literature Review

Lukas Florian Bossler, Teresa Huber, and Julia Kroenung
This study provides a comprehensive analysis of academic literature on Self-Sovereign Identity (SSI), a system that aims to give individuals control over their digital data. Through a systematic literature review, the paper identifies and categorizes the key sociotechnical challenges—both technical and social—that affect the implementation and widespread adoption of SSI. The goal is to map the current research landscape and highlight underexplored areas.

Problem: As individuals use more internet services, they lose control over their personal data, which is often managed and monetized by large tech companies. While Self-Sovereign Identity (SSI) is a promising solution to restore user control, academic research has disproportionately focused on technical aspects like security. This has created a significant knowledge gap regarding the crucial social challenges, such as user acceptance, trust, and usability, which are vital for SSI's real-world success.

Outcome:
- Security and privacy are the most frequently discussed challenges in SSI literature, often linked to the use of blockchain technology.
- Social factors essential for adoption, including user acceptance, trust, usability, and control, are significantly overlooked in current academic research.
- Over half of the analyzed papers discuss SSI in a general sense, with a lack of focus on specific application domains like e-government, healthcare, or finance.
- A potential mismatch exists between SSI's privacy needs and the inherent properties of blockchain, suggesting that alternative technologies should be explored.
- The paper concludes there is a strong need for more domain-specific and design-oriented research to address the social hurdles of SSI adoption.
Keywords: self-sovereign identity, decentralized identity, blockchain, sociotechnical challenges, digital identity, systematic literature review
Measuring AI Literacy of Future Knowledge Workers: A Mediated Model of AI Experience and AI Knowledge

Sarah Hönigsberg, Sabrine Mallek, Laura Watkowski, and Pauline Weritz
This study investigates how future professionals develop AI literacy, which is the ability to effectively use and understand AI tools. Using a survey of 352 business school students, the researchers examined how hands-on experience with AI (both using and designing it) and theoretical knowledge about AI work together to build overall proficiency. The research proposes a new model showing that knowledge acts as a critical bridge between simply using AI and truly understanding it.

Problem: As AI becomes a standard tool in professional settings, simply knowing how to use it isn't enough; professionals need a deeper understanding, or "AI literacy," to use it effectively and responsibly. The study addresses the problem that current frameworks for teaching AI skills often overlook the specific needs of knowledge workers and don't clarify how hands-on experience translates into true competence. This gap makes it difficult for companies and universities to design effective training programs to prepare the future workforce.

Outcome:
- Hands-on experience with AI is crucial, but it doesn't directly create AI proficiency; instead, it serves to build a foundation of AI knowledge.
- This structured AI knowledge is the critical bridge that turns practical experience into true AI literacy, allowing individuals to critique and apply AI insights effectively.
- Experience in designing or configuring AI systems has a significantly stronger positive impact on developing AI literacy than just using AI tools.
- The findings suggest that education and corporate training should combine practical, hands-on projects with structured learning about how AI works to build a truly AI-literate workforce.
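As a minimal sketch of the mediation logic described above (AI experience builds AI knowledge, which in turn builds AI literacy), using simulated survey scores and ordinary least squares rather than the study's data or measurement model; the sample size matches the study, but everything else is illustrative.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 352  # same sample size as the study, but the data here are simulated

experience = rng.normal(size=n)                                  # hands-on AI experience
knowledge = 0.6 * experience + rng.normal(scale=0.8, size=n)     # mediator: AI knowledge
literacy = 0.1 * experience + 0.7 * knowledge + rng.normal(scale=0.8, size=n)

def ols(y, *xs):
    """Least-squares slopes (intercept dropped) of y on the given predictors."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

(a,) = ols(knowledge, experience)                  # path a: experience -> knowledge
c_prime, b = ols(literacy, experience, knowledge)  # paths c' (direct) and b (knowledge -> literacy)
indirect = a * b                                   # mediated (indirect) effect

print(f"direct effect c' = {c_prime:.2f}, indirect effect a*b = {indirect:.2f}")
```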
Keywords: knowledge worker, AI literacy, digital intelligence, digital literacy, AI knowledge
Mapping Digitalization in the Crafts Industry: A Systematic Literature Review

Pauline Désirée Gantzer, Audris Pulanco Umel, and Christoph Lattemann
This study challenges the perception that the craft industry lags in digital transformation by conducting a systematic literature review of 141 scientific and practitioner papers. It aims to map the application and influence of specific digital technologies across various craft sectors. The findings are used to identify patterns of adoption, highlight gaps, and recommend future research directions.

Problem: The craft and skilled trades industry, despite its significant economic and cultural role, is often perceived as traditional and slow to adopt digital technologies. This view suggests the sector is missing out on crucial business opportunities and innovations, creating a knowledge gap about the actual extent and nature of digitalization within these businesses.

Outcome:
- The degree and type of digital technology adoption vary significantly across different craft sectors.
- Contrary to the perception of being laggards, craft businesses are actively applying a wide range of digital technologies to improve efficiency, competitiveness, and customer engagement.
- Many businesses (47.7% of cases analyzed) use digital tools primarily for value creation, such as optimizing production processes and operational efficiency.
- Sectors like construction and textiles integrate sophisticated technologies (e.g., AI, IoT, BIM), while more traditional crafts prioritize simpler tools like social media and e-commerce for marketing.
- Digital transformation in the craft industry is not a one-size-fits-all process but is shaped by sector-specific needs, resource constraints, and cultural values.
Keywords: crafts, digital transformation, digitalization, skilled trades, systematic literature review
Typing Less, Saying More? – The Effects of Using Generative AI in Online Consumer Review Writing

Maximilian Habla
This study investigates how using Generative AI (GenAI) impacts the quality and informativeness of online consumer reviews. Through a scenario-based online experiment, the research compares reviews written with and without GenAI assistance, analyzing factors like the writer's cognitive load and the resulting review's detail, complexity, and sentiment.

Problem: Writing detailed, informative online reviews is a mentally demanding task for consumers, which often results in less helpful content for others making purchasing decisions. While platforms use templates to help, these still require significant effort from the reviewer. This study addresses the gap in understanding whether new GenAI tools can make it easier for people to write better, more useful reviews.

Outcome:
- Using GenAI significantly reduces the perceived cognitive load (mental effort) for people writing reviews.
- Reviews written with the help of GenAI are more informative, covering a greater number and a wider diversity of product aspects and topics.
- GenAI-assisted reviews tend to exhibit higher linguistic complexity and express a more positive sentiment, even when the star rating given by the user is the same.
- Contrary to the initial hypothesis, the reduction in cognitive load did not directly account for the increase in review informativeness, suggesting other mechanisms are at play.
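The study's operationalization of informativeness and linguistic complexity is not reproduced here; as a rough sketch, assuming an invented aspect lexicon and simple lexical statistics as stand-ins, informativeness-style metrics for a review text could be computed like this.

```python
import re

# Illustrative aspect lexicon; the paper's actual measures of informativeness,
# complexity, and sentiment are not reproduced here.
ASPECTS = {
    "battery": {"battery", "charge", "charging"},
    "display": {"screen", "display", "resolution"},
    "price":   {"price", "cost", "value"},
    "build":   {"build", "material", "sturdy"},
}

def review_metrics(text: str) -> dict:
    tokens = re.findall(r"[a-z']+", text.lower())
    covered = {aspect for aspect, words in ASPECTS.items() if words & set(tokens)}
    return {
        "aspect_coverage": len(covered),                              # distinct product aspects mentioned
        "type_token_ratio": len(set(tokens)) / max(len(tokens), 1),   # crude lexical-diversity proxy
        "length": len(tokens),
    }

print(review_metrics("Great value for the price, the screen is sharp and the battery lasts two days."))
```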
Keywords: Online Reviews, Informativeness, GenAI, Cognitive Load Theory, Linguistic Complexity, Sentiment Analysis
Unveiling the Influence of Personality, Identity, and Organizational Culture on Generative AI Adoption in the Workplace

Dugaxhin Xhigoli
This qualitative study examines how an employee's personality, professional identity, and company culture influence their engagement with generative AI (GenAI). Through 23 expert interviews, the research explores the underlying factors that shape different AI adoption behaviors, from transparent integration to strategic concealment.

Problem: As companies rapidly adopt generative AI, they encounter a wide range of employee responses, yet there is limited understanding of what drives this variation. This study addresses the research gap by investigating why employees differ in their AI usage, specifically focusing on how individual psychology and the organizational environment interact to shape these behaviors.

Outcome:
- The study identified four key dimensions influencing GenAI adoption: personality-driven usage behavior, AI-driven changes to professional identity, organizational culture factors, and the organizational risks of unmanaged AI use.
- Four distinct employee archetypes were identified: 'Innovative Pioneers' who openly use and identify with AI, 'Hidden Users' who identify with AI but conceal its use for competitive advantage, 'Transparent Users' who openly use AI as a tool, and 'Critical Skeptics' who remain cautious and avoid it.
- Personality traits, particularly those from the 'Dark Triad' like narcissism, and competitive work environments significantly drive the strategic concealment of AI use.
- A company's culture is critical; open, innovative cultures foster ethical and transparent AI adoption, whereas rigid, hierarchical cultures encourage concealment and the rise of risky 'Shadow AI'.
Keywords: Generative AI, Personality Traits, AI Identity, Organizational Culture, AI Adoption
Structural Estimation of Auction Data through Equilibrium Learning and Optimal Transport

Markus Ewert and Martin Bichler
This study proposes a new method for analyzing auction data to understand bidders' private valuations. It extends an existing framework by reformulating the estimation challenge as an optimal transport problem, which avoids the statistical limitations of traditional techniques. This novel approach uses a proxy equilibrium model to analytically evaluate bid distributions, leading to more accurate and robust estimations.

Problem: Designing profitable auctions, such as setting an optimal reserve price, requires knowing how much bidders are truly willing to pay, but this information is hidden. Existing methods to estimate these valuations from observed bids often suffer from statistical biases and inaccuracies, especially with limited data, leading to poor auction design and lost revenue for sellers.

Outcome:
- The proposed optimal transport-based estimator consistently outperforms established kernel-based techniques, showing significantly lower error in estimating true bidder valuations.
- The new method is more robust, providing accurate estimates even in scenarios with high variance in bidding behavior where traditional methods fail.
- In practical tests, reserve prices set using the new method's estimates led to significant revenue gains for the auctioneer, while prices derived from older methods resulted in zero revenue.
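The paper's estimator relies on learned proxy equilibria; as a simplified sketch of the optimal-transport idea, assume a textbook symmetric first-price auction with iid Uniform[0, theta] valuations, where the equilibrium bid is b(v) = (n-1)/n * v, and estimate theta by minimizing the one-dimensional Wasserstein distance between observed and model-implied bids. The grid search and sample sizes are illustrative choices.

```python
import numpy as np
from scipy.stats import wasserstein_distance

rng = np.random.default_rng(1)
n_bidders, theta_true = 4, 10.0

# "Observed" bids: in a symmetric first-price auction with iid Uniform[0, theta]
# valuations, the equilibrium bid is b(v) = (n - 1) / n * v.
values = rng.uniform(0, theta_true, size=(2000, n_bidders))
observed_bids = ((n_bidders - 1) / n_bidders * values).ravel()

def model_bids(theta: float, size: int = 20000) -> np.ndarray:
    v = rng.uniform(0, theta, size=size)
    return (n_bidders - 1) / n_bidders * v

# Structural estimation: pick the valuation parameter whose implied bid
# distribution is closest to the observed one in Wasserstein distance.
grid = np.linspace(5, 15, 101)
losses = [wasserstein_distance(observed_bids, model_bids(t)) for t in grid]
theta_hat = grid[int(np.argmin(losses))]
print(f"estimated upper bound of valuations: {theta_hat:.2f} (true: {theta_true})")
```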
Keywords: Structural Estimation, Auctions, Equilibrium Learning, Optimal Transport, Econometrics
A Case Study on Large Vehicles Scheduling for Railway Infrastructure Maintenance: Modelling and Sensitivity Analysis

Jannes Glaubitz, Thomas Wolff, Henry Gräser, Philipp Sommerfeldt, Julian Reisch, David Rößler-von Saß, and Natalia Kliewer
This study presents an optimization-driven approach to scheduling large vehicles for preventive railway infrastructure maintenance, using real-world data from Deutsche Bahn. It employs a greedy heuristic and a Mixed Integer Programming (MIP) model to evaluate key factors influencing scheduling efficiency. The goal is to provide actionable insights for strategic decision-making and improve operational management.

Problem: Railway infrastructure maintenance is a critical operational task that often causes significant disruptions, delays, and capacity restrictions for both passenger and freight services. These disruptions reduce the overall efficiency and attractiveness of the railway system. The study addresses the challenge of optimizing maintenance schedules to maximize completed work while minimizing interference with regular train operations.

Outcome:
- The primary bottleneck in maintenance scheduling is the limited availability and reusability of pre-defined work windows ('containers'), not the number of maintenance vehicles.
- Increasing scheduling flexibility by allowing work containers to be booked multiple times dramatically improves maintenance completion rates, from 84.7% to 98.2%.
- Simply adding more vehicles to the fleet provides only marginal improvements, as scheduling efficiency is the limiting factor.
- Increasing the operational radius for vehicles from depots and moderately extending shift lengths can further improve maintenance coverage.
- The analysis suggests that large, predefined maintenance containers are often inefficient and should be split into smaller sections to improve flexibility and resource utilization.
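The study uses real Deutsche Bahn data, a greedy heuristic, and a MIP model; the following is only a toy sketch of the greedy assignment logic, with invented task, container (work window), and vehicle structures, not the paper's formulation.

```python
from dataclasses import dataclass

@dataclass
class Container:            # pre-defined work window on a track section
    hours: float            # remaining bookable time
    region: str

@dataclass
class Vehicle:
    region: str
    shift_hours: float      # remaining shift time

@dataclass
class Task:
    hours: float
    region: str
    priority: int

def greedy_schedule(tasks, containers, vehicles):
    """Assign tasks to the first feasible (container, vehicle) pair, highest priority first."""
    plan = []
    for task in sorted(tasks, key=lambda t: -t.priority):
        for c in containers:
            if c.region != task.region or c.hours < task.hours:
                continue
            v = next((v for v in vehicles
                      if v.region == task.region and v.shift_hours >= task.hours), None)
            if v is None:
                continue
            c.hours -= task.hours       # book part of the work window
            v.shift_hours -= task.hours
            plan.append((task, c, v))
            break
    return plan

tasks = [Task(3, "north", 2), Task(2, "north", 1), Task(4, "south", 3)]
containers = [Container(5, "north"), Container(3, "south")]
vehicles = [Vehicle("north", 8), Vehicle("south", 3)]
done = greedy_schedule(tasks, containers, vehicles)
print(f"scheduled {len(done)} of {len(tasks)} tasks")
```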
Keywords: Railway Track Maintenance Planning, Maintenance Track Possession Problem, Operations Research, Mixed Integer Programming, Vehicle Scheduling, Sensitivity Analysis, Optimization
Boundary Resources – A Review

David Rochholz
This study conducts a systematic literature review to analyze the current state of research on 'boundary resources,' which are the tools like APIs and SDKs that connect digital platforms with third-party developers. By examining 89 publications, the paper identifies major themes and significant gaps in the academic literature. The goal is to consolidate existing knowledge and propose a clear research agenda for the future.

Problem: Digital platforms rely on third-party developers to create value, but the tools (boundary resources) that enable this collaboration are not well understood. Research is fragmented and often overlooks critical business aspects, such as the financial reasons for opening a platform and how to monetize these resources. Furthermore, most studies focus on consumer apps, ignoring the unique challenges of business-to-business (B2B) platforms and the rise of AI-driven developers.

Outcome:
- Identifies four key gaps in current research: the financial impact of opening platforms, the overemphasis on consumer (B2C) versus business (B2B) contexts, the lack of a clear definition for what constitutes a platform, and the limited understanding of modern developers, including AI agents.
- Proposes a research agenda focused on monetization strategies, platform valuation, and the distinct dynamics of B2B ecosystems.
- Emphasizes the need to understand how the role of developers is changing with the advent of generative AI.
- Concludes that future research must create better frameworks to help businesses manage and profit from their platform ecosystems in a more strategic way.
Keywords: Boundary Resource, Platform, Complementor, Research Agenda, Literature Review
You Only Lose Once: Blockchain Gambling Platforms

Lorenz Baum, Arda Güler, and Björn Hanneke
This study investigates user behavior on emerging blockchain-based gambling platforms to provide insights for regulators and user protection. The researchers analyzed over 22,800 gambling rounds from YOLO, a smart contract-based platform, involving 3,306 unique users. A generalized linear mixed model was used to identify the effects of users' cognitive biases on their on-chain gambling activities.

Problem: Online gambling revenues are increasing, exacerbating societal problems, and much of this activity evades regulatory oversight. The rise of decentralized, blockchain-based gambling platforms aggravates these issues by promising transparency while lacking user protection measures, making it easier to exploit users' cognitive biases and harder for authorities to enforce regulations.

Outcome:
- Cognitive biases like the 'anchoring effect' (repeatedly betting the same amount) and the 'gambler's fallacy' (believing a losing streak makes a win more likely) significantly increase the probability that a user will continue gambling.
- The study confirms that blockchain platforms can exploit these psychological biases, leading to sustained gambling and substantial financial losses for users, with a sample of 3,306 users losing a total of $5.1 million.
- Due to the decentralized and permissionless nature of these platforms, traditional regulatory measures like deposit limits, age verification, and self-exclusion are nearly impossible to enforce.
- The findings highlight the urgent need for new regulatory approaches and user protection mechanisms tailored to the unique challenges of decentralized gambling environments, such as on-chain monitoring for risky behavior.
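The study fits a generalized linear mixed model with per-user effects on on-chain data; as a simplified sketch that ignores the random effects and uses simulated rounds, a plain logistic regression on two bias proxies (an anchoring indicator and the current losing-streak length) would look like this.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(7)
n = 22_800  # roughly the number of rounds in the study; the rows here are simulated

same_stake = rng.integers(0, 2, size=n)     # anchoring proxy: repeated the previous bet amount
loss_streak = rng.poisson(1.5, size=n)      # gambler's-fallacy proxy: consecutive losses so far

# Simulated outcome: both bias proxies raise the odds of playing another round.
logits = -0.5 + 0.8 * same_stake + 0.3 * loss_streak
continues = rng.binomial(1, 1 / (1 + np.exp(-logits)))

X = np.column_stack([same_stake, loss_streak])
model = LogisticRegression().fit(X, continues)
print("odds ratios (anchoring, losing streak):", np.round(np.exp(model.coef_[0]), 2))
```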
Keywords: gambling platform, smart contract, gambling behavior, cognitive bias, user behavior
The Role of Generative AI in P2P Rental Platforms: Investigating the Effects of Timing and Interactivity on User Reliance in Content (Co-)Creation Processes

Niko Spatscheck, Myriam Schaschek, Christoph Tomitza, and Axel Winkelmann
This study investigates how Generative AI can best assist users on peer-to-peer (P2P) rental platforms like Airbnb in writing property listings. Through an experiment with 244 participants, the researchers tested how the timing of when AI suggestions are offered and the level of interactivity (automatic vs. user-prompted) influence how much a user relies on the AI.

Problem: While Generative AI offers a powerful way to help property hosts create compelling listings, platforms don't know the most effective way to implement these tools. It's unclear if AI assistance is more impactful at the beginning or end of the writing process, or if users prefer to actively ask for help versus receiving it automatically. This study addresses this knowledge gap to provide guidance for designing better AI co-writing assistants.

Outcome:
- Offering AI suggestions earlier in the writing process significantly increases how much users rely on them.
- Allowing users to actively prompt the AI for assistance leads to a slightly higher reliance compared to receiving suggestions automatically.
- Higher cognitive load (mental effort) reduces a user's reliance on AI-generated suggestions.
- For businesses like Airbnb, these findings suggest that AI writing tools should be designed to engage users at the very beginning of the content creation process to maximize their adoption and impact.
Keywords: Human-genAI collaboration, Co-writing, P2P rental platforms, Reliance, Generative AI, Cognitive Load
A Framework for Context-Specific Theorizing on Trust and Reliance in Collaborative Human-AI Decision-Making Environments

Niko Spatscheck
This study analyzes 59 empirical research papers to understand why findings on human trust in AI have been inconsistent. It synthesizes this research into a single framework that identifies the key factors influencing how people decide to trust and rely on AI systems for decision-making. The goal is to provide a more unified and context-aware understanding of the complex relationship between humans and AI.

Problem: Effective collaboration between humans and AI is often hindered because people either trust AI too much (overreliance) or too little (underreliance), leading to poor outcomes. Existing research offers conflicting explanations for this behavior, creating a knowledge gap for developers and organizations. This study addresses the problem that prior research has largely ignored the specific context—such as the user's expertise, the AI's design, and the nature of the task—which is crucial for explaining these inconsistencies.

Outcome:
- The study created a comprehensive framework that categorizes the factors influencing trust and reliance on AI into three main groups: human-related (e.g., user expertise, cognitive biases), AI-related (e.g., performance, explainability), and decision-related (e.g., risk, complexity).
- It concludes that trust is not static but is dynamically shaped by the interaction of these various contextual factors.
- This framework provides a practical tool for researchers and businesses to better predict how users will interact with AI and to design systems that foster appropriate levels of trust, leading to better collaborative performance.
Keywords: AI Systems, Trust, Reliance, Collaborative Decision-Making, Human-AI Collaboration, Contextual Factors, Conceptual Framework