Building Habits in the Digital Age: Incorporating Psychological Needs and Knowledge from Practitioners to Inform the Design of Digital Therapeutics
International Conference on Wirtschaftsinformatik (2023)

Jeannette Stark, Thure Weimann, Felix Reinsch, Emily Hickmann, Maren Kählig, Carola Gißke, and Peggy Richter
This study reviews the psychological requirements for forming habits and analyzes how these requirements are implemented in existing mobile habit-tracking apps. Through a content analysis of 57 applications, the research identifies key design gaps and proposes a set of principles to inform the creation of more effective Digital Therapeutics (DTx) for long-term behavioral change.

Problem Noncommunicable diseases (NCDs), a leading cause of death, often require sustained lifestyle and behavioral changes. While many digital apps aim to support habit formation, they often fail to facilitate the entire process, particularly the later stages where a habit becomes automatic and reliance on technology should decrease, creating a gap in effective long-term support.

Outcome - Conventional habit apps primarily support the first two stages of habit formation: deciding on a habit and translating it into an initial behavior.
- Most apps neglect the crucial later stages of habit strengthening, where technology use should be phased out to allow the habit to become truly automatic (a tapering pattern of this kind is sketched below).
- A conflict of interest was identified, as the commercial need for continuous user engagement in many apps contradicts the goal of making a user's new habit independent of the technology.
- The research proposes specific design principles for Digital Therapeutics (DTx) to better support all four stages of habit formation, offering a pathway for developing more effective tools for NCD prevention and treatment.
Behavioral Change, Digital Therapeutics, Habits, Habit Apps, Non-communicable diseases
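
To make the tapering idea concrete, here is a minimal Python sketch in which prompt frequency falls as a habit matures and stops once it is automatic. The four stages echo this entry, but the thresholds, prompt counts, and HabitTracker class are invented for illustration and are not the paper's design principles.

```python
from dataclasses import dataclass

# Prompts per week per stage: heavy scaffolding early, none once automatic.
PROMPTS_PER_WEEK = {"deciding": 7, "initiating": 5, "strengthening": 2, "automatic": 0}

@dataclass
class HabitTracker:
    streak_days: int = 0

    def stage(self) -> str:
        # Illustrative thresholds; real stage detection would be richer.
        if self.streak_days < 7:
            return "deciding"        # stage 1: deciding on a habit
        if self.streak_days < 21:
            return "initiating"      # stage 2: translating it into behavior
        if self.streak_days < 66:    # ~66 days is an often-cited average to automaticity
            return "strengthening"   # stage 3: support should already taper
        return "automatic"           # stage 4: the app steps out of the loop

    def weekly_prompts(self) -> int:
        return PROMPTS_PER_WEEK[self.stage()]

tracker = HabitTracker(streak_days=30)
print(tracker.stage(), tracker.weekly_prompts())  # -> strengthening 2
```
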
Layering the Architecture of Digital Product Innovations: Firmware and Adapter Layers
Journal of the Association for Information Systems (2025)

Julian Lehmann, Philipp Hukal, Jan Recker, Sanja Tumbas
This study investigates how organizations integrate digital components into physical products to create layered architectures. Through a multi-year case study of a 3D printer company, it details the process of embedding firmware and creating adapter layers to connect physical hardware with higher-level software functionality.

Problem As companies increasingly transform physical products into 'smart' digital innovations, they face the complex challenge of effectively integrating digital and physical components. There is a lack of clear understanding of how to structure this integration, which can limit a product's flexibility and potential for future upgrades.

Outcome - Integrating digital and physical components is a bottom-up process that starts with making hardware controllable via software (a step called parametrizing).
- The study identifies two key techniques for success: 1) parametrizing physical components through firmware, and 2) arranging digital functionality through higher-level adapter layers.
- Creating 'adapter layers' is critical to bridge the gap between static physical components and flexible digital software, enabling them to communicate and work together (see the sketch below).
- This layered approach allows companies to innovate and add new features through software updates, enhancing product capabilities without needing to redesign the physical hardware.
Digital Product Innovation, Firmware, Product Architecture, Layering, Embedding, 3D Printing, Case Study
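
A minimal sketch of the two techniques above, assuming a hypothetical 3D-printer heater: a firmware class that parametrizes a physical component as writable setpoints, and an adapter class that arranges higher-level functionality on top of those parameters. Class names and temperature values are invented.

```python
class HeaterFirmware:
    """Firmware layer: exposes a physical component (a print-head heater)
    as a writable parameter instead of fixed behavior."""

    def __init__(self) -> None:
        self._setpoint_c = 0.0

    def set_parameter(self, name: str, value: float) -> None:
        if name == "setpoint_c":
            self._setpoint_c = value  # in a real device this drives the heater circuit

class PrintJobAdapter:
    """Adapter layer: translates service-level commands into parameter writes,
    so new features can ship as software without redesigning the hardware."""

    MATERIAL_TEMPS = {"PLA": 205.0, "ABS": 235.0}  # illustrative values

    def __init__(self, firmware: HeaterFirmware) -> None:
        self.firmware = firmware

    def prepare(self, material: str) -> None:
        self.firmware.set_parameter("setpoint_c", self.MATERIAL_TEMPS[material])

adapter = PrintJobAdapter(HeaterFirmware())
adapter.prepare("PLA")  # a later software update could add materials, no hardware change
```
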
Uncovering the Structural Assurance Mechanisms in Blockchain Technology-Enabled Online Healthcare Mutual Aid Platforms
Journal of the Association for Information Systems (2025)

Zhen Shao, Lin Zhang, Susan A. Brown, Jose Benitez
This study investigates how to build user trust in online healthcare mutual aid platforms that use blockchain technology. Drawing on institutional trust theory, the research examines how policy and technology assurances influence users' intentions and actual usage by conducting a two-part field survey with users of a real-world platform.

Problem Online healthcare mutual aid platforms, which act as a form of peer-to-peer insurance, struggle with user adoption due to widespread distrust. Frequent incidents of fraud, false claims, and misappropriation of funds have created skepticism, making it a significant challenge to facilitate user trust and ensure the sustainable growth of these platforms.

Outcome - Both strong institutional policies (policy assurance) and reliable technical features enabled by blockchain (technology assurance) significantly increase users' trust in the platform.
- Higher user trust is directly linked to a greater intention to use the online healthcare mutual aid platform.
- The intention to use the platform positively influences actual usage behaviors, such as the frequency and intensity of use.
- Trust acts as a full mediator, meaning that the platform's assurances build trust, which in turn drives user intention and behavior (a pattern illustrated in the sketch below).
Structural Assurance, Blockchain Technology, Healthcare, Trust, Behavioral Intention, Actual Usage Behaviors
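
The mediation chain above (assurance builds trust, trust drives intention) can be illustrated with simulated data: regressing intention on assurance alone shows a total effect, and adding trust as a regressor shrinks the direct effect toward zero, the signature of full mediation. This is a toy demonstration with invented coefficients, not the paper's survey model.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
assurance = rng.normal(size=n)
trust = 0.6 * assurance + rng.normal(scale=0.5, size=n)  # a-path: assurance -> trust
intention = 0.7 * trust + rng.normal(scale=0.5, size=n)  # b-path only: no direct c'-path

def ols(y, *xs):
    """Least-squares slopes of y on the given predictors (intercept dropped)."""
    X = np.column_stack([np.ones(len(y)), *xs])
    return np.linalg.lstsq(X, y, rcond=None)[0][1:]

print("total effect of assurance:", ols(intention, assurance))
print("effects controlling for trust:", ols(intention, assurance, trust))
# The assurance coefficient shrinks toward zero while trust stays strong:
# the full-mediation pattern reported in the study.
```
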
Responsible AI Design: The Authenticity, Control, Transparency Theory
Journal of the Association for Information Systems (2025)

Andrea Rivera, Kaveh Abhari, Bo Xiao
This study explores how to design Artificial Intelligence (AI) responsibly from the perspective of AI designers. Using a grounded theory approach based on interviews with industry professionals, the paper develops the Authenticity, Control, Transparency (ACT) theory as a new framework for creating ethical AI.

Problem Current guidelines for responsible AI are fragmented and lack a cohesive theory to guide practice, leading to inconsistent outcomes. Existing research often focuses narrowly on specific attributes like algorithms or harm minimization, overlooking the broader design decisions that shape an AI's behavior from its inception.

Outcome - The study introduces the Authenticity, Control, and Transparency (ACT) theory as a practical framework for responsible AI design.
- It identifies three core mechanisms—authenticity, control, and transparency—that translate ethical design decisions into responsible AI behavior.
- These mechanisms are applied across three key design domains: the AI's architecture, its algorithms, and its functional affordances (capabilities offered to users).
- The theory shifts the focus from merely minimizing harm to also maximizing the benefits of AI, providing a more balanced approach to ethical design.
Responsible AI, AI Ethics, AI Design, Authenticity, Transparency, Control, Algorithmic Accountability
Continuous Contracting in Software Outsourcing: Towards A Configurational Theory
Journal of the Association for Information Systems (2025)

Thomas Huber, Kalle Lyytinen
This study investigates how governance configurations are formed, evolve, and influence outcomes in software outsourcing projects that use continuous contracting. Through a longitudinal, multimethod analysis of 33 governance episodes across three projects, the research identifies how different combinations of contract design and project control achieve alignment and flexibility. The methodology combines thematic analysis with crisp-set qualitative comparative analysis (csQCA) to develop a new theory. A toy csQCA consistency computation is sketched below.

Problem Contemporary software outsourcing increasingly relies on continuous contracting, where an initial umbrella agreement is followed by periodic contracts. However, there is a significant gap in understanding how managers should combine contract design and project controls to balance the competing needs for project alignment and operational flexibility, and how these choices evolve to impact overall project performance.

Outcome - Identified eight distinct governance configurations, each consistently linked to specific outcomes of alignment and flexibility.
- Found that project outcomes depend on how governance elements interact within a configuration, either by substituting for each other or compensating for each other's limitations.
- Showed that as trust and knowledge accumulate, managers' governance strategies evolve from simple configurations (achieving either alignment or flexibility) to more sophisticated ones that achieve both simultaneously.
- Concluded that by deliberately evolving governance configurations, managers can better steer projects and enhance overall performance.
Software Outsourcing Governance, Contract Design, Project Control, Continuous Contracting, Alignment, Flexibility, Governance Configurations
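
For readers unfamiliar with csQCA, here is a toy version of its core step: build a truth table over binary conditions and compute each configuration's consistency, the share of its cases that show the outcome. The two conditions and eight cases below are invented; the study itself analyzed 33 governance episodes with a richer condition set.

```python
from collections import defaultdict

# Each case: (detailed_contract, tight_control) conditions -> alignment outcome
cases = [
    ((1, 1), 1), ((1, 1), 1), ((1, 0), 1), ((1, 0), 0),
    ((0, 1), 0), ((0, 1), 0), ((0, 0), 0), ((1, 1), 1),
]

table = defaultdict(lambda: [0, 0])  # config -> [n_cases, n_with_outcome]
for config, outcome in cases:
    table[config][0] += 1
    table[config][1] += outcome

for config, (n, k) in sorted(table.items()):
    print(f"config={config} n={n} consistency={k / n:.2f}")
# Configurations whose consistency clears a cutoff (often 0.8) are treated as
# sufficient for the outcome and passed on to Boolean minimization.
```
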
What Is Augmented? A Metanarrative Review of AI-Based Augmentation
Journal of the Association for Information Systems (2025)

Inès Baer, Lauren Waardenburg, Marleen Huysman
This paper conducts a comprehensive literature review across five research disciplines to clarify the concept of AI-based augmentation. Using a metanarrative review method, the study identifies and analyzes four distinct targets of what AI augments: the body, cognition, work, and performance. Based on this framework, the authors propose an agenda for future research in the field of Information Systems.

Problem In both academic and public discussions, Artificial Intelligence is often described as a tool for 'augmentation' that helps humans rather than replacing them. However, this popular term lacks a clear, agreed-upon definition, and there is little discussion about what specific aspects of human activity are the targets of this augmentation. This research addresses the fundamental question: 'What is augmented by AI?'

Outcome - The study identified four distinct metanarratives, or targets, of AI-based augmentation: the body (enhancing physical and sensory functions), cognition (improving decision-making and knowledge), work (creating new employment opportunities and improving work practices), and performance (increasing productivity and innovation).
- Each augmentation target is underpinned by a unique human-AI configuration, ranging from human-AI symbiosis for body augmentation to mutual learning loops for cognitive augmentation.
- The paper reveals tensions and counternarratives for each target, showing that augmentation is not purely positive; for example, it can lead to over-dependence on AI, deskilling, or a loss of human agency.
- The four augmentation targets are interconnected, creating potential conflicts (e.g., prioritizing performance over meaningful work) or dependencies (e.g., cognitive augmentation relies on augmenting bodily senses).
Augmentation, Artificial Intelligence, Human-AI Interaction, Metanarrative Review, Cognitive Augmentation, Work Augmentation, Organizational Performance
What Goals Drive Employees' Information Systems Security Behaviors? A Mixed Methods Study of Employees' Goals in the Workplace
Journal of the Association for Information Systems (2025)

Sebastian Schuetz, Heiko Gewald, Allen Johnston, Jason Bennett Thatcher
This study investigates the work-related goals that motivate employees' information systems security behaviors. It employs a mixed-methods approach, first using qualitative interviews to identify key employee goals and then using a large-scale quantitative survey to evaluate their importance in predicting security actions.

Problem Prior research on information security behavior often relies on general theories from criminology or public health, which do not fully capture the specific goals employees have in a workplace context. This creates a gap in understanding the primary motivations for why employees choose to follow or ignore security protocols during their daily work.

Outcome - Employees' security behaviors are primarily driven by the goals of achieving good work performance and avoiding blame for security incidents.
- Career advancement acts as a higher-order goal, giving purpose to security behaviors by motivating the pursuit of subgoals like work performance and blame avoidance.
- The belief that security behaviors help meet a supervisor's performance expectations (work performance alignment) is the single most important predictor of those behaviors.
- Organizational citizenship (the desire to be a 'good employee') was not a significant predictor of security behavior when other goals were considered.
- A strong security culture encourages secure behaviors by strengthening the link between these behaviors and the goals of work performance and blame avoidance.
Security Behaviors, Goal Systems Theory (GST), Work Performance, Blame Avoidance, Organizational Citizenship, Career Advancement
Making Sense of Discursive Formations and Program Shifts in Large-Scale Digital Infrastructures
Journal of the Association for Information Systems (2025)

Egil Øvrelid, Bendik Bygstad, Ole Hanseth
This study examines how public and professional discussions, known as discourses, shape major changes in large-scale digital systems like national e-health infrastructures. Using an 18-year in-depth case study of Norway's e-health development, the research analyzes how high-level strategic trends interact with on-the-ground practical challenges to drive fundamental shifts in technology programs.

Problem Implementing complex digital infrastructures like national e-health systems is notoriously difficult, and leaders often struggle to understand why some initiatives succeed while others fail. Previous research focused heavily on the role of powerful individuals or groups, paying less attention to the underlying, systemic influence of how different conversations about technology and strategy converge over time. This gap makes it difficult for policymakers to make sensible, long-term decisions and navigate the evolution of these critical systems.

Outcome - Major shifts in large digital infrastructure programs occur when high-level strategic discussions (macrodiscourses) and practical, operational-level discussions (microdiscourses) align and converge.
- This convergence happens through three distinct processes: 'connection' (a shared recognition of a problem), 'matching' (evaluating potential solutions that fit both high-level goals and practical needs), and 'merging' (making a decision and reconciling the different perspectives).
- The result of this convergence is a new "discursive formation"—a powerful, shared understanding that aligns stakeholders, technology, and strategy, effectively launching a new program and direction.
- Policymakers and managers can use this framework to better analyze the alignment between broad technological trends and their organization's specific, internal needs, leading to more informed and realistic strategic planning.
Discursive Formations, Discourse Convergence, Large-Scale Digital Infrastructures, E-Health Programs, Program Shifts, Sociotechnical Systems, IT Strategy
Toward Triadic Delegation: How Agentic IS Artifacts Affect the Patient-Doctor Relationship in Healthcare
Journal of the Association for Information Systems (2025)

Pascal Fechner, Luis Lämmermann, Jannik Lockl, Maximilian Röglinger, Nils Urbach
This study investigates how autonomous information systems (agentic IS artifacts) are transforming the traditional two-way relationship between patients and doctors into a three-way, or triadic, relationship. Using an in-depth case study of an AI-powered health companion for managing neurogenic lower urinary tract dysfunction, the paper analyzes the new dynamics, roles, and interactions that emerge when an intelligent technology becomes an active participant in healthcare delivery.

Problem With the rise of artificial intelligence in medicine, autonomous systems are no longer just passive tools but active agents in patient care. This shift challenges the conventional patient-doctor dynamic, yet existing theories are ill-equipped to explain the complexities of this new three-part relationship. This research addresses the gap in understanding how these AI agents redefine roles, interactions, and potential conflicts in patient-centric healthcare.

Outcome - The introduction of an AI agent transforms the dyadic patient-doctor relationship into a triadic one, often with the AI acting as a central intermediary.
- The AI's capabilities create 'attribute interference,' where responsibilities and knowledge overlap between the patient, doctor, and AI, introducing new complexities.
- New 'triadic delegation choices' emerge, allowing tasks to be delegated to the doctor, the AI, or both, based on factors like task complexity and emotional context (see the routing sketch below).
- The study identifies novel conflicts arising from this triad, including human concerns over losing control (autonomy conflicts), new information imbalances, and the blurring of traditional medical roles.
Agentic IS Artifacts, Delegation, Patient-Doctor Relationship, Personalized Healthcare, Triadic Delegation, Healthcare AI
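
A hypothetical sketch of how such delegation choices might be operationalized: route each care task to the doctor, the AI, or both, based on its complexity and emotional load. The thresholds, task names, and scores are invented for illustration, not taken from the studied health companion.

```python
from dataclasses import dataclass

@dataclass
class CareTask:
    name: str
    complexity: float      # 0..1, clinical complexity
    emotional_load: float  # 0..1, need for empathy or bad-news handling

def delegate(task: CareTask) -> str:
    if task.emotional_load >= 0.6:
        return "doctor"        # human empathy required
    if task.complexity >= 0.7:
        return "doctor + AI"   # AI pre-processes, doctor decides
    return "AI"                # routine monitoring and reminders

for t in [CareTask("daily bladder diary", 0.2, 0.1),
          CareTask("adjust catheterization schedule", 0.8, 0.3),
          CareTask("discuss prognosis", 0.5, 0.9)]:
    print(t.name, "->", delegate(t))
```
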
Digital Infrastructure Development Through Digital Infrastructuring Work: An Institutional Work Perspective
Journal of the Association for Information Systems (2025)

Adrian Yeow, Wee-Kiat Lim, Samer Faraj
This paper investigates the complexities of developing large-scale digital infrastructure through a case study of an electronic medical record (EMR) system implementation in a U.S. hospital. It introduces and analyzes the concept of 'digital infrastructuring work'—the combination of technical, social, and symbolic actions that organizational actors perform. The study provides a framework for understanding the tensions and actions that shape the outcomes of such projects.

Problem Implementing new digital infrastructures in large organizations is challenging because it often disrupts established routines and power structures, leading to resistance and project stalls. Existing research frequently overlooks how the combination of technical tasks, social negotiations, and symbolic arguments by different groups influences the success or failure of these projects. This study addresses this gap by providing a more holistic view of the work involved in digital infrastructure development from an institutional perspective.

Outcome - The study introduces 'digital infrastructuring work' to explain how actors shape digital infrastructure development, categorizing it into three forms: digital object work (technical tasks), DI relational work (social interactions), and DI symbolic work (discursive actions).
- It finds that project stakeholders strategically combine these forms of work to either support change or maintain existing systems, highlighting the contested nature of infrastructure projects.
- The success or failure of a digital infrastructure project is shown to depend on how effectively different groups navigate the tensions between change and stability by skillfully blending technical, relational, and symbolic efforts.
- The paper demonstrates that technical work itself carries institutional significance and is not merely a neutral backdrop for social interactions, but a key site of contestation.
Digital Infrastructure Development, Institutional Work, IT Infrastructure Management, Healthcare Information Systems, Digital Objects, Case Study
Understanding the Ethics of Generative AI: Established and New Ethical Principles
Communications of the Association for Information Systems (2025)

Joakim Laine, Matti Minkkinen, Matti Mäntymäki
This study conducts a comprehensive review of academic literature to synthesize the ethical principles of generative artificial intelligence (GenAI) and large language models (LLMs). It explores how established AI ethics are presented in the context of GenAI and identifies what new ethical principles have surfaced due to the unique capabilities of this technology.

Problem The rapid development and widespread adoption of powerful GenAI tools like ChatGPT have introduced new ethical challenges that are not fully covered by existing AI ethics frameworks. This creates a critical gap, as the specific ethical principles required for the responsible development and deployment of GenAI systems remain relatively unclear.

Outcome - Established AI ethics principles (e.g., fairness, privacy, responsibility) are still relevant, but their importance and interpretation are shifting in the context of GenAI.
- Six new ethical principles specific to GenAI are identified: respect for intellectual property, truthfulness, robustness, recognition of malicious uses, sociocultural responsibility, and human-centric design.
- Principles such as non-maleficence, privacy, and environmental sustainability have gained heightened importance due to the general-purpose, large-scale nature of GenAI systems.
- The paper proposes 'meta-principles' for managing ethical complexities, including ranking principles, mapping contradictions between them, and implementing continuous monitoring (sketched below).
Generative AI, AI Ethics, Large Language Models, AI Governance, Ethical Principles, AI Auditing
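
One way to picture these meta-principles is as explicit data structures: a ranked list of principles and a map of known contradictions, consulted whenever two principles pull in opposite directions. The ranking, conflict entries, and resolve helper below are illustrative assumptions; the principle names are drawn from this entry or from common AI ethics vocabulary.

```python
principles_ranked = [  # higher first: an assumed organizational ranking
    "non-maleficence", "privacy", "truthfulness", "fairness",
    "respect for intellectual property", "environmental sustainability",
]

contradictions = {  # pairs that can pull in opposite directions
    ("transparency", "privacy"): "explaining outputs may expose training data",
    ("truthfulness", "non-maleficence"): "accurate answers can enable misuse",
}

def resolve(p1: str, p2: str) -> str:
    """Pick the higher-ranked principle when two conflict; continuous
    monitoring would revisit such calls as the system and context evolve."""
    rank = {p: i for i, p in enumerate(principles_ranked)}
    return min((p1, p2), key=lambda p: rank.get(p, len(principles_ranked)))

print(resolve("privacy", "truthfulness"))  # -> privacy, under this ranking
```
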
Conceptualizing IT Artefacts for Policymaking – How IT Artefacts Evolve as Policy Objects
Communications of the Association for Information Systems (2025)

Karin Väyrynen, Sari Laari-Salmela, Netta Iivari, Arto Lanamäki, Marianne Kinnula
This study explores how an information technology (IT) artefact evolves into a 'policy object' during the policymaking process, using a 4.5-year longitudinal case study of the Finnish Taximeter Law. The research proposes a conceptual framework that identifies three forms of the artefact as it moves through the policy cycle: a mental construct, a policy text, and a material IT artefact. This framework helps to understand the dynamics and challenges of regulating technology.

Problem While policymaking related to information technology is increasingly significant, the challenges stemming from the complex, multifaceted nature of IT are poorly understood. There is a specific gap in understanding how real-world IT artefacts are translated into abstract policy texts and how those texts are subsequently reinterpreted back into actionable technologies. This 'translation' process often leads to ambiguity and unintended consequences during implementation.

Outcome - Proposes a novel conceptual framework for understanding the evolution of an IT artefact as a policy object during a public policy cycle.
- Identifies three distinct forms the IT artefact takes: 1) a mental construct in the minds of policymakers and stakeholders, 2) a policy text such as a law, and 3) a material IT artefact as a real-world technology that aligns with the policy.
- Highlights the significant challenges in translating complex real-world technologies into abstract legal text and back again, which can create ambiguity and implementation difficulties.
- Distinguishes between IT artefacts at the policy level and IT artefacts as real-world technologies, showing how they evolve on separate but interconnected tracks.
IT Artefact, IT Regulation, Law, Policy Object, Policy Cycle, Public Policymaking, European AI Act
The Digital Language of Emotion: Cautions and Solutions for Strategic Use of Emoji in Responding Information System Incidents
Communications of the Association for Information Systems (2025)

Soojin Roh, Shubin Yu
This paper investigates whether, when, and how organizations can strategically use emojis in online communications when responding to information system (IS) incidents. Through three experimental studies conducted with Chinese and U.S. participants, the research examines how cultural context, the source of the message (CEO vs. company account), and incident type influence public perception.

Problem As companies increasingly use emojis in professional communications, there is a risk of missteps, especially in crisis situations. A lack of understanding of how emojis shape public perception across different cultures can lead to reputational harm, and existing research lacks empirical evidence on their strategic and cross-cultural application in responding to IS incidents.

Outcome - For Chinese audiences, using emojis in IS incident responses is generally positive, as it reduces psychological distance, alleviates anger, and increases perceptions of warmth and competence.
- The positive effect of emojis in China is stronger when used by an official company account rather than a CEO, and when the company is responsible for the incident.
- In contrast, U.S. audiences tend to evaluate the use of emojis negatively in incident responses.
- The negative perception among U.S. audiences is particularly strong when a CEO uses an emoji to respond to an internally-caused incident, leading to increased anger and perceptions of incompetence.
Emoji, Information System Incident, Social Media, Psychological Distance, Warmth, Competence
Frugal Fintech Ecosystem Development: A Resource Orchestration Perspective
Communications of the Association for Information Systems (2024)

Prakash Dhavamani, Barney Tan, Daniel Gozman, Leben Johnson
This study investigates how a financial technology (Fintech) ecosystem was successfully established in a resource-constrained environment, using the Vizag Fintech Valley in India as a case study. The research examines the specific processes of gathering resources, building capabilities, and creating market value under significant budget limitations. It proposes a practical framework to guide the development of similar 'frugal' innovation hubs in other developing regions.

Problem There is limited research on how to launch and develop a Fintech ecosystem, especially in resource-scarce developing countries where the potential benefits like financial inclusion are greatest. Most existing studies focus on developed nations, and their findings are not easily transferable to environments with tight budgets, a lack of specialized talent, and less mature infrastructure. This knowledge gap makes it difficult for policymakers and entrepreneurs to create successful Fintech hubs in these regions.

Outcome - The research introduces a practical framework for building Fintech ecosystems in resource-scarce settings, called the Frugal Fintech Ecosystem Development (FFED) framework.
- The framework identifies three core stages: Structuring (gathering and prioritizing available resources), Bundling (combining resources to build capabilities), and Leveraging (using those capabilities to seize market opportunities).
- It highlights five key sub-processes for success in a frugal context: bricolaging (creatively using resources at hand), prioritizing, emulating (learning from established ecosystems), extrapolating, and sandboxing (safe, small-scale experimentation).
- The study shows that by orchestrating resources effectively, even frugal ecosystems can achieve outcomes comparable to those in well-funded regions, a concept termed 'equifinality'.
- The findings offer an evidence-based guide for policymakers to design regulations and support models that foster sustainable Fintech growth in developing economies.
Fintech Ecosystem, India, Frugal Innovation, Resource Orchestration, Case Study
Watch Out, You are Live! Toward Understanding the Impact of AI on Privacy of Employees
Communications of the Association for Information Systems (2024)

Ashneet Kaur, Sudhanshu Maheshwari, Indranil Bose, Simarjeet Singh
This study conducts a systematic literature review to comprehensively explore the implications of Artificial Intelligence (AI) on employee privacy. It utilizes the privacy calculus framework to analyze the trade-offs organizations and employees face when integrating AI technologies in the workplace. The research evaluates how different types of AI technologies compromise or safeguard privacy and discusses their varying impacts.

Problem The rapid and pervasive adoption of AI in the workplace has enhanced efficiency but also raised significant concerns regarding employee privacy. There is a research gap in holistically understanding the broad implications of advancing AI technologies on employee privacy, as previous studies often focus on narrow applications without a comprehensive theoretical framework.

Outcome - The integration of AI in the workplace presents a trade-off, offering benefits like objective performance evaluation while posing significant risks such as over-surveillance and erosion of trust.
- The study categorizes AI into four advancing types (descriptive, predictive, prescriptive, and autonomous), each progressively increasing the complexity of privacy challenges and altering the employee privacy calculus (see the worked sketch below).
- As AI algorithms become more advanced and opaque, it becomes more difficult for employees to understand how their data is used, leading to feelings of powerlessness and potential resistance.
- The paper identifies a significant lack of empirical research specifically on AI's impact on employee privacy, as opposed to the more widely studied area of consumer privacy.
- To mitigate privacy risks, the study recommends practical strategies for organizations, including transparent communication about data practices, involving employees in AI system design, and implementing strong ethical AI frameworks.
Artificial Intelligence, Employee Privacy, Privacy Calculus, Systematic Review, Workplace Surveillance, AI Ethics
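
A worked toy version of the privacy-calculus framing above: net perceived value as benefits minus risks, with the risk term scaled up as the AI type advances from descriptive to autonomous. The formula and all numbers are invented purely to make the escalation concrete.

```python
AI_TYPES = ["descriptive", "predictive", "prescriptive", "autonomous"]

def privacy_calculus(benefit: float, base_risk: float, ai_type: str) -> float:
    """Net perceived value; negative values suggest employee resistance is likely."""
    opacity_penalty = 1.0 + 0.5 * AI_TYPES.index(ai_type)  # risk scales with AI type
    return benefit - base_risk * opacity_penalty

for ai in AI_TYPES:
    print(f"{ai:12s} net = {privacy_calculus(benefit=3.0, base_risk=1.2, ai_type=ai):+.2f}")
# descriptive net = +1.80 ... autonomous net = +0.00 (risk catches up with benefit)
```
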
IBM Watson Health Growth Strategy: Is Artificial Intelligence (AI) The Answer
Communications of the Association for Information Systems (2025)

Abhinav Shekhar, Rakesh Gupta, Sujeet Kumar Sharma
This study analyzes IBM's strategic dilemma with its Watson Health initiative, which aimed to monetize artificial intelligence for cancer detection and treatment recommendations. It explores whether IBM should continue its specialized focus on healthcare (a vertical strategy) or reposition Watson as a versatile, cross-industry AI platform (a horizontal strategy). The paper provides insights into the opportunities and challenges associated with unlocking the transformational power of AI in a business context.

Problem Despite a multi-billion dollar investment and initial promise, IBM's Watson Health struggled with profitability, model accuracy, and scalability. The AI's recommendations were not consistently reliable or generalizable across different patient populations and healthcare systems, leading to poor adoption. This created a critical strategic crossroads for IBM: whether to continue investing heavily in the specialized healthcare vertical or to pivot towards a more scalable, general-purpose AI platform to drive future growth.

Outcome - Model Accuracy & Bias: Watson's performance was inconsistent, and its recommendations, trained primarily on US data, were not always applicable to international patient populations, revealing significant algorithmic bias.
- Lack of Explainability: The 'black box' nature of the AI made it difficult for clinicians to trust its recommendations, hindering adoption as they could not understand its reasoning process.
- Integration and Scaling Challenges: Integrating Watson into existing hospital workflows and electronic health records was costly and complex, creating significant barriers to widespread implementation.
- Strategic Dilemma: The challenges forced IBM to choose between continuing its high-investment vertical strategy in healthcare, pivoting to a more scalable horizontal cross-industry platform, or attempting a convergence of both approaches.
Artificial Intelligence (AI), AI Strategy, Watson, Healthcare AI, Vertical AI, Horizontal AI, AI Ethics
Understanding the Implementation of Responsible Artificial Intelligence in Organizations: A Neo-Institutional Theory Perspective
Communications of the Association for Information Systems (2025)

David Horneber
This study conducts a literature review to understand why organizations struggle to effectively implement Responsible Artificial Intelligence (AI). Using a neo-institutional theory framework, the paper analyzes institutional pressures, common challenges, and the roles that AI practitioners play in either promoting or hindering the adoption of responsible AI practices.

Problem Despite growing awareness of AI's ethical and social risks and the availability of responsible AI frameworks, many organizations fail to translate these principles into practice. This gap between stated policy and actual implementation means that the goals of making AI safe and ethical are often not met, creating significant risks for businesses and society while undermining trust.

Outcome - A fundamental tension exists between the pressures to adopt Responsible AI (e.g., legal compliance, reputation) and inhibitors (e.g., market demand for functional AI, lack of accountability), leading to ineffective, symbolic implementation.
- Ineffectiveness often takes two forms: 'policy-practice decoupling' (policies are adopted for show but not implemented) and 'means-end decoupling' (practices are implemented but fail to achieve their intended ethical goals).
- AI practitioners play crucial roles as either 'institutional custodians' who resist change to preserve existing technical practices, or as 'institutional entrepreneurs' who champion the implementation of Responsible AI.
- The study concludes that a bottom-up approach by motivated practitioners is insufficient; effective implementation requires strong organizational support, clear structures, and proactive processes to bridge the gap between policy and successful outcomes.
Artificial Intelligence, Responsible AI, AI Ethics, Organizations, Neo-Institutional Theory
Affordance-Based Pathway Model of Social Inclusion: A Case Study of Virtual Worlds and People With Lifelong Disability
Journal of the Association for Information Systems (2026)

Karen Stendal, Maung K. Sein, Devinder Thapa
This study explores how individuals with lifelong disabilities (PWLD) use virtual worlds, specifically Second Life, to achieve social inclusion. Using a qualitative approach with in-depth interviews and participant observation, the researchers analyzed how PWLD experience the platform's features. The goal was to develop a model explaining the process through which technology facilitates greater community participation and interpersonal connection for this marginalized group.

Problem People with lifelong disabilities often face significant social isolation and exclusion due to physical, mental, or sensory impairments that hinder their full participation in society. This lack of social connection can negatively impact their psychological and emotional well-being. This research addresses the gap in understanding the specific mechanisms by which technology, like virtual worlds, can help this population move from isolation to inclusion.

Outcome - Virtual worlds offer five key 'affordances' (action possibilities) that empower people with lifelong disabilities (PWLD).
- Three 'functional' affordances were identified: Communicability (interacting without barriers like hearing loss), Mobility (moving freely without physical limitations), and Personalizability (controlling one's digital appearance and whether to disclose a disability).
- These functional capabilities enable two 'social' affordances: Engageability (the ability to join in social activities) and Self-Actualizability (the ability to realize one's potential and help others).
- The study proposes an 'Affordance-Based Pathway Model' which shows how using these features helps PWLD build interpersonal relationships and participate in communities, leading to social inclusion.
Social Inclusion, Virtual Worlds (VW), People With Lifelong Disability (PWLD), Affordances, Second Life, Assistive Technology, Qualitative Study