Insights

EMERGENCE OF IT IMPLEMENTATION CONSEQUENCES IN ORGANIZATIONS: AN ASSEMBLAGE APPROACH

Abdul Sesay, Elena Karahanna, and Marie-Claude Boudreau
This study investigates how the effects of new technology, specifically body-worn cameras (BWCs), unfold within organizations over time. Using a multi-site case study of three U.S. police departments, the research develops a process model to explain how the consequences of IT implementation emerge. The study identifies three key phases in this process: individuation (selecting the technology and related policies), composition (combining the technology with users), and actualization (using the technology in real-world interactions).

Problem
When organizations implement new technology, the results are often unpredictable, with outcomes varying widely between different settings. Existing research has not fully explained why a technology can be successful in one organization but fail in another. This study addresses the gap in understanding how the consequences of a new technology, like police body-worn cameras, actually develop and evolve into established organizational practices.

Outcome
- The process through which technology creates new behaviors and practices is complex and non-linear, occurring in three distinct phases (individuation, composition, and actualization).
- Successful implementation is not guaranteed; it depends on the careful alignment of the technology itself (material components) with policies, training, and user adoption (expressive components) at each stage.
- The study found that of the three police departments, only one successfully implemented body cameras because it carefully selected high-quality equipment, developed specific policies for its use, and ensured officers were trained and held accountable.
- The other two departments experienced failure or delays due to poor quality equipment, generic policies, and inconsistent use, which prevented new, positive practices from taking hold.
- The model shows that outcomes emerge over time and may require continuous adjustments, demonstrating that success is an ongoing process, not a one-time event.
Keywords: IT implementation, assemblage theory, body-worn camera, organizational change, police technology, process model

SUPPORTING COMMUNITY FIRST RESPONDERS IN AGING IN PLACE: AN ACTION DESIGN FOR A COMMUNITY-BASED SMART ACTIVITY MONITORING SYSTEM

Carmen Leong, Carol Hsu, Nadee Goonawardene, Hwee-Pink Tan
This study details the development of a smart activity monitoring system designed to help elderly individuals live independently at home. Using a three-year action design research approach, it deployed a sensor-based system in a community setting to understand how to best support community first responders—such as neighbors and volunteers—who lack professional healthcare training.

Problem
As the global population ages, more elderly individuals wish to remain in their own homes, but this raises safety concerns like falls or medical emergencies going unnoticed. This study addresses the specific challenge of designing monitoring systems that provide remote, non-professional first responders with the right information (situational awareness) to accurately assess an emergency alert and respond effectively.

Outcome
- Technology adaptation alone is insufficient; the system design must also encourage the elderly person to adapt their behavior, such as carrying a beacon when leaving home, to ensure data accuracy.
- Instead of relying on simple automated alerts, the system should provide responders with contextual information, such as usual sleep times or last known activity, to support human-based assessment and reduce false alarms.
- To support teams of responders, the system must integrate communication channels, allowing all actions and updates related to an alert to be logged in a single, closed-loop thread for better coordination (see the sketch after this list).
- Long-term activity data can be used for proactive care, helping identify subtle changes in behavior (e.g., deteriorating mobility) that may signal future health risks before an acute emergency occurs.
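
To make the contextual-alert and closed-loop ideas above concrete, here is a minimal Python sketch. The field names, trigger wording, and responder flow are illustrative assumptions, not the deployed system's design:

```python
from dataclasses import dataclass, field
from datetime import datetime, time

# Hypothetical sketch: an alert that carries situational context so a
# non-professional responder can assess it, plus one closed-loop thread
# in which every responder action is logged.

@dataclass
class Alert:
    resident_id: str
    trigger: str                       # e.g. "no motion detected for 6h"
    last_activity: str                 # last sensor reading, for context
    usual_sleep_window: tuple[time, time]
    thread: list[str] = field(default_factory=list)  # closed-loop log

    def context_summary(self) -> str:
        """Contextual cues that support human judgment, not just reaction."""
        start, end = self.usual_sleep_window
        return (f"Trigger: {self.trigger}. Last activity: {self.last_activity}. "
                f"Usual sleep window: {start}-{end}.")

    def log_action(self, responder: str, action: str) -> None:
        """Every action lands in one shared thread visible to the whole team."""
        self.thread.append(f"{datetime.now().isoformat()} {responder}: {action}")

alert = Alert(
    resident_id="unit-12",
    trigger="no motion detected for 6h",
    last_activity="kitchen motion at 22:40",
    usual_sleep_window=(time(22, 0), time(6, 30)),
)
print(alert.context_summary())
alert.log_action("neighbor-ana", "called resident, no answer")
alert.log_action("volunteer-sam", "visiting in person")
alert.log_action("volunteer-sam", "resident fine, alert closed")
```
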
Keywords: Activity monitoring systems, community-based model, elderly care, situational awareness, IoT, sensor-based monitoring systems, action design research

What it takes to control AI by design: human learning

Dov Te'eni, Inbal Yahav, David Schwartz
This study proposes a robust framework, based on systems theory, for maintaining meaningful human control over complex human-AI systems. The framework emphasizes the importance of continual human learning to parallel advancements in machine learning, operating through two distinct modes: a stable mode for efficient operation and an adaptive mode for learning. The authors demonstrate this concept with a method called reciprocal human-machine learning applied to a critical text classification system.

Problem
Traditional methods for control and oversight are insufficient for the complexity of modern AI technologies, creating a gap in ensuring that critical AI systems remain aligned with human values and goals. As AI becomes more autonomous and operates in volatile environments, there is an urgent need for a new approach to design systems that allow humans to effectively stay in control and adapt to changing circumstances.

Outcome
- The study introduces a framework for human control over AI that operates at multiple levels and in two modes: stable and adaptive.
- Effective control requires continual human learning to match the pace of machine learning, ensuring humans can stay 'in the loop' and 'in control'.
- A method called 'reciprocal human-machine learning' is presented, where humans and AI learn from each other's feedback in an adaptive mode (a toy illustration follows this list).
- The authors argue that this approach results in high-performing AI systems that remain unbiased and aligned with human values.
- The framework provides a model for designing control in critical AI systems that operate in dynamic environments.
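
The reciprocal learning idea can be illustrated with a toy classification loop. Everything below, the threshold, the model choice, and the data, is an assumed stand-in for the paper's critical text classification system, not the authors' implementation:

```python
# Toy sketch of a reciprocal loop: stable mode on confident cases, adaptive
# mode when confidence drops -- the human corrects the machine, the machine
# retrains, and the human sees which cases confuse the model.

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts  = ["wire funds now", "team lunch friday", "reset your password", "minutes attached"]
labels = [1, 0, 1, 0]  # 1 = suspicious, 0 = benign (toy data)

vec = TfidfVectorizer()
clf = LogisticRegression().fit(vec.fit_transform(texts), labels)

CONFIDENCE_THRESHOLD = 0.75  # assumed boundary between the two modes

def classify(text: str, ask_human) -> int:
    proba = clf.predict_proba(vec.transform([text]))[0]
    if proba.max() >= CONFIDENCE_THRESHOLD:
        return int(proba.argmax())             # stable mode: machine acts alone
    label = ask_human(text)                    # adaptive mode: human decides,
    texts.append(text); labels.append(label)   # and sees what the model missed
    clf.fit(vec.fit_transform(texts), labels)  # machine learns from the correction
    return label

print(classify("urgent wire transfer", ask_human=lambda t: 1))
```
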
Keywords: Human-AI system, control, reciprocal learning, feedback, oversight

Balancing fear and confidence: A strategic approach to mitigating human risk in cybersecurity

Dennis F. Galletta, Gregory D. Moody, Paul Benjamin Lowry, Robert Willison, Scott Boss, Yan Chen, Xin “Robert” Luo, Daniel Pienta, Peter Polak, Sebastian Schuetze, and Jason Thatcher
This study explores how to improve cybersecurity by focusing on the human element. Based on interviews with C-level executives and prior experimental research, the paper proposes a strategy for communicating cyber threats that balances making employees aware of the dangers (fear) with building their confidence (efficacy) to handle those threats effectively.

Problem
Despite advanced security technology, costly data breaches continue to rise because human error remains the weakest link. Traditional cybersecurity training and policies have proven ineffective, indicating a need for a new strategic approach to manage human risk.

Outcome
- Human behavior is the primary vulnerability in cybersecurity, and conventional training programs are often insufficient to address this risk.
- Managers must strike a careful balance in their security communications: instilling a healthy awareness of threats ('survival fear') without causing excessive panic or anxiety, which can be counterproductive.
- Building employees' confidence ('efficacy') in their ability to identify and respond to threats is just as crucial as making them aware of the dangers.
- Effective tools for changing behavior include interactive methods such as phishing simulations that provide immediate feedback (sketched after this list), gamification, and fostering a culture where security is a shared responsibility.
- The most effective approach is to empower users by providing them with clear, simple tools and the knowledge to act, rather than simply punishing mistakes or overwhelming them with fear.
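
As a rough illustration of the immediate-feedback idea, the toy function below drafts the message an employee might see after clicking a simulated lure; the red-flag list and wording are invented for illustration, not taken from the paper:

```python
# Toy sketch: on clicking a simulated phishing email, the employee receives
# a message that pairs threat awareness with efficacy-building guidance,
# rather than a punitive warning.

RED_FLAGS = ["mismatched sender domain", "urgent payment demand", "unexpected attachment"]

def on_simulated_click(employee: str, flags_present: list[str]) -> str:
    threat_note = "This was a simulated phishing email."   # awareness, not panic
    cues = "; ".join(flags_present)                        # efficacy: cues to spot
    next_step = "Next time, hover over the sender address and use the report button."
    return f"{employee}: {threat_note} Red flags you could have caught: {cues}. {next_step}"

print(on_simulated_click("alex", RED_FLAGS[:2]))
```
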
Keywords: Cybersecurity, human risk, fear appeals, security awareness, user actions, management interventions, data breaches

Design Knowledge for Virtual Learning Companions from a Value-centered Perspective

Ricarda Schlimbach, Bijan Khosrawi-Rad, Tim C. Lange, Timo Strohmann, Susanne Robra-Bissantz
This study develops design principles for Virtual Learning Companions (VLCs), which are AI-powered chatbots designed to help students with motivation and time management. Using a design science research approach, the authors conducted interviews and workshops, and built and tested several prototypes with students. The research aims to create a framework for designing VLCs that not only provide functional support but also build a supportive, companion-like relationship with the learner.

Problem
Working students in higher education often struggle to balance their studies with their jobs, leading to challenges with motivation and time management. While conversational AI like ChatGPT is becoming common, these tools often lack the element of companionship and a holistic approach to learning support. This research addresses the gap in how to design AI learning tools that effectively integrate motivation, time management, and relationship-building from a user-value-centered perspective.

Outcome
- The study produced a comprehensive framework for designing Virtual Learning Companions (VLCs), resulting in 9 design principles, 28 meta-requirements, and 33 design features.
- The findings are structured around a “value-in-interaction” model, which proposes that a VLC's value is created across three interconnected layers: the Relationship Layer, the Matching Layer, and the Service Layer.
- Key design principles include creating a human-like and adaptive companion, enabling proactive and reactive behavior (illustrated in the sketch after this list), building a trustworthy relationship, providing supportive content, and fostering a motivational and ethical learning environment.
- Evaluation of a coded prototype revealed that different student groups have different preferences, emphasizing that VLCs must be adaptable to their specific educational context and user needs to be effective.
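
The proactive/reactive distinction can be sketched in a few lines of Python. The class, its rules, and the nudge window below are hypothetical illustrations, not the authors' prototype:

```python
# Hypothetical sketch of one design principle: a companion that combines
# proactive behavior (speaking up on its own near deadlines) with reactive
# behavior (replying when the learner reaches out).

from datetime import datetime, timedelta

class LearningCompanion:
    def __init__(self, name: str):
        self.name = name
        self.deadlines: dict[str, datetime] = {}

    def add_deadline(self, task: str, due: datetime) -> None:
        self.deadlines[task] = due

    def proactive_nudge(self, now: datetime) -> str | None:
        """Proactive: initiate contact when a deadline is within two days."""
        for task, due in self.deadlines.items():
            if timedelta(0) < due - now <= timedelta(days=2):
                return f"{self.name}: '{task}' is due soon. A 25-minute session today would help."
        return None

    def reply(self, message: str) -> str:
        """Reactive: respond to the learner (toy rule-based reply)."""
        if "tired" in message.lower():
            return f"{self.name}: Take a short break, then try one small task."
        return f"{self.name}: Noted. What would you like to tackle first?"

vlc = LearningCompanion("Mo")
vlc.add_deadline("statistics assignment", datetime.now() + timedelta(days=1))
print(vlc.proactive_nudge(datetime.now()))
print(vlc.reply("I'm tired of studying"))
```
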
Keywords: Conversational agent, education, virtual learning companion, design knowledge, value

How Audi Scales Artificial Intelligence in Manufacturing

André Sagodi, Benjamin van Giffen, Johannes Schniertshauer, Klemens Niehues, Jan vom Brocke
This paper presents an in-depth case study of how the automotive manufacturer Audi successfully scaled an artificial intelligence (AI) solution for quality control in its manufacturing plants. The study outlines Audi's four-year journey to develop and deploy an AI system that automatically detects cracks in sheet metal parts. Based on this real-world example, the paper provides actionable recommendations for business leaders seeking to implement AI at scale.

Problem
While artificial intelligence offers significant potential to create business value, many companies struggle to move AI projects beyond the pilot or proof-of-concept stage. This failure to scale AI innovations, particularly in complex industrial environments like manufacturing, represents a major barrier to realizing a return on investment. This study addresses the gap between AI's potential and the practical challenges of widespread, value-driven implementation.

Outcome
- Audi successfully developed and scaled an AI-based visual inspection system across multiple press shops, significantly improving quality control for sheet metal parts.
- The success was built on a structured four-stage journey: exploring the initial idea, developing a scalable solution, implementing it within the existing IT infrastructure, and finally scaling it across multiple sites.
- A key strategy was to design the system for scalability from the outset by creating a single, universal AI model that could be deployed in various contexts, leveraging data from all locations to continuously improve performance (see the sketch after this list).
- The study offers a roadmap for executives, recommending that AI scaling be treated as a strategic priority, that interdisciplinary collaboration be fostered, and that AI operations be streamlined through automation and robust governance.
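
The "one universal model, many sites" strategy can be sketched as a pipeline outline. The site names, directory layout, and placeholder training step below are assumptions for illustration, not Audi's actual pipeline:

```python
# Hypothetical sketch: pool labeled inspection images from every press shop,
# train a single crack-detection model, and roll the same artifact out to
# all sites, so improvements anywhere benefit everywhere.

from pathlib import Path

SITES = ["site-a", "site-b", "site-c"]  # illustrative press-shop identifiers

def collect_training_data(root: Path) -> list[Path]:
    """Pool labeled sheet-metal images across all sites into one dataset,
    so the model sees every site's part geometries and lighting conditions."""
    return [p for site in SITES for p in (root / site / "labeled").glob("*.png")]

def train_universal_model(images: list[Path]):
    """Placeholder for training one crack-detection model on the pooled data."""
    ...

def deploy(model, sites: list[str]) -> None:
    """Ship the same model artifact to every press shop: one universal model,
    not one model per site."""
    for site in sites:
        print(f"deploying universal model to {site}")

images = collect_training_data(Path("/data/press-shops"))
model = train_universal_model(images)
deploy(model, SITES)
```
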
Keywords: Artificial intelligence, AI scaling, manufacturing, automotive industry, case study, digital transformation, quality control

REGULATING EMERGING TECHNOLOGIES: PROSPECTIVE SENSEMAKING THROUGH ABSTRACTION AND ELABORATION

Stefan Seidel, Christoph J. Frick, Jan vom Brocke
This study examines how various actors, including legal experts, government officials, and industry leaders, collaborated to create laws for new technologies like blockchain. Through a case study in Liechtenstein, it analyzes the process of developing a law on "trustworthy technology," focusing on how the participants collectively made sense of a complex and evolving subject to construct a new regulatory framework.

Problem
Governments face a significant challenge in regulating emerging digital technologies. They must create rules that prevent harmful effects and protect users without stifling innovation. This is particularly difficult when the full potential and risks of a new technology are not yet clear, creating regulatory gaps and uncertainty for businesses.

Outcome
- Creating effective regulation for new technologies is a process of 'collective prospective sensemaking,' where diverse stakeholders build a shared understanding over time.
- This process relies on two interrelated activities: 'abstraction' and 'elaboration'. Abstraction involves generalizing the essential properties of a technology to create flexible, technology-neutral rules that encourage innovation.
- Elaboration involves specifying details and requirements to provide legal certainty and protect users.
- Through this process, the regulatory target can evolve significantly, as seen in the case study's shift from regulating 'blockchain/cryptocurrency' to a broader, more durable law for the 'token economy' and 'trustworthy technology'.
Keywords: Technology regulation, prospective sensemaking, sensemaking, institutional construction, emerging technology, blockchain, token economy