Design Knowledge for Virtual Learning Companions from a Value-centered Perspective
Ricarda Schlimbach, Bijan Khosrawi-Rad, Tim C. Lange, Timo Strohmann, Susanne Robra-Bissantz
About
This study develops design principles for Virtual Learning Companions (VLCs), AI-powered chatbots designed to help students with motivation and time management. Using a design science research approach, the authors conducted interviews and workshops and built and tested several prototypes with students. The research aims to create a framework for designing VLCs that not only provide functional support but also build a supportive, companion-like relationship with the learner.
Problem
Working students in higher education often struggle to balance their studies with their jobs, leading to challenges with motivation and time management. While conversational AI tools such as ChatGPT are becoming common, they often lack the element of companionship and a holistic approach to learning support. This research addresses the gap in how to design AI learning tools that effectively integrate motivation, time management, and relationship-building from a value-centered perspective.
Outcome
- The study produced a comprehensive framework for designing Virtual Learning Companions (VLCs), resulting in 9 design principles, 28 meta-requirements, and 33 design features.
- The findings are structured around a “value-in-interaction” model, which proposes that a VLC's value is created across three interconnected layers: the Relationship Layer, the Matching Layer, and the Service Layer.
- Key design principles include creating a human-like and adaptive companion, enabling proactive and reactive behavior, building a trustworthy relationship, providing supportive content, and fostering a motivational and ethical learning environment.
- Evaluation of a coded prototype revealed that different student groups have different preferences, emphasizing that VLCs must be adaptable to their specific educational context and user needs to be effective.
Conversational Agent, Education, Virtual Learning Companion, Design Knowledge, Value
How Audi Scales Artificial Intelligence in Manufacturing
André Sagodi, Benjamin van Giffen, Johannes Schniertshauer, Klemens Niehues, Jan vom Brocke
About
This paper presents an in-depth case study of how the automotive manufacturer Audi successfully scaled an artificial intelligence (AI) solution for quality control in its manufacturing plants. The study outlines Audi's four-year journey to develop and deploy an AI system that automatically detects cracks in sheet metal parts. Based on this real-world example, the paper provides actionable recommendations for business leaders seeking to implement AI at scale.
Problem
While artificial intelligence offers significant potential to create business value, many companies struggle to move AI projects beyond the pilot or proof-of-concept stage. This failure to scale AI innovations, particularly in complex industrial environments like manufacturing, represents a major barrier to realizing a return on investment. This study addresses the gap between AI's potential and the practical challenges of widespread, value-driven implementation.
Outcome
- Audi successfully developed and scaled an AI-based visual inspection system across multiple press shops, significantly improving quality control for sheet metal parts.
- The success was built on a structured four-stage journey: exploring the initial idea, developing a scalable solution, implementing it within the existing IT infrastructure, and finally scaling it across multiple sites.
- A key strategy was to design the system for scalability from the outset by creating a single, universal AI model that could be deployed in various contexts, leveraging data from all locations to continuously improve performance (see the hypothetical sketch below).
- The study offers a roadmap for executives, recommending that AI scaling be treated as a strategic priority, that interdisciplinary collaboration be fostered, and that AI operations be streamlined through automation and robust governance.
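The paper does not disclose Audi's actual implementation; the following is a minimal, hypothetical Python sketch of the "single universal model" idea, in which labelled crack/no-crack images pooled from several plants train one shared classifier. The plant directories, the ResNet-18 backbone, and the hyperparameters are assumptions for illustration only, not details taken from the case study.

```python
# Hypothetical sketch: one shared crack-detection model trained on pooled
# image data from several press shops (plant names and paths are invented).
import torch
import torch.nn as nn
from torch.utils.data import ConcatDataset, DataLoader
from torchvision import datasets, models, transforms

# Identical preprocessing at every site, so a single model can serve all of them.
preprocess = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

# Pool labelled images ("ok" / "crack") collected at each plant (invented paths).
plants = ["data/plant_a", "data/plant_b", "data/plant_c"]
pooled = ConcatDataset([datasets.ImageFolder(p, preprocess) for p in plants])
loader = DataLoader(pooled, batch_size=32, shuffle=True)

# One universal binary classifier (crack vs. no crack) for all press shops.
model = models.resnet18(weights=None, num_classes=2)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

# The same trained weights are then rolled out to every site's inspection line.
torch.save(model.state_dict(), "universal_crack_detector.pt")
```

The design point the sketch mirrors is that one set of weights, trained on data from all locations, is deployed everywhere, so improvements driven by any single plant's data benefit the others.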
Artificial Intelligence, AI Scaling, Manufacturing, Automotive Industry, Case Study, Digital Transformation, Quality Control
Regulating Emerging Technologies: Prospective Sensemaking Through Abstraction and Elaboration
Stefan Seidel, Christoph J. Frick, Jan vom Brocke
About
This study examines how various actors, including legal experts, government officials, and industry leaders, collaborated to create laws for new technologies like blockchain. Through a case study in Liechtenstein, it analyzes the process of developing a law on "trustworthy technology," focusing on how the participants collectively made sense of a complex and evolving subject to construct a new regulatory framework.
Problem
Governments face a significant challenge in regulating emerging digital technologies. They must create rules that prevent harmful effects and protect users without stifling innovation. This is particularly difficult when the full potential and risks of a new technology are not yet clear, creating regulatory gaps and uncertainty for businesses.
Outcome
- Creating effective regulation for new technologies is a process of 'collective prospective sensemaking,' where diverse stakeholders build a shared understanding over time.
- This process relies on two interrelated activities: 'abstraction' and 'elaboration'. Abstraction involves generalizing the essential properties of a technology to create flexible, technology-neutral rules that encourage innovation.
- Elaboration involves specifying details and requirements to provide legal certainty and protect users.
- Through this process, the regulatory target can evolve significantly, as seen in the case study's shift from regulating 'blockchain/cryptocurrency' to a broader, more durable law for the 'token economy' and 'trustworthy technology'.