How Audi Scales Artificial Intelligence in Manufacturing
André Sagodi, Benjamin van Giffen, Johannes Schniertshauer, Klemens Niehues, Jan vom Brocke
This paper presents a case study on how the automotive manufacturer Audi successfully scaled an artificial intelligence (AI) solution for quality inspection in its manufacturing press shops. It analyzes Audi's four-year journey, from initial exploration to multi-site deployment, to identify key strategies and challenges. The study provides actionable recommendations for senior leaders aiming to capture business value by scaling AI innovations.
Problem
Many organizations struggle to move their AI initiatives from the pilot phase to full-scale operational use, failing to realize the technology's full economic potential. This is a particular challenge in manufacturing, where integrating AI with legacy systems and processes presents significant barriers. This study addresses how a company can overcome these challenges to successfully scale an AI solution and unlock long-term business value.
Outcome
- Audi successfully scaled an AI-based system to automate the detection of cracks in sheet metal parts, a crucial quality control step in its press shops.
- The success was driven by a strategic four-stage approach: Exploring, Developing, Implementing, and Scaling, with a focus on designing for scalability from the outset.
- Key success factors included creating a single, universal AI model for multiple deployments, leveraging data from various sources to improve the model, and integrating the solution into the broader Volkswagen Group's digital production platform to create synergies.
- The study highlights the importance of decoupling value from cost, which Audi achieved by automating monitoring and deployment pipelines, thereby scaling operations without proportionally increasing expenses.
- Recommendations for other businesses include making AI scaling a strategic priority, fostering collaboration between AI experts and domain specialists, and streamlining operations through automation and robust governance.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a challenge that trips up so many companies: taking artificial intelligence from a cool experiment to a large-scale business solution.
Host: We're looking at a fascinating new study from MIS Quarterly Executive titled, "How Audi Scales Artificial Intelligence in Manufacturing." It's a deep dive into the carmaker's four-year journey to deploy an AI solution across multiple sites, offering some brilliant, actionable advice for senior leaders.
Host: And to guide us through it, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. The study summary mentions that many organizations struggle to get their AI projects out of the pilot phase. Can you paint a picture of this problem for us?
Expert: Absolutely. It's often called "pilot purgatory." Companies build a successful AI proof-of-concept, but it never translates into real, widespread operational use. The study highlights that in 2019, only about 10% of automotive companies had implemented AI at scale. The gap between a pilot and an enterprise-grade system is massive.
Host: And what was the specific problem Audi was trying to solve?
Expert: They were focused on quality control in their press shops, where they stamp sheet metal into car parts like doors and hoods. A single press shop can produce over 3 million parts a year, and tiny, hard-to-see cracks can form in about one in every thousand parts. Finding these manually is slow and difficult, but missing them causes huge costs down the line.
Host: So a perfect, high-stakes problem for AI to tackle. How did the researchers go about studying Audi's approach?
Expert: They conducted an in-depth case study, tracking Audi's entire journey over four years. They analyzed how the company moved through four distinct stages: Exploring the initial idea, Developing the technology, Implementing it at the first site, and finally, Scaling it across the wider organization.
Host: So what were the key findings? How did Audi escape that "pilot purgatory" you mentioned?
Expert: There were a few critical factors. First, they designed for scale from the very beginning. It wasn't just about solving the problem for one press line; the goal was always a solution that could be rolled out to multiple factories.
Host: That foresight seems crucial. What else?
Expert: Second, and this is a key technical insight, they decided to build a single, universal AI model. Instead of creating a separate model for each press line or each car part, they built one core model and fed it image data from every deployment. This created a powerful network effect—the more data the model saw, the more accurate it became for everyone.
Host: So the system gets smarter and more valuable as it scales. That's brilliant.
Expert: Exactly. And third, they didn't build this in a vacuum. They integrated the AI solution into the larger Volkswagen Group's Digital Production Platform. This meant they could leverage existing infrastructure and align with the parent company's broader digital strategy, creating huge synergies.
Host: It sounds like this was about much more than just a clever algorithm. So, Alex, this is the most important question for our listeners: Why does this matter for my business, even if I'm not in manufacturing?
Expert: The lessons here are universal. The study boils them down into three key recommendations. First, make AI scaling a strategic priority. Don’t just fund isolated experiments. Focus on big, scalable business problems where AI can deliver substantial, long-term value.
Host: Okay, be strategic. What's the second takeaway?
Expert: Foster deep collaboration. This wasn’t just an IT project. Audi succeeded because their AI engineers worked hand-in-hand with the press shop experts on the factory floor. As one project leader put it, you have to involve the domain experts from day one to understand their pain points and create a shared sense of ownership.
Host: So it's about people, not just technology. And the final lesson?
Expert: Streamline operations through automation. Audi’s biggest win was what the study calls "decoupling value from cost." As they rolled the solution out to more sites, the value grew exponentially, but the costs stayed flat. They achieved this by automating the deployment and monitoring pipelines, so they didn't need to hire more engineers for each new factory.
Host: That is the holy grail of scaling any technology. Alex, this has been incredibly insightful. Let's do a quick recap.
Host: Many businesses get stuck in AI pilot mode. The case of Audi shows a way forward by following a strategic, four-stage approach. The key lessons for any business are to make scaling AI a core strategic goal, build cross-functional teams that pair tech experts with business experts, and automate your operations to ensure that value grows much faster than costs.
Host: Alex Ian Sutherland, thank you so much for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights — powered by Living Knowledge. We’ll see you next time.
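The episode describes the architecture only at a high level, so the following is purely an illustrative sketch in Python of the pattern discussed: one shared crack-detection model trained on pooled images from every press line, rolled out and monitored by an automated pipeline rather than by per-site engineering effort. All names (PressLine, train_universal_model, the drift threshold, and the site names used in the demo) are hypothetical and not taken from the paper.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class PressLine:
    """One deployment target, e.g. a press line at a given plant (hypothetical)."""
    site: str
    images: List[str] = field(default_factory=list)  # paths to labeled inspection images
    deployed_version: str = "none"

def train_universal_model(lines: List[PressLine]) -> str:
    """Train ONE model on the pooled data of all sites (training itself is stubbed out).

    Pooling is what creates the network effect described in the episode:
    every new site contributes data that improves detection for all sites.
    """
    pooled = [img for line in lines for img in line.images]
    return f"crack-detector-v{len(pooled)}"  # stand-in for a real training run

def deploy(model_version: str, lines: List[PressLine]) -> None:
    """Automated rollout: every registered line receives the same model version."""
    for line in lines:
        line.deployed_version = model_version
        print(f"[deploy] {line.site}: now running {model_version}")

def monitor(line: PressLine, false_negative_rate: float, threshold: float = 0.01) -> bool:
    """Automated monitoring: flag a site for retraining instead of manual review."""
    drifted = false_negative_rate > threshold
    if drifted:
        print(f"[monitor] {line.site}: quality drift detected, scheduling retraining")
    return drifted

if __name__ == "__main__":
    lines = [PressLine("Plant A", ["img_001.png", "img_002.png"]),
             PressLine("Plant B", ["img_101.png"])]
    version = train_universal_model(lines)  # one model for all sites
    deploy(version, lines)                  # same pipeline, no extra engineers per site
    monitor(lines[0], false_negative_rate=0.02)
```

The point of the sketch is the shape of the pipeline, not the model: because training, rollout, and monitoring are shared and automated, adding a site adds data and value but little marginal cost, which is the "decoupling value from cost" idea discussed above.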
Artificial Intelligence, AI Scaling, Manufacturing, Automotive Industry, Case Study, Digital Transformation, Quality Inspection
Translating AI Ethics Principles into Practice to Support Robotic Process Automation Implementation
Dörte Schulte-Derne, Ulrich Gnewuch
This study investigates how abstract AI ethics principles can be translated into concrete actions during technology implementation. Through a longitudinal case study at a German energy service provider, the authors observed the large-scale rollout of Robotic Process Automation (RPA) over 30 months. The research provides actionable recommendations for leaders to navigate the ethical challenges and employee concerns that arise from AI-driven automation.
Problem
Organizations implementing AI to automate processes often face uncertainty, fear, and resistance from employees. While high-level AI ethics principles exist to provide guidance, business leaders struggle to apply these abstract concepts in practice. This creates a significant gap between knowing *what* ethical goals to aim for and knowing *how* to achieve them during a real-world technology deployment.
Outcome
- Define clear roles for implementing and supervising AI systems, and ensure senior leaders accept overall responsibility for any negative consequences.
- Strive for a fair distribution of AI's benefits and costs among all employees, addressing tensions in a diverse workforce.
- Increase transparency by making the AI's work visible (e.g., allowing employees to observe a bot at a dedicated workstation) to turn fear into curiosity.
- Enable open communication among trusted peers, creating a 'safe space' for employees to discuss concerns without feeling judged.
- Help employees cope with fears by involving them in the implementation process and avoiding the overwhelming removal of all routine tasks at once.
- Involve employee representation bodies and data protection officers from the beginning of a new AI initiative to proactively address privacy and labor concerns.
Host: Welcome to A.I.S. Insights, the podcast where we connect big ideas with business practice. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating study from the MIS Quarterly Executive titled, "Translating AI Ethics Principles into Practice to Support Robotic Process Automation Implementation".
Host: It explores how abstract ethical ideas about AI can be turned into concrete actions when a company rolls out new technology. It follows a German energy provider over 30 months as they implemented large-scale automation, providing a real-world roadmap for leaders.
Host: With me is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Many business leaders listening have heard about AI ethics, but the study suggests there's a major disconnect. What's the core problem they identified?
Expert: The problem is a classic gap between knowing *what* to do and knowing *how* to do it. Companies have access to high-level principles like fairness, transparency, and responsibility. But when it's time to automate a department's workflow, managers are often left wondering, "What does 'fairness' actually look like on a Tuesday morning for my team?"
Expert: This uncertainty creates fear and resistance among employees. They worry about their jobs, their routines get disrupted, and they often see AI as a threat. The study looked at a company, called ESP, that was facing this exact dilemma.
Host: So how did the researchers get inside this problem to understand it?
Expert: They used a longitudinal case study approach. For two and a half years, they were deeply embedded in the company. They conducted interviews, surveys, and on-site observations with everyone involved—from the back-office employees whose tasks were being automated, to the project managers, and even senior leaders and the employee works council.
Host: That deep-dive approach must have surfaced some powerful findings. What were the key takeaways?
Expert: Absolutely. The first was about responsibility. It can't be an abstract concept. At ESP, when the IT helpdesk was asked to create a user account for a bot, they initially refused, asking who would be personally responsible if it made a mistake.
Host: That's a very practical roadblock. How did the company solve it?
Expert: They had to define clear roles, creating a "bot supervisor" who was accountable for the bot's daily operations. But more importantly, they established that senior leadership, not just the tech team, had to accept ultimate responsibility for any negative outcomes.
Host: That makes sense. The study also mentions transparency. How do you make something like a software bot, which is essentially invisible, transparent to a nervous workforce?
Expert: This is one of my favorite findings. ESP set up a dedicated workstation in the middle of the office where anyone could walk by and watch the bot perform its tasks on screen. To prevent people from accidentally turning it off, they put a giant teddy bear in the chair, which they named "Robbie".
Host: A teddy bear?
Expert: Exactly. It was a simple, humanizing touch. It made the technology feel less like a mysterious, threatening force and more like a tool. It literally turned employee fear into curiosity.
Host: So it's about demystifying the technology. What about helping employees cope with the changes to their actual jobs?
Expert: The key was gradual involvement and open communication. Instead of top-down corporate announcements, they found that peer-to-peer conversations were far more effective. They created safe spaces where employees could talk to trusted colleagues who had already worked with the bots, ask honest questions, and voice their concerns without being judged.
Host: It sounds like the human element was central to this technology rollout. Alex, let’s get to the bottom line. For the business leaders listening, why does all of this matter? What are the key takeaways for them?
Expert: I think there are three critical takeaways. First, AI ethics is not a theoretical exercise; it's a core part of project risk management. Ignoring employee concerns doesn't make them go away—it just leads to resistance and potential project failure.
Expert: Second, make the invisible visible. Whether it's a teddy bear on a chair or a live dashboard, find creative ways to show employees what the AI is actually doing. A little transparency goes a long way in building trust.
Expert: And finally, involve your stakeholders from day one. That means bringing your employee representatives, your data protection officers, and your legal teams into the conversation early. In the study, the data protection officer stopped a "task mining" initiative due to privacy concerns, saving the company time and resources on a project that was a non-starter.
Host: So, it's about being proactive with responsibility, transparency, and communication.
Expert: Precisely. It’s about treating the implementation not just as a technical challenge, but as a human one.
Host: A fantastic summary of a very practical study. The message is clear: to succeed with AI automation, you have to translate ethical principles into thoughtful, tangible actions that build trust with your people.
Host: Alex Ian Sutherland, thank you for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thanks to all of you for listening to A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more valuable lessons from the intersection of business and technology.
AI ethics, Robotic Process Automation (RPA), change management, technology implementation, case study, employee resistance, ethical guidelines
Establishing a Low-Code/No-Code-Enabled Citizen Development Strategy
Björn Binzer, Edona Elshan, Daniel Fürstenau, Till J. Winkler
This study analyzes the low-code/no-code adoption journeys of 24 different companies to understand the challenges and best practices of citizen development. Drawing on these insights, the paper proposes a seven-step strategic framework designed to guide organizations in effectively implementing and managing these powerful tools. The framework helps structure critical design choices to empower employees with little or no IT background to create digital solutions.
Problem
There is a significant gap between the high demand for digital solutions and the limited availability of professional software developers, which constrains business innovation and problem-solving. While low-code/no-code platforms enable non-technical employees (citizen developers) to build applications, organizations often lack a coherent strategy for their adoption. This leads to inefficiencies, security risks, compliance issues, and wasted investments.
Outcome
- The study introduces a seven-step framework for creating a citizen development strategy: Coordinate Architecture, Launch a Development Hub, Establish Rules, Form the Workforce, Orchestrate Liaison Actions, Track Successes, and Iterate the Strategy.
- Successful implementation requires a balance between centralized governance and individual developer autonomy, using 'guardrails' rather than rigid restrictions.
- Key activities for scaling the strategy include the '5E Cycle': Evangelize, Enable, Educate, Encourage, and Embed citizen development within the organization's culture.
- Recommendations include automating governance tasks, promoting business-led development initiatives, and encouraging the use of these tools by IT professionals to foster a collaborative relationship between business and IT units.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating new study titled "Establishing a Low-Code/No-Code-Enabled Citizen Development Strategy".
Host: It explores how companies can strategically empower their own employees—even those with no IT background—to create digital solutions using low-code and no-code tools. Joining me to unpack this is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, let’s start with the big picture. Why is a study like this so necessary right now? What’s the core problem businesses are facing?
Expert: The problem is a classic case of supply and demand. The demand for digital solutions, for workflow automations, for new apps, is skyrocketing. But the supply of professional software developers is extremely limited and expensive. This creates a huge bottleneck that slows down innovation.
Host: And companies are turning to low-code platforms as a solution?
Expert: Exactly. They hope to turn regular employees into “citizen developers.” The issue is, most companies just buy the software and hope for the best, a sort of "build it and they will come" approach.
Expert: But without a real strategy, this can lead to chaos. We're talking security risks, compliance issues, duplicated efforts, and ultimately, wasted money. It's like giving everyone power tools without any blueprints or safety training.
Host: That’s a powerful analogy. So how did the researchers in this study figure out what the right approach should be?
Expert: They went straight to the source. They conducted in-depth interviews with leaders, managers, and citizen developers at 24 different companies that were already on this journey. They analyzed their successes, their failures, and the best practices that emerged.
Host: A look inside the real-world lab. What were some of the key findings that came out of that?
Expert: The study's main outcome is a seven-step strategic framework. It covers everything from coordinating the technology architecture to launching a central support hub and tracking successes.
Host: Can you give us an example?
Expert: One of the most critical findings was the need for balance between control and freedom. The study found that rigid, restrictive rules don't work. Instead, successful companies create ‘guardrails.’
Expert: One manager used a great analogy, saying, "if the guardrails are only 50 centimeters apart, I can only ride through with a bicycle, not a truck. Ultimately, we want to achieve that at least cars can drive through." It’s about enabling people safely, not restricting them.
Host: I love that. So it's not just about rules, but about creating the right environment.
Expert: Precisely. The study also identified what it calls the ‘5E Cycle’: Evangelize, Enable, Educate, Encourage, and Embed. This is a process for making citizen development part of the company’s DNA, to build a culture where people are excited and empowered to innovate.
Host: This is where it gets really practical. Let's talk about why this matters for a business leader. What are the key takeaways they can act on?
Expert: The first big takeaway is to promote business-led citizen development. This shouldn't be just another IT project. The study shows that the most successful initiatives are driven by the business units themselves, with 'digital leads' or champions who understand their department's specific needs.
Host: So, ownership moves from the IT department to the business itself. What else?
Expert: The second is to automate governance wherever possible. Instead of manual checks for every new app, companies can use automated tools—often built with low-code itself—to check for security issues or compliance. This frees up IT to focus on bigger problems and empowers citizen developers to move faster.
Host: And the final key takeaway?
Expert: It’s about fostering a new, symbiotic relationship between business and IT. For decades, IT has often been seen as the department of "no." This study shows how citizen development can be a bridge. One leader admitted that building trust was their biggest hurdle, but now IT is seen as a valuable partner that enables transformation.
Host: It sounds like this is about much more than just technology; it’s a fundamental shift in how work gets done.
Expert: Absolutely. It’s about democratizing digital innovation.
Host: Fantastic insights, Alex. To sum it up for our listeners: the developer shortage is a major roadblock, but simply buying low-code tools isn't the answer.
Host: This study highlights the need for a clear strategy, one that uses flexible guardrails, builds a supportive culture, and transforms the relationship between business and IT from a source of friction to a true partnership.
Host: Alex Ian Sutherland, thank you so much for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning into A.I.S. Insights. Join us next time as we continue to explore the ideas shaping the future of business.
Citizen Development, Low-Code, No-Code, Digital Transformation, IT Strategy, Governance Framework, Upskilling
The Promise and Perils of Low-Code AI Platforms
Maria Kandaurova, Daniel A. Skog, Petra M. Bosch-Sijtsema
This study investigates the adoption of a low-code conversational Artificial Intelligence (AI) platform within four multinational corporations. Through a case study approach, the research identifies significant challenges that arise from fundamental, yet incorrect, assumptions about low-code technologies. The paper offers recommendations for companies to better navigate the implementation process and unlock the full potential of these platforms.
Problem
As businesses increasingly turn to AI for process automation, they often encounter significant hurdles during adoption. Low-code AI platforms are marketed as a solution to simplify this process, but there is limited research on their real-world application. This study addresses the gap by showing how companies' false assumptions about the ease of use, adaptability, and integration of these platforms can limit their effectiveness and return on investment.
Outcome
- The usability of low-code AI platforms is often overestimated; non-technical employees typically face a much steeper learning curve than anticipated and still require a foundational level of coding and AI knowledge.
- Adapting low-code AI applications to specific, complex business contexts is challenging and time-consuming, contrary to the assumption of easy tailoring. It often requires significant investment in standardizing existing business processes first.
- Integrating low-code platforms with existing legacy systems and databases is not a simple 'plug-and-play' process. Companies face significant challenges due to incompatible data formats, varied interfaces, and a lack of a comprehensive data strategy.
- Successful implementation requires cross-functional collaboration between IT and business teams, thorough platform testing before procurement, and a strategic approach to reengineering business processes to align with AI capabilities.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a very timely topic for any business looking to innovate: the real-world challenges of adopting new technology. We’ll be discussing a fascinating study titled "The Promise and Perils of Low-Code AI Platforms."
Host: This study looks at how four major corporations adopted a low-code conversational AI platform, and it uncovers some crucial, and often incorrect, assumptions that businesses make about these powerful tools. Here to break it down for us is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Businesses are constantly hearing about AI and automation. What’s the core problem that these low-code AI platforms are supposed to solve?
Expert: The problem is a classic one: a gap between ambition and resources. Companies want to automate processes, build chatbots, and leverage AI, but they often lack large teams of specialized AI developers. Low-code platforms are marketed as the perfect solution.
Host: The 'democratization' of AI we hear so much about.
Expert: Exactly. The promise is that you can use a simple, visual, drag-and-drop interface to build complex AI applications, empowering your existing business-focused employees to innovate without needing to write a single line of code. But as the study found, that promise often doesn't match the reality.
Host: So how did the researchers investigate this gap between promise and reality?
Expert: They took a very practical approach. They didn't just survey people; they conducted an in-depth case study. They followed the journey of four large multinational companies—in the energy, automotive, and retail sectors—as they all tried to implement the very same low-code conversational AI platform.
Host: That’s great. So by studying the same platform across different industries, they could really pinpoint the common challenges. What were the main findings?
Expert: The findings centered on three major false assumptions businesses made. The first was about usability. The assumption was that ‘low-code’ meant anyone could do it.
Host: And that wasn't the case?
Expert: Not at all. While the IT staff found it user-friendly, the business-side employees—the ones who were supposed to be empowered—faced a much steeper learning curve than anyone anticipated. One domain expert in the study described the experience as being "like Greek," saying it was far more complex than just "dragging and dropping."
Host: So you still need a foundational level of technical knowledge. What was the second false assumption?
Expert: It was about adaptability. The idea was that you could easily tailor these platforms to any specific business need. But creating applications to handle complex, real-world customer queries proved incredibly challenging and time-consuming.
Host: Why was that?
Expert: Because real business processes are often messy and rely on human intuition. The study found that before companies could automate a process, they first had to invest heavily in understanding and standardizing it. You can't teach an AI a process that isn't clearly defined.
Host: That makes sense. You have to clean your house before you can automate the cleaning. What was the final key finding?
Expert: This one is huge for any CIO: integration. The belief was that these platforms would be a simple 'plug-and-play' solution that could easily connect to existing company databases and systems.
Host: I have a feeling it wasn't that simple.
Expert: Far from it. The companies ran into major roadblocks trying to connect the platform to their legacy systems. They faced incompatible data formats and a lack of a unified data strategy. The study showed that you often need someone with knowledge of coding and APIs to build the bridges between the new platform and the old systems.
Host: So, Alex, this is the crucial part for our listeners. If a business leader is considering a low-code AI tool, what are the key takeaways? What should they do differently?
Expert: The study provides a clear roadmap. First, thoroughly test the platform before you buy it. Don't just watch the vendor's demo. Have your actual employees—the business users—try to build a real-world application with it. This will reveal the true learning curve.
Host: A 'try before you buy' approach. What else?
Expert: Second, success requires cross-functional collaboration. It’s not an IT project or a business project; it's both. The study highlighted that the most successful implementations happened when IT experts and business domain experts worked together in blended teams from day one.
Host: So break down those internal silos.
Expert: Absolutely. And finally, be prepared to change your processes, not just your tools. You can't just layer AI on top of existing workflows. You need to re-evaluate and often redesign your processes to align with the capabilities of the AI. It's as much about business process re-engineering as it is about technology.
Host: This is incredibly insightful. It seems low-code AI platforms are powerful, but they are certainly not a magic bullet.
Host: To sum it up: the promise of simplicity with these platforms often hides significant challenges in usability, adaptation, and integration. Success depends less on the drag-and-drop interface and more on a strategic approach that involves rigorous testing, deep collaboration between teams, and a willingness to rethink your fundamental business processes.
Host: Alex, thank you so much for shedding light on the perils, and the real promise, of these platforms.
Expert: My pleasure, Anna.
Host: And a big thank you to our audience for tuning into A.I.S. Insights. We’ll see you next time.
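The study does not publish the companies' integration code, but the point about needing coding and API knowledge to bridge a low-code platform and legacy systems can be illustrated with a small, hypothetical Python adapter. It converts a legacy record (compact dates, cryptic status codes, uppercase field names) into the cleaner payload a conversational-AI flow might expect; the field names and the legacy_order_lookup stub are invented for illustration and are not from the paper or any specific platform.

```python
import json
from datetime import datetime

def legacy_order_lookup(order_id: str) -> dict:
    """Stand-in for a call to an aging order-management system.

    Legacy systems often return dates, codes, and field names that the
    low-code platform cannot consume directly -- hence the adapter below.
    """
    return {"ORD_NO": order_id, "STAT_CD": "02", "DLVRY_DT": "20240517"}

STATUS_LABELS = {"01": "received", "02": "in transit", "03": "delivered"}

def to_chatbot_payload(raw: dict) -> dict:
    """Normalize the legacy record into the shape the conversational flow expects."""
    delivery = datetime.strptime(raw["DLVRY_DT"], "%Y%m%d").date()
    return {
        "orderId": raw["ORD_NO"],
        "status": STATUS_LABELS.get(raw["STAT_CD"], "unknown"),
        "deliveryDate": delivery.isoformat(),  # ISO date instead of the compact legacy format
    }

if __name__ == "__main__":
    # In practice the low-code platform would call an endpoint running glue code like this;
    # the visual flow stays simple because the format mismatch is handled here.
    print(json.dumps(to_chatbot_payload(legacy_order_lookup("A-1027")), indent=2))
```

Writing and maintaining this kind of glue is exactly the residual coding work the episode says 'plug-and-play' marketing tends to hide.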
Low-Code AI Platforms, Artificial Intelligence, Conversational AI, Implementation Challenges, Digital Transformation, Business Process Automation, Case Study
Combining Low-Code/No-Code with Noncompliant Workarounds to Overcome a Corporate System's Limitations
Robert M. Davison, Louie H. M. Wong, Steven Alter
This study explores how employees at a warehouse in Hong Kong utilize low-code/no-code principles with everyday tools like Microsoft Excel to create unofficial solutions. It examines these noncompliant but essential workarounds that compensate for the shortcomings of their mandated corporate software system. The research is based on a qualitative case study involving interviews with warehouse staff.
Problem
A global company implemented a standardized, non-customizable corporate system (Microsoft Dynamics) that was ill-suited for the unique logistical needs of its Hong Kong operations. This created significant operational gaps, particularly in delivery scheduling, leaving employees unable to perform critical tasks using the official software.
Outcome
- Employees effectively use Microsoft Excel as a low-code tool to create essential, noncompliant workarounds that are vital for daily operations, such as delivery management.
- These employee-driven solutions, developed without formal low-code platforms or IT approval, become institutionalized and crucial for business success, highlighting the value of 'shadow IT'.
- The study argues that low-code/no-code development is not limited to formal platforms and that managers should recognize, support, and govern these informal solutions.
- Businesses are advised to adopt a portfolio approach to low-code development, leveraging tools like Excel alongside formal platforms, to empower employees and solve real-world operational problems.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating study titled "Combining Low-Code/No-Code with Noncompliant Workarounds to Overcome a Corporate System's Limitations."
Host: It explores how employees at a warehouse in Hong Kong used everyday tools, like Microsoft Excel, to create unofficial but essential solutions when their official corporate software fell short.
Host: To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome back.
Expert: Great to be here, Anna.
Host: So, Alex, let's start with the big picture. What was the real-world problem this study looked into?
Expert: It’s a classic story of a global headquarters rolling out a one-size-fits-all solution. The company, called CoreRidge in the study, implemented a standardized corporate software, Microsoft Dynamics.
Expert: The problem was, this system was completely non-customizable. It worked fine in most places, but it was a disaster for their Hong Kong operations.
Host: A disaster how? What was so unique about Hong Kong?
Expert: In Hong Kong, due to the high cost of real estate, the company has small retail stores and one large, central warehouse. The corporate software was designed for locations where the warehouse and store are together.
Expert: It simply couldn't handle the complex delivery scheduling needed to get products from that single warehouse to all the different stores and customers. Core tasks were impossible to perform with the official system.
Host: So employees were stuck. How did the researchers figure out what was happening?
Expert: They went right to the source. It was a qualitative case study where they conducted in-depth interviews with 31 employees at the warehouse, from trainees all the way up to senior management. This gave them a ground-level view of how the team was actually getting work done.
Host: And that brings us to the findings. What did they discover?
Expert: They found that employees had essentially turned Microsoft Excel into their own low-code development tool. They were downloading data from the official system and using Excel to manage everything from delivery lists to rescheduling shipments during a typhoon.
Host: So they built their own system, in a way.
Expert: Exactly. And this wasn't a secret, rogue operation. These Excel workarounds became standard operating procedure. They were noncompliant with corporate IT policy, but they were absolutely vital for daily operations and customer satisfaction. The study calls this 'shadow IT', but frames it as a valuable, employee-driven innovation.
Host: That’s a really interesting perspective. It sounds like the company should be celebrating these employees, not punishing them.
Expert: That’s the core argument. The study suggests that this kind of informal, tool-based problem-solving is a legitimate form of low-code development. It’s not always about using a fancy, dedicated platform. Sometimes the best tool is the one your team already knows how to use.
Host: This is the crucial part for our listeners. What are the key business takeaways here? Why does this matter?
Expert: It matters immensely. First, it shows that managers need to recognize and support these informal solutions, not just shut them down. These workarounds are a goldmine of information about what's not working in your official systems.
Host: So, don't fight 'shadow IT', but try to understand it?
Expert: Precisely. The second major takeaway is that businesses should adopt a "portfolio approach" to low-code development. Don't just invest in one big platform. Empower your employees by recognizing the value of flexible, everyday tools like Excel.
Expert: It’s about creating a governance structure that can embrace these informal solutions, manage their risks, and learn from them to make the whole organization smarter and more agile.
Host: It sounds like a shift from rigid, top-down control to a more flexible, collaborative approach to technology.
Expert: That's it exactly. It's about trusting your employees on the front lines to solve the problems they face every day, with the tools they have at hand.
Host: So, to summarize: a rigid corporate system can fail to meet local needs, but resourceful employees can bridge the gap using everyday tools like Excel. And the big lesson for businesses is to recognize, govern, and learn from these informal innovations rather than just trying to eliminate them.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And a big thank you to our audience for tuning in to A.I.S. Insights. Join us next time as we continue to explore the ideas shaping our world, powered by Living Knowledge.
Low-Code/No-Code, Workarounds, Shadow IT, Citizen Development, Enterprise Systems, Case Study, Microsoft Excel
Governing Citizen Development to Address Low-Code Platform Challenges
Altus Viljoen, Marija Radić, Andreas Hein, John Nguyen, Helmut Krcmar
This study investigates how companies can effectively manage 'citizen development'—where employees with minimal technical skills use low-code platforms to build applications. Drawing on 30 interviews with citizen developers and platform experts across two firms, the research provides a practical governance framework to address the unique challenges of this approach.
Problem
Companies face a significant shortage of skilled software developers, leading them to adopt low-code platforms that empower non-IT employees to create applications. However, this trend introduces serious risks, such as poor software quality, unmonitored development ('shadow IT'), and long-term maintenance burdens ('technical debt'), which organizations are often unprepared to manage.
Outcome
- Citizen development introduces three primary risks: substandard software quality, shadow IT, and technical debt.
- Effective governance requires a more nuanced understanding of roles, distinguishing between 'traditional citizen developers' and 'low-code champions,' and three types of technical experts who support them.
- The study proposes three core sets of recommendations for governance: 1) strategically manage project scope and complexity, 2) organize effective collaboration through knowledge bases and proper tools, and 3) implement targeted education and training programs.
- Without strong governance, the benefits of rapid, decentralized development are quickly outweighed by escalating risks and costs.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating area where business and IT are blurring lines: citizen development. We’re looking at a new study titled "Governing Citizen Development to Address Low-Code Platform Challenges".
Host: It investigates how companies can effectively manage employees who, with minimal technical skills, are now building their own applications using what are called low-code platforms. With me to break it all down is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, let’s start with the big picture. Why are companies turning to their own non-technical employees to build software in the first place? What’s the problem this study is trying to solve?
Expert: The core problem is a massive, ongoing shortage of skilled software developers. Companies have huge backlogs of IT projects, but they can't hire developers fast enough. So, they turn to low-code platforms, which are tools with drag-and-drop interfaces that let almost anyone build a simple application.
Host: That sounds like a perfect solution. Democratize development and get things done faster.
Expert: It sounds perfect, but the study makes it clear that this introduces a whole new set of serious risks that organizations are often unprepared for. They identified three major challenges.
Host: And what are they?
Expert: First is simply substandard software quality. An app built by someone in marketing might look fine, but as the study found, it could be running "slow queries" or be "badly planned," hurting the performance of the entire system.
Expert: Second is the rise of 'shadow IT'. Employees build things on their own without oversight, which can lead to security issues, data protection breaches, or simply chaos. One developer in the study noted they had a role that was "almost as powerful as a normal developer" and could "damage a few things" if they weren't careful.
Expert: And third is technical debt. An employee builds a useful tool, then they leave the company. The study asks, who maintains it? Often, nobody. Or people just keep creating duplicate apps, leading to a messy and expensive digital junkyard.
Host: So, how did the researchers get to the bottom of this? What was their approach?
Expert: They took a very practical, real-world approach. They conducted 30 in-depth interviews across two different firms. One was a company using a low-code platform, and the other was a company that actually provides a low-code platform. This gave them a 360-degree view from both the user and the expert perspective.
Host: It sounds comprehensive. So, after all those conversations, what were the key findings? What's the solution here?
Expert: The biggest finding is that simply having "developers" and "non-developers" is the wrong way to think about it. Effective governance requires a much more nuanced understanding of the roles people play.
Host: What kind of roles did they find?
Expert: They identified two key types of citizen developers. You have your 'traditional citizen developer,' who builds a simple app for their team. But more importantly, they found what they call 'low-code champions.' These are business users who become passionate experts and act as a bridge between their colleagues and IT. They become the "poster children" for the program.
Host: That’s a powerful idea. So it’s about nurturing internal talent, not just letting everyone run wild.
Expert: Exactly. And to support them, the study proposes a clear, three-part governance framework. First, strategically manage project scope. Don’t let citizen developers build highly complex, mission-critical systems. Guide them to appropriate, simpler use cases.
Expert: Second, organize effective collaboration. This means creating a central knowledge base with answers to common questions and using standard collaboration tools so people aren't constantly reinventing the wheel or flooding experts with the same support tickets.
Expert: And third, implement targeted education. This isn't just about teaching them to use the software. It’s about training on best practices, data security, and identifying those enthusiastic employees who can become your next 'low-code champions.'
Host: This is the crucial part for our listeners. What does this all mean for business leaders? What are the key takeaways?
Expert: The first takeaway is this: don't just buy a low-code platform, build a program around it. Governance isn't about restriction; it's about creating the guardrails for success. The study warns that without it, the benefits of speed are "quickly outweighed by escalating risks and costs."
Expert: The second, and I think most important, is to actively identify and empower your 'low-code champions'. These people are your force multipliers. They can handle onboarding, answer basic questions, and promote best practices within their business units, which frees up your IT team to focus on bigger things.
Expert: And finally, start small and be strategic. The goal of citizen development shouldn't be to replace your IT department, but to supplement it. Empowering a sales team to automate its own reporting workflow is a huge win. Asking them to rebuild the company’s CRM is a disaster waiting to happen.
Host: Incredibly clear advice. The promise of empowering your workforce with these tools is real, but it requires a thoughtful strategy to avoid the pitfalls.
Host: To summarize, success with citizen development hinges on a strong governance framework. That means strategically managing what gets built, organizing how people collaborate and get support, and investing in targeted education to create internal champions.
Host: Alex Ian Sutherland, thank you so much for breaking down this complex topic into such actionable insights.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights. We'll see you next time.
citizen development, low-code platforms, IT governance, shadow IT, technical debt, software quality, case study
How GuideCom Used the Cognigy.AI Low-Code Platform to Develop an AI-Based Smart Assistant
Imke Grashoff, Jan Recker
This case study investigates how GuideCom, a medium-sized German software provider, utilized the Cognigy.AI low-code platform to create an AI-based smart assistant. The research follows the company's entire development process to identify the key ways in which low-code platforms enable and constrain AI development. The study illustrates the strategic trade-offs companies face when adopting this approach.
Problem
Small and medium-sized enterprises (SMEs) often lack the extensive resources and specialized expertise required for in-house AI development, while off-the-shelf solutions can be too rigid. Low-code platforms are presented as a solution to democratize AI, but there is a lack of understanding regarding their real-world impact. This study addresses the gap by examining the practical enablers and constraints that firms encounter when using these platforms for AI product development.
Outcome
- Low-code platforms enable AI development by reducing complexity through visual interfaces, facilitating cross-functional collaboration between IT and business experts, and preserving resources.
- Key constraints of using low-code AI platforms include challenges with architectural integration into existing systems, ensuring the product is expandable for different clients and use cases, and managing security and data privacy concerns.
- Contrary to the 'no-code' implication, existing software development skills are still critical for customizing solutions, re-engineering code, and overcoming platform limitations, especially during testing and implementation.
- Establishing a strong knowledge network with the platform provider (for technical support) and innovation partners like clients (for domain expertise and data) is a crucial factor for success.
- The decision to use a low-code platform is a strategic trade-off; it significantly lowers the barrier to entry for AI innovation but requires careful management of platform dependencies and inherent constraints.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating case study called "How GuideCom Used the Cognigy.AI Low-Code Platform to Develop an AI-Based Smart Assistant".
Host: It explores how a medium-sized company built its first AI product using a low-code platform, and what that journey reveals about the strategic trade-offs of this popular approach.
Host: To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna.
Host: Alex, let's start with the big picture. What's the real-world problem this study is tackling?
Expert: The problem is something many businesses, especially small and medium-sized enterprises or SMEs, are facing. They know they need to adopt AI to stay competitive, but they often lack the massive budgets or specialized teams of data scientists and AI engineers to build solutions from scratch.
Host: And I imagine off-the-shelf products can be too restrictive?
Expert: Exactly. They’re often not a perfect fit. Low-code platforms promise a middle ground—a way to "democratize" AI development. But there's been a gap in understanding what really happens when a company takes this path. This study fills that gap.
Host: So how did the researchers approach this? What did they do?
Expert: They conducted an in-depth case study. They followed a German software provider, GuideCom, for over 16 months as they developed their first AI product—a smart assistant for HR services—using a low-code platform called Cognigy.AI.
Host: It sounds like they had a front-row seat to the entire process. So, what were the key findings? Did the low-code platform live up to the hype?
Expert: It was a story of enablers and constraints. On the positive side, the platform absolutely enabled AI development. Its visual, drag-and-drop interface dramatically reduced complexity.
Host: How did that help in practice?
Expert: It was crucial for fostering collaboration. Suddenly, the business experts from the HR department could work directly with the IT developers. They could see the logic, understand the process, and contribute meaningfully, which is often a huge challenge in tech projects. It also saved a significant amount of resources.
Host: That sounds fantastic. But you also mentioned constraints. What were the challenges?
Expert: The constraints were very real. The first was architectural integration. Getting the AI tool, built on an external platform, to work smoothly with GuideCom’s existing software suite was a major hurdle.
Host: And what else?
Expert: Security and expandability. They needed to ensure the client’s data was secure, and they wanted the product to be scalable for many different clients, each with unique needs. The platform had limitations that made this complex.
Host: So 'low-code' doesn't mean 'no-skills needed'?
Expert: That's perhaps the most critical finding. GuideCom's existing software development skills were absolutely essential. They had to write custom code and re-engineer parts of the solution to overcome the platform's limitations and meet their security and integration needs. The promise of 'no-code' wasn't the reality.
Host: This brings us to the most important question for our listeners: why does this matter for business? What are the practical takeaways?
Expert: The biggest takeaway is that adopting a low-code AI platform is a strategic trade-off, not a magic bullet. It brilliantly lowers the barrier to entry, allowing companies to start innovating with AI without a massive upfront investment. That’s a game-changer.
Host: But there's a 'but'.
Expert: Yes. But you must manage the trade-offs. Firstly, you become dependent on the platform provider, so you need to choose your partner carefully. Secondly, you cannot neglect in-house technical skills. You still need people who can code to handle customization and integration.
Host: The study also mentioned the importance of partnerships, didn't it?
Expert: It was a crucial factor for success. GuideCom built a strong knowledge network. They had a close relationship with the platform provider, Cognigy, for technical support, and they partnered with a major bank as their first client. This client provided invaluable domain expertise and real-world data to train the AI.
Host: A powerful combination of technical and business partners.
Expert: Precisely. You need both to succeed.
Host: This has been incredibly insightful. So to summarize for our listeners: Low-code platforms can be a powerful gateway for companies to start building AI solutions, as they reduce complexity and foster collaboration.
Host: However, it's a strategic trade-off. Businesses must be prepared for challenges with integration and security, retain in-house software skills for customization, and build a strong network with both the platform provider and innovation partners.
Host: Alex, thank you so much for breaking this down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the future of business and technology.
low-code development, AI development, smart assistant, conversational AI, case study, digital transformation, SME
Emergence of IT Implementation Consequences in Organizations: An Assemblage Approach
Abdul Sesay, Elena Karahanna, and Marie-Claude Boudreau
This study investigates how the effects of new technology, specifically body-worn cameras (BWCs), unfold within organizations over time. Using a multi-site case study of three U.S. police departments, the research develops a process model to explain how the consequences of IT implementation emerge. The study identifies three key phases in this process: individuation (selecting the technology and related policies), composition (combining the technology with users), and actualization (using the technology in real-world interactions).
Problem
When organizations implement new technology, the results are often unpredictable, with outcomes varying widely between different settings. Existing research has not fully explained why a technology can be successful in one organization but fail in another. This study addresses the gap in understanding how the consequences of a new technology, like police body-worn cameras, actually develop and evolve into established organizational practices.
Outcome
- The process through which technology creates new behaviors and practices is complex and non-linear, occurring in three distinct phases (individuation, composition, and actualization).
- Successful implementation is not guaranteed; it depends on the careful alignment of the technology itself (material components) with policies, training, and user adoption (expressive components) at each stage.
- The study found that of the three police departments, only one successfully implemented body cameras because it carefully selected high-quality equipment, developed specific policies for its use, and ensured officers were trained and held accountable.
- The other two departments experienced failure or delays due to poor quality equipment, generic policies, and inconsistent use, which prevented new, positive practices from taking hold.
- The model shows that outcomes emerge over time and may require continuous adjustments, demonstrating that success is an ongoing process, not a one-time event.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating question that plagues nearly every organization: why do some technology projects succeed while others fail? With me is our expert analyst, Alex Ian Sutherland, who has been looking into a study on this very topic.
Host: Alex, welcome to the show.
Expert: Great to be here, Anna.
Host: The study we're discussing is titled, "Emergence of IT Implementation Consequences in Organizations: An Assemblage Approach." Can you start by telling us what it's all about?
Expert: Absolutely. In simple terms, this study investigates how the real-world effects of a new technology unfold over time. It uses the rollout of body-worn cameras in three different U.S. police departments to create a model that explains how you get from just buying a new gadget to it actually changing how people work.
Host: And this is a huge issue for businesses. You invest millions in a new system, and the results can be completely unpredictable.
Expert: That's the core problem the study addresses. Why can the exact same technology be a game-changer in one organization but a total flop in the one next door? Existing theories haven’t fully explained this variation. The researchers wanted to understand the step-by-step process of how the consequences of new tech, whether good or bad, actually emerge.
Host: So how did they go about studying this? What was their approach?
Expert: They conducted a multi-site case study, deeply embedding themselves in three different police departments—a large urban one, a mid-sized suburban one, and a small-town one. Instead of just looking at the technology itself, they looked at how it was combined with policies, training, and the officers who had to use it every day.
Host: It sounds like they were looking at the entire ecosystem, not just the device. So, what were the key findings?
Expert: The study found that the process happens in three distinct phases. The first is what they call ‘individuation’. This is the selection phase—choosing the right cameras and, just as importantly, writing the specific policies for how they should be used.
Host: Okay, so the planning and purchasing stage. What's next?
Expert: Next is ‘composition’. This is where the tech meets the user. It's about physically combining the camera with the officer, providing training, and making sure the two can function together seamlessly. It’s about building a new combined unit: the officer-with-a-camera.
Host: And the final phase?
Expert: That’s ‘actualization’. This is when the technology is used in real-world situations, during interactions with the public. This is where new behaviors, like improved communication or more consistent evidence gathering, either become routine and successful, or the whole thing falls apart.
Host: And did they see different outcomes across the three police departments?
Expert: Dramatically different. Only one department truly succeeded. They carefully selected high-quality equipment after a pilot program, developed very specific policies with stakeholder input, and had strict training and accountability. The other two departments failed or faced major delays.
Host: Why did they fail?
Expert: For predictable reasons, in hindsight. One used subpar, unreliable cameras that often malfunctioned. Both used generic policies that weren't tailored to body cameras at all. In one case, the policy didn't even mention body cameras. This misalignment between the technology and the rules meant that positive new practices never took hold.
Host: This is the crucial part, Alex. What does a study about police body cameras mean for a business leader rolling out a new CRM, an AI tool, or any other major tech platform?
Expert: It means everything. The first big takeaway is that successful implementation is a process, not a purchase. You can't just buy the "best" software and expect magic. You have to manage each phase.
Host: And what about that link between the tech and the policies?
Expert: That’s the second key takeaway. You must align what the study calls the ‘material components’—the tech itself—with the ‘expressive components,’ which are your policies, training, and culture. A new sales tool is useless if the sales team isn't trained on it or if compensation plans don't encourage its use. The technology and the human systems must be designed together.
Host: So it's a continuous process of alignment.
Expert: Exactly, which leads to the third point: success is not a one-time event. The study's model shows that outcomes emerge over time and often require tweaks and course correction. The departments that failed couldn't adapt to the problems of poor equipment or bad policy. A successful business needs to build in feedback loops to learn and adjust as they go.
Host: So to summarize: implementing new technology isn't about the tech alone. It's a complex, multi-phase process that requires a deep alignment between the tools you choose and the rules, training, and people who use them. And you have to be ready to adapt along the way.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights. Join us next time as we continue to explore the ideas shaping our world.
IT implementation, Assemblage theory, body-worn camera, organizational change, police technology, process model
SUPPORTING COMMUNITY FIRST RESPONDERS IN AGING IN PLACE: AN ACTION DESIGN FOR A COMMUNITY-BASED SMART ACTIVITY MONITORING SYSTEM
Carmen Leong, Carol Hsu, Nadee Goonawardene, Hwee-Pink Tan
This study details the development of a smart activity monitoring system designed to help elderly individuals live independently at home. Using a three-year action design research approach, the authors deployed a sensor-based system in a community setting to understand how best to support community first responders—such as neighbors and volunteers—who lack professional healthcare training.
Problem
As the global population ages, more elderly individuals wish to remain in their own homes, but this raises safety concerns like falls or medical emergencies going unnoticed. This study addresses the specific challenge of designing monitoring systems that provide remote, non-professional first responders with the right information (situational awareness) to accurately assess an emergency alert and respond effectively.
Outcome
- Technology adaptation alone is insufficient; the system design must also encourage the elderly person to adapt their behavior, such as carrying a beacon when leaving home, to ensure data accuracy. - Instead of relying on simple automated alerts, the system should provide responders with contextual information, like usual sleep times or last known activity, to support human-based assessment and reduce false alarms. - To support teams of responders, the system must integrate communication channels, allowing all actions and updates related to an alert to be logged in a single, closed-loop thread for better coordination. - Long-term activity data can be used for proactive care, helping identify subtle changes in behavior (e.g., deteriorating mobility) that may signal future health risks before an acute emergency occurs.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we're diving into a topic that affects millions of families worldwide: helping our elderly loved ones live safely and independently in their own homes. Host: We’ll be exploring a fascinating study titled "SUPPORTING COMMUNITY FIRST RESPONDERS IN AGING IN PLACE: AN ACTION DESIGN FOR A COMMUNITY-BASED SMART ACTIVITY MONITORING SYSTEM". Host: To help us unpack this is our analyst, Alex Ian Sutherland. Alex, welcome to the show. Expert: Thanks for having me, Anna. Host: So, Alex, this study details the development of a smart activity monitoring system. In simple terms, what's it all about? Expert: It’s about using simple, in-home sensors not just for the elderly person, but specifically to support the friends, neighbors, and volunteers—the community first responders—who check in on them. These are people with big hearts, but no formal medical training. Host: That’s a crucial distinction. Let's start with the big problem this study is trying to solve. Expert: The problem is a global one. We have an aging population, and the vast majority of seniors want to 'age in place'—to stay in their own homes. But this creates a safety concern. A fall or a sudden medical issue could go unnoticed for hours, or even days. Host: That’s a terrifying thought for any family. Expert: Exactly. The challenge this study tackles is how to give those community responders the right information, at the right time, so they can effectively help without being overwhelmed. The initial systems they looked at had major issues. Host: What kind of issues? Expert: Three big ones. First, unreliable data. A sensor might be in the wrong place and miss activity. Second, a massive number of false alarms. An alert would be triggered if someone was just napping or sitting quietly, leading to what we call 'alarm fatigue'. Host: And the third? Expert: Fragmented communication. A responder might get an SMS alert, then have to jump over to a WhatsApp group to discuss it with other volunteers. It was confusing and inefficient, especially in an emergency. Host: So how did the researchers approach such a complex, human-centered problem? Expert: They used a method called action design research. It’s very hands-on. They didn't just design a system in a lab; they deployed it in a real community in Singapore for three years. Expert: They would release a version of the system, get direct feedback from the elderly residents and the volunteer responders, see what worked and what didn't, and then use that feedback to build a better version. They went through several of these iterative cycles. Host: So they were learning and adapting in the real world. What were some of the key findings that came out of this process? Expert: The first finding was a bit counterintuitive. It’s not just about adapting the technology to the person; the person also has to adapt to the technology. Host: What do you mean? Expert: Well, a door sensor is great for knowing if someone has left the house. But if the person just pops next door to a neighbor's and leaves their own door open, the system incorrectly assumes they're still home. This could lead to a false inactivity alarm later. Expert: The solution was a partnership. They introduced a small, portable beacon the resident could carry when they left home. The user’s small behavioral change made the whole system much more accurate. 
Host: It's a two-way street. That makes sense. What else did they find? Expert: The second major finding was that context is more valuable than just an alert. A simple message saying "Inactivity Detected" is stressful and not very helpful. Expert: So they redesigned the alerts to include context. For example, an alert might say: "Inactivity alert for Mrs. Tan. Last activity was in the bedroom at 10:15 PM. Her usual sleep time is 10 PM to 7 AM." Host: Ah, so the responder can make a much more informed judgment call. It's likely she's just asleep, not in distress. Expert: Precisely. It empowers human decision-making and dramatically cuts down on false alarms. Host: And you mentioned these responders often work in teams. How did the system evolve to support them? Expert: This was the third key finding: the need for integrated, closed-loop communication. They moved all communication into a single platform where each alert automatically created its own dedicated conversation thread. Expert: Everyone on the team could see the alert, see who claimed it, and follow all the updates in one place. Once the situation was resolved, the thread was closed. It made coordination seamless. Host: It sounds like they also uncovered an opportunity beyond just reacting to emergencies. Expert: They did. The final insight was about shifting from reactive to proactive care. Over months, the system collects a lot of data on daily routines. By visualizing this data, responders could spot subtle changes. Expert: For example, a gradual decrease in movement or more frequent nighttime trips to the bathroom could be early indicators of a developing health issue. This allows for proactive intervention before an acute emergency ever occurs. Host: This is incredibly insightful. So, Alex, let's get to the bottom line. Why does this matter for businesses, especially those in the tech or healthcare space? Expert: There are a few critical takeaways. First is the principle of human-centric design. For any IoT or health-tech product, you have to design for the entire system—the device, the user, and their social environment. User adaptation should be seen as a feature to be designed for, not a bug. Host: So it's about the whole experience, not just the gadget. Expert: Right. Second, data is for insight, not just alarms. The business value isn't in creating the loudest alarm; it's in providing rich, contextual information that augments human intelligence. Help your user make a better decision. Host: What about the business model itself? Expert: This study points towards a "Care-as-a-Service" model. It's not just about selling sensors. It's about providing a platform that enables an ecosystem of care, connecting individuals, community organizations, and volunteers. There are opportunities in platform management and data analytics. Expert: And finally, the biggest opportunity is the shift to preventative health. The future of this multi-billion dollar 'aging in place' market isn’t just emergency buttons. It’s using long-term data to predict and prevent health crises before they happen. That’s the frontier. Host: Fantastic. So, to recap: true innovation in this space means creating a partnership between the user and the technology, providing context to empower human judgment, building platforms that support care teams, and using data to shift from reaction to prevention. Host: Alex, thank you so much for breaking down this complex topic into such clear, actionable insights. Expert: My pleasure, Anna. 
Host: And a big thank you to our audience for tuning in. Join us next time on A.I.S. Insights, powered by Living Knowledge.
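The contextual-alert idea discussed above lends itself to a short illustration. The following Python sketch composes an alert from a last sensor reading and a resident's routine profile; the field names, data values, and message wording are assumptions made for illustration and do not reflect the actual system built in the study.

```python
from datetime import datetime, time

# Illustrative routine profile and sensor reading (assumed fields, not the study's schema).
routine = {
    "name": "Mrs. Tan",
    "usual_sleep": (time(22, 0), time(7, 0)),  # 10 PM to 7 AM
}
last_activity = {"room": "bedroom", "timestamp": datetime(2024, 5, 3, 22, 15)}


def within_sleep_window(moment, sleep_window):
    """Return True if a timestamp falls inside an overnight sleep window."""
    start, end = sleep_window
    t = moment.time()
    return t >= start or t <= end  # the window crosses midnight


def build_contextual_alert(now, routine, last_activity):
    """Compose an alert with context rather than a bare 'Inactivity Detected' message."""
    message = (
        f"Inactivity alert for {routine['name']}. "
        f"Last activity was in the {last_activity['room']} at "
        f"{last_activity['timestamp']:%I:%M %p}. "
        f"Usual sleep time is {routine['usual_sleep'][0]:%I %p} "
        f"to {routine['usual_sleep'][1]:%I %p}."
    )
    if within_sleep_window(now, routine["usual_sleep"]):
        message += " This falls within the usual sleep window, so distress is less likely."
    return message


print(build_contextual_alert(datetime(2024, 5, 4, 2, 30), routine, last_activity))
```

A responder receiving a message like this can make the judgment call the study describes: the resident is most likely asleep, not in distress.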
Activity monitoring systems, community-based model, elderly care, situational awareness, IoT, sensor-based monitoring systems, action design research
What it takes to control AI by design: human learning
Dov Te'eni, Inbal Yahav, David Schwartz
This study proposes a robust framework, based on systems theory, for maintaining meaningful human control over complex human-AI systems. The framework emphasizes the importance of continual human learning to parallel advancements in machine learning, operating through two distinct modes: a stable mode for efficient operation and an adaptive mode for learning. The authors demonstrate this concept with a method called reciprocal human-machine learning applied to a critical text classification system.
Problem
Traditional methods for control and oversight are insufficient for the complexity of modern AI technologies, creating a gap in ensuring that critical AI systems remain aligned with human values and goals. As AI becomes more autonomous and operates in volatile environments, there is an urgent need for a new approach to design systems that allow humans to effectively stay in control and adapt to changing circumstances.
Outcome
- The study introduces a framework for human control over AI that operates at multiple levels and in two modes: stable and adaptive. - Effective control requires continual human learning to match the pace of machine learning, ensuring humans can stay 'in the loop' and 'in control'. - A method called 'reciprocal human-machine learning' is presented, where humans and AI learn from each other's feedback in an adaptive mode. - This approach results in high-performance AI systems that are unbiased and aligned with human values. - The framework provides a model for designing control in critical AI systems that operate in dynamic environments.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I'm your host, Anna Ivy Summers. Host: Today, we’re diving into a critical question for any organization using artificial intelligence: How do we actually stay in control? We'll be discussing a fascinating study titled, "What it takes to control AI by design: human learning." Host: It proposes a new framework for maintaining meaningful human control over complex AI systems, emphasizing that for AI to learn, humans must learn right alongside it. Here to break it all down for us is our analyst, Alex Ian Sutherland. Welcome, Alex. Expert: Thanks for having me, Anna. It’s a crucial topic. Host: Absolutely. So, Alex, let's start with the big picture. What is the real-world problem this study is trying to solve? Expert: The problem is that AI is evolving much faster than our methods for managing it. Think about critical systems in finance, cybersecurity, or logistics. We use AI to make high-stakes decisions at incredible speed. Expert: But our traditional methods of oversight, where a person just checks the final output, are no longer enough. As the study points out, AI can alter its behavior or generate unexpected results when it encounters new situations, creating a huge risk that it no longer aligns with our original goals. Host: So there's a growing gap between the AI's capability and our ability to control it. How did the researchers approach this challenge? Expert: They took a step back and used systems theory. Instead of seeing the human and the AI as separate, they designed a single, integrated system that operates in two distinct modes. Expert: First, there's the 'stable mode'. This is when the AI is working efficiently on its own, handling routine tasks based on what it already knows. Think of it as the AI on a well-defined autopilot. Expert: But when the environment changes or the AI's confidence drops, the system shifts into an 'adaptive mode'. This is a collaborative learning session, where the human expert and the AI work together to make sense of the new situation. Host: That’s a really clear way to put it. What were the main findings that came out of this two-mode approach? Expert: The first key finding is that this dual-mode structure is essential. You get the efficiency of automation in the stable mode, but you have a built-in, structured way to adapt and learn when faced with uncertainty. Host: And I imagine the human is central to that adaptive mode. Expert: Exactly. And that’s the second major finding: for this to work, human learning must keep pace with machine learning. To stay in control, the human expert can't be a passive observer. They must be actively learning and updating their own understanding of the environment. Host: That turns the typical human-in-the-loop idea on its head a bit. Expert: It does. Which leads to the third and most interesting finding, a method they call 'reciprocal human-machine learning'. In the adaptive mode, it’s not just the human teaching the machine. The AI provides specific feedback to the human expert, pointing out patterns or inconsistencies they might have missed. Expert: So, the human and the AI are actively learning from each other. This reciprocal feedback loop ensures the entire system gets smarter, performs better, and stays aligned with human values, preventing things like algorithmic bias from creeping in. Host: A true partnership. This is where it gets really interesting for our listeners. 
Alex, why does this matter for business? What are the practical takeaways? Expert: This framework is a roadmap for de-risking advanced AI applications. For any business using AI in critical functions, this is a way to ensure safety, accountability, and alignment with company ethics. It's about moving from a "black box" to a controllable, transparent system. Expert: Second, it's about building institutional knowledge. By keeping humans actively engaged in the learning process, you're not just improving the AI; you're upskilling your employees. They develop a deeper expertise that makes your entire operation more resilient and adaptable. Expert: And finally, that adaptability is a huge competitive advantage. A business with a human-AI system that can learn and respond to market shifts, new cyber threats, or supply chain disruptions will outperform one with a rigid, static AI every time. Host: So to recap: traditional AI oversight is failing. This study presents a powerful framework where a human-AI system operates in a stable mode for efficiency and an adaptive mode for learning. Host: The key is that this learning must be reciprocal—a two-way street where both human and machine get smarter together, ensuring the AI remains a powerful, controllable, and trusted tool for the business. Host: Alex, thank you so much for these valuable insights. Expert: My pleasure, Anna. Host: And thank you to our audience for tuning into A.I.S. Insights. Join us next time as we continue to explore the ideas shaping our world.
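To make the stable/adaptive distinction more tangible, here is a minimal Python sketch of a two-mode loop around a toy text classifier. The confidence threshold, the toy model, and the simulated expert are illustrative assumptions; the study's reciprocal human-machine learning method is considerably richer than this.

```python
import random

CONFIDENCE_THRESHOLD = 0.8  # assumed cutoff for switching modes; the study prescribes no value


class ToyTextClassifier:
    """Stand-in for the AI component: returns a label and a confidence score."""

    def __init__(self):
        self.keyword_labels = {"urgent": "high_risk"}

    def predict(self, text):
        for keyword, label in self.keyword_labels.items():
            if keyword in text.lower():
                return label, 0.95
        return "low_risk", random.uniform(0.4, 0.9)

    def update(self, text, corrected_label):
        # Human-to-machine feedback: remember the first word as a new cue (a toy rule).
        self.keyword_labels[text.split()[0].lower()] = corrected_label


def ask_expert(text, machine_label, machine_confidence):
    """Machine-to-human feedback: surface the model's view, then collect the expert's label.
    The 'expert' is simulated here; in practice this is an interactive review step."""
    print(f"Review: '{text}' | model says {machine_label} ({machine_confidence:.2f})")
    return "high_risk" if "breach" in text.lower() else machine_label


def process(stream, model):
    for text in stream:
        label, confidence = model.predict(text)
        if confidence >= CONFIDENCE_THRESHOLD:
            yield text, label  # stable mode: automated and efficient
        else:
            corrected = ask_expert(text, label, confidence)  # adaptive mode: joint sensemaking
            model.update(text, corrected)                    # reciprocal learning
            yield text, corrected


messages = ["Urgent: server outage", "Possible data breach reported", "Lunch menu update"]
for item, label in process(messages, ToyTextClassifier()):
    print(item, "->", label)
```

The point of the sketch is the control structure: routine items stay in the stable mode, while low-confidence items trigger the adaptive mode in which both the human and the machine update what they know.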
Balancing fear and confidence: A strategic approach to mitigating human risk in cybersecurity
Dennis F. Galletta, Gregory D. Moody, Paul Benjamin Lowry, Robert Willison, Scott Boss, Yan Chen, Xin “Robert” Luo, Daniel Pienta, Peter Polak, Sebastian Schuetze, and Jason Thatcher
This study explores how to improve cybersecurity by focusing on the human element. Based on interviews with C-level executives and prior experimental research, the paper proposes a strategy for communicating cyber threats that balances making employees aware of the dangers (fear) with building their confidence (efficacy) to handle those threats effectively.
Problem
Despite advanced security technology, costly data breaches continue to rise because human error remains the weakest link. Traditional cybersecurity training and policies have proven ineffective, indicating a need for a new strategic approach to manage human risk.
Outcome
- Human behavior is the primary vulnerability in cybersecurity, and conventional training programs are often insufficient to address this risk. - Managers must strike a careful balance in their security communications: instilling a healthy awareness of threats ('survival fear') without causing excessive panic or anxiety, which can be counterproductive. - Building employees' confidence ('efficacy') in their ability to identify and respond to threats is just as crucial as making them aware of the dangers. - Effective tools for changing behavior include interactive methods like phishing simulations that provide immediate feedback, gamification, and fostering a culture where security is a shared responsibility. - The most effective approach is to empower users by providing them with clear, simple tools and the knowledge to act, rather than simply punishing mistakes or overwhelming them with fear.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we’re looking at a critical issue that costs businesses billions: cybersecurity. But we're not talking about firewalls and encryption; we’re talking about people. Host: We're diving into a fascinating new study titled "Balancing fear and confidence: A strategic approach to mitigating human risk in cybersecurity." It proposes a new strategy for communicating cyber threats, one that balances making employees aware of dangers with building their confidence to handle them. Host: Here to break it down for us is our analyst, Alex Ian Sutherland. Alex, welcome. Expert: Great to be here, Anna. Host: So, Alex, let's start with the big picture. We invest so much in security technology, yet we keep hearing about massive, costly data breaches. What's the core problem this study addresses? Expert: The core problem is that despite all our advanced tech, the human element remains the weakest link. The study highlights that data breaches are not only increasing, they’re getting more expensive, averaging nearly 9.5 million dollars per incident in 2023. Host: Nine and a half million dollars. That’s staggering. Expert: It is. And the research points out that about 90% of all data breaches result from internal causes like simple employee error or negligence. So, the traditional approach—annual training videos and dense policy documents—clearly isn't working. We need a strategic shift. Host: So how did the researchers approach this? It sounds like a complex human problem. Expert: It is, and they took a very practical approach. They combined findings from their own prior experiments on how people react to threats with a series of in-depth interviews. They spoke directly with ten C-level executives—CISOs and CIOs—from major companies in healthcare, retail, and manufacturing. Host: So, this isn't just theory. They went looking for a reality check from leaders on the front lines. Expert: Exactly. They wanted to know what actually works in the real world when it comes to motivating employees to be more secure. Host: Let’s get to their findings. What was the most significant discovery? Expert: The biggest takeaway is the need for a delicate balance. Managers need to instill what the study calls a healthy 'survival fear'—an awareness of real threats—without causing panic or anxiety, which just makes people shut down. Host: 'Survival fear' is an interesting term. Can you explain that a bit more? Expert: Think of it like teaching a child not to touch a hot stove. You want them to have a healthy respect for the danger, not to be terrified of the kitchen. One executive described it as an "inverted U" relationship: too little fear leads to complacency, but too much leads to paralysis where employees are too scared to do their jobs. Host: So you make them aware of the threat, but then what? You can’t just leave them feeling anxious. Expert: And that’s the other half of the equation: building their confidence, or what the study calls 'efficacy.' It’s just as crucial to empower employees with the belief that they can actually identify and respond to a threat. Fear gets their attention, but confidence is what drives the right action. Host: What did the study find were the most effective tools for building that confidence? Expert: The executives universally praised interactive methods over passive ones. The most effective tool by far was phishing simulations. These are fake phishing emails sent to employees. 
When someone clicks, they get immediate, private feedback explaining what they missed. It's a safe way to learn from mistakes. Host: It sounds much more engaging than a PowerPoint presentation. Expert: Absolutely. Gamification, like leaderboards for spotting threats, also works well. The key is moving away from a culture of punishment and toward a culture of shared responsibility, where reporting a suspicious email is seen as a positive, helpful action. Host: This is the critical part for our listeners. Alex, what are the practical takeaways for a business leader who wants to strengthen their company's human firewall? Expert: There are three key actions. First, reframe your communication. Stop leading with fear and punishment. Instead, focus on empowerment. The goal is to instill that healthy ‘survival fear’ about the consequences, but immediately follow it with simple, clear actions employees can take to protect themselves and the company. Host: So, it's not "don't do this," but "here's how you can be a hero." Expert: Precisely. The second takeaway is to make security easy. The executives pointed to the success of simple tools, like a "report this email" button that takes just one click. If security is inconvenient, people will find ways around it. Remove the friction from doing the right thing. Host: And the third action? Expert: Make your training relevant and continuous. Ditch the generic, annual "check-the-box" training that employees just play in the background. Use those phishing simulations, create short, engaging content, and tailor it to different teams. The threats are constantly evolving, so your training has to as well. Host: So, to summarize, it seems the old model of just telling employees the rules is broken. Host: The new approach is a delicate balance: make people aware of the risks, but immediately empower them with the confidence and the simple tools they need to become an active line of defense. It's about culture, not just controls. Host: Alex, this has been incredibly insightful. Thank you for making this complex topic so clear. Expert: My pleasure, Anna. Host: And thanks to all of you for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we translate another key piece of research into actionable business strategy.
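The phishing-simulation mechanism the executives praised can be sketched in a few lines. The following Python example is a hedged illustration only: the fields, feedback wording, and scoring are assumptions, not any particular simulation tool referenced in the study.

```python
from dataclasses import dataclass, field
from datetime import datetime


@dataclass
class SimulatedPhish:
    """One simulated phishing email in a training campaign (illustrative fields)."""
    recipient: str
    lure: str
    telltale_signs: list
    events: list = field(default_factory=list)


def on_click(phish: SimulatedPhish) -> str:
    """Immediate, private feedback when the employee clicks the simulated link."""
    phish.events.append((datetime.now(), "clicked"))
    return (
        "This was a simulated phishing email. Signs you could have spotted: "
        + "; ".join(phish.telltale_signs)
        + ". No harm done - use the one-click 'report email' button next time."
    )


def on_report(phish: SimulatedPhish) -> str:
    """Positive reinforcement when the employee reports the email instead."""
    phish.events.append((datetime.now(), "reported"))
    return "Well spotted - thank you for reporting. Your team's score has been updated."


phish = SimulatedPhish(
    recipient="employee@example.com",
    lure="Your password expires today - verify now",
    telltale_signs=["mismatched sender domain", "urgent tone", "link hidden behind a button"],
)
print(on_click(phish))
```

The design choice mirrors the study's emphasis: feedback is immediate and instructive rather than punitive, and reporting is rewarded.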
Cybersecurity, Human Risk, Fear Appeals, Security Awareness, User Actions, Management Interventions, Data Breaches
Design Knowledge for Virtual Learning Companions from a Value-centered Perspective
Ricarda Schlimbach, Bijan Khosrawi-Rad, Tim C. Lange, Timo Strohmann, Susanne Robra-Bissantz
This study develops design principles for Virtual Learning Companions (VLCs), which are AI-powered chatbots designed to help students with motivation and time management. Using a design science research approach, the authors conducted interviews and workshops and built and tested several prototypes with students. The research aims to create a framework for designing VLCs that not only provide functional support but also build a supportive, companion-like relationship with the learner.
Problem
Working students in higher education often struggle to balance their studies with their jobs, leading to challenges with motivation and time management. While conversational AI tools like ChatGPT are becoming common, they often lack the element of companionship and a holistic approach to learning support. This research addresses the gap in how to design AI learning tools that effectively integrate motivation, time management, and relationship-building from a user-value-centered perspective.
Outcome
- The study produced a comprehensive framework for designing Virtual Learning Companions (VLCs), resulting in 9 design principles, 28 meta-requirements, and 33 design features. - The findings are structured around a “value-in-interaction” model, which proposes that a VLC's value is created across three interconnected layers: the Relationship Layer, the Matching Layer, and the Service Layer. - Key design principles include creating a human-like and adaptive companion, enabling proactive and reactive behavior, building a trustworthy relationship, providing supportive content, and fostering a motivational and ethical learning environment. - Evaluation of a coded prototype revealed that different student groups have different preferences, emphasizing that VLCs must be adaptable to their specific educational context and user needs to be effective.
Host: Welcome to A.I.S. Insights, the podcast where we connect academic research to real-world business strategy, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re exploring a topic that’s becoming increasingly relevant in our AI-driven world: how to make our digital tools not just smarter, but more supportive. We’re diving into a study titled "Design Knowledge for Virtual Learning Companions from a Value-centered Perspective".
Host: In simple terms, it's about creating AI-powered chatbots that act as true companions, helping students with the very human challenges of motivation and time management. Here to break it all down for us is our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna. It’s a fascinating study with huge implications.
Host: Let's start with the big picture. What is the real-world problem that this study is trying to solve?
Expert: Well, think about anyone trying to learn something new while juggling a job and a personal life. It could be a university student working part-time or an employee trying to upskill. The biggest hurdles often aren't the course materials themselves, but staying motivated and managing time effectively.
Host: That’s a struggle many of our listeners can probably relate to.
Expert: Exactly. And while we have powerful AI tools like ChatGPT that can answer questions, they function like a know-it-all tutor. They provide information, but they don't provide companionship. They don't check in on you, encourage you when you're struggling, or help you plan your week. This study addresses that gap.
Host: So it's about making AI more of a partner than just a tool. How did the researchers go about figuring out how to build something like that?
Expert: They used a very hands-on approach called design science research. Instead of just theorizing, they went through multiple cycles of building and testing. They started by conducting in-depth interviews with working students to understand their real needs. Then, they held workshops, designed a couple of conceptual prototypes, and eventually built and coded a fully functional AI companion that they tested with different student groups.
Host: So it’s a methodology that’s really grounded in user feedback. What were the key findings? What did they learn from all this?
Expert: The main outcome is a powerful framework for designing these Virtual Learning Companions, or VLCs. The big idea is that the companion's value is created through the interaction itself, which they break down into three distinct but connected layers.
Host: Three layers. Can you walk us through them?
Expert: Of course. First is the Relationship Layer. This is all about creating a human-like, trustworthy companion. The AI should be able to show empathy, maybe use a bit of humor, and build a sense of connection with the user over time. It’s the foundation.
Host: Okay, so it’s about the personality and the bond. What's next?
Expert: The second is the Matching Layer. This is about adaptation and personalization. The study found that a one-size-fits-all approach fails. The VLC needs to adapt to the user's individual learning style, their personality, and even their current mood or context.
Host: And the third layer?
Expert: That's the Service Layer. This is where the more functional support comes in. It includes features for time management, like creating to-do lists and setting reminders, as well as providing supportive learning content and creating a motivational environment, perhaps with gentle nudges or rewards.
Host: This all sounds great in theory, but did they see it work in practice?
Expert: They did, and they also uncovered a critical insight. When they tested their prototype, they found that full-time university students thought the AI’s language was too informal and colloquial. But a group of working professionals in a continuing education program found the exact same AI to be too formal!
Host: Wow, that’s a direct confirmation of what you said about the Matching Layer. The companion has to be adaptable.
Expert: Precisely. It proves that to be effective, these tools must be tailored to their specific audience and context.
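To make the three-layer idea concrete, here is a minimal Python sketch of a single companion turn. The layer names come from the study; the profile fields, messages, and the formal-versus-informal adaptation rule are illustrative assumptions, not the prototype the authors built.

```python
from dataclasses import dataclass


@dataclass
class LearnerProfile:
    """Input to the Matching Layer: who the learner is and how they prefer to be addressed."""
    name: str
    audience: str        # e.g. "full_time_student" or "working_professional"
    next_deadline: str


def relationship_layer(profile: LearnerProfile) -> str:
    """Relationship Layer: open with an empathetic, companion-like check-in."""
    return f"Hi {profile.name}, good to see you again. How has your week been?"


def matching_layer(profile: LearnerProfile, message: str) -> str:
    """Matching Layer: adapt the tone to the audience (the study found that different
    user groups reacted differently to the same level of formality)."""
    if profile.audience == "working_professional":
        return message.replace("Hi", "Hello").replace("good to see you again", "welcome back")
    return message


def service_layer(profile: LearnerProfile) -> str:
    """Service Layer: functional support such as reminders, planning, and nudges."""
    return f"Reminder: your next deadline is {profile.next_deadline}. Shall we plan two study slots this week?"


def companion_turn(profile: LearnerProfile) -> str:
    greeting = matching_layer(profile, relationship_layer(profile))
    return greeting + "\n" + service_layer(profile)


print(companion_turn(LearnerProfile("Mara", "working_professional", "Friday, 5 PM")))
```

Keeping the layers as separate functions is one way to let the Matching Layer be tuned per deployment context without touching the relationship or service logic.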
Host: Alex, this is the crucial part for our audience. Why does this matter for business? What are the practical takeaways?
Expert: The implications are huge, Anna, and they go way beyond the classroom. Think about corporate training and HR. Imagine a new employee getting an AI companion that doesn't just teach them software systems, but helps them manage the stress of their first month and checks in on their progress and motivation. That could have a massive impact on engagement and retention.
Host: I can see that. It’s a much more holistic approach to onboarding. Where else?
Expert: For any EdTech company, this framework is a blueprint for building more effective and engaging products. It's about moving from simple content delivery to creating a supportive learning ecosystem. But you can also apply these principles to customer-facing bots. An AI that can build a relationship and adapt to a customer's technical skill or frustration level will provide far better service and build long-term loyalty.
Host: So the key business takeaway is to shift our thinking.
Expert: Exactly. The value of AI in these roles isn't just in the functional task it completes, but in the supportive, adaptive relationship it builds with the user. It’s the difference between an automated tool and a true digital partner.
Host: A fantastic insight. So, to summarize: today's professionals face real challenges with motivation and time management. This study gives us a three-layer framework—Relationship, Matching, and Service—to build AI companions that truly help. For businesses, this opens up new possibilities in corporate training, EdTech, and even customer relations.
Host: Alex, thank you so much for translating this complex study into such clear, actionable insights.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in. This has been A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more valuable knowledge for your business.
Conversational Agent, Education, Virtual Learning Companion, Design Knowledge, Value
REGULATING EMERGING TECHNOLOGIES: PROSPECTIVE SENSEMAKING THROUGH ABSTRACTION AND ELABORATION
Stefan Seidel, Christoph J. Frick, Jan vom Brocke
This study examines how various actors, including legal experts, government officials, and industry leaders, collaborated to create laws for new technologies like blockchain. Through a case study in Liechtenstein, it analyzes the process of developing a law on "trustworthy technology," focusing on how the participants collectively made sense of a complex and evolving subject to construct a new regulatory framework.
Problem
Governments face a significant challenge in regulating emerging digital technologies. They must create rules that prevent harmful effects and protect users without stifling innovation. This is particularly difficult when the full potential and risks of a new technology are not yet clear, creating regulatory gaps and uncertainty for businesses.
Outcome
- Creating effective regulation for new technologies is a process of 'collective prospective sensemaking,' where diverse stakeholders build a shared understanding over time. - This process relies on two interrelated activities: 'abstraction' and 'elaboration'. Abstraction involves generalizing the essential properties of a technology to create flexible, technology-neutral rules that encourage innovation. - Elaboration involves specifying details and requirements to provide legal certainty and protect users. - Through this process, the regulatory target can evolve significantly, as seen in the case study's shift from regulating 'blockchain/cryptocurrency' to a broader, more durable law for the 'token economy' and 'trustworthy technology'.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: On today’s episode, we're diving into the complex world of regulation for new technologies. We’re looking at a study titled "REGULATING EMERGING TECHNOLOGIES: PROSPECTIVE SENSEMAKING THROUGH ABSTRACTION AND ELABORATION". Host: The study examines how a diverse group of people—legal experts, government officials, and industry leaders—came together to create laws for a new technology, using blockchain in Liechtenstein as a case study. Here to help us unpack this is our analyst, Alex Ian Sutherland. Welcome, Alex. Expert: Great to be here, Anna. Host: So Alex, let’s start with the big picture. What is the fundamental problem that governments and businesses face when a new technology like blockchain or A.I. emerges? Expert: It’s a classic case of trying to build the plane while you're flying it. Governments need to create rules to protect users and prevent harm, but they also want to avoid crushing innovation before it even gets off the ground. Host: The dreaded innovation killer. Expert: Exactly. The study highlights that this is incredibly difficult when no one fully understands the technology's potential or its risks. This creates what the authors call a "regulatory gap"—a gray area of uncertainty that can paralyze businesses. They don't know if their new business model is legal, so they hesitate to invest. Host: And how did the researchers in this study go about understanding this process? What was their approach? Expert: They conducted an in-depth case study in the European state of Liechtenstein. They essentially got a front-row seat to the entire law-making process for blockchain technology. Expert: They interviewed everyone involved—from the Prime Minister to tech startup CEOs to the financial regulators. They also analyzed hundreds of documents, including early strategy papers and evolving drafts of the law, to see how the thinking changed over time. Host: It sounds like they had incredible access. So, after all that observation, what were the key findings? What did they discover about how to create good regulation? Expert: The biggest finding is that it's a process of what they call 'collective prospective sensemaking'. That’s a fancy term for getting a diverse group of people in a room to build a shared vision of the future. It’s not about one person having the answer; it’s about creating it together. Host: And the study found this process hinges on two specific activities: 'abstraction' and 'elaboration'. Can you break those down for us? Expert: Of course. Think of 'abstraction' as zooming out. Initially, the group in Liechtenstein was focused on regulating "blockchain" and "cryptocurrency." But they realized that was too specific and would be outdated quickly. Expert: So, they abstracted. They asked, "What is the essential quality of this technology?" They landed on the idea of "trust." This allowed them to create a flexible, technology-neutral rule for any "trustworthy technology," not just blockchain. It future-proofed the law. Host: That’s a brilliant shift. So what about 'elaboration'? Expert: If abstraction is zooming out, 'elaboration' is zooming in. Once they had the big, abstract concept—trustworthy technology—they had to add the specific details. Expert: This meant defining roles, specifying requirements for service providers, and creating rules that would give businesses legal certainty and actually protect users. 
It's the process of giving the abstract idea real-world teeth. Host: So the target itself evolved dramatically through this process. Expert: It really did. They went from a narrow law about cryptocurrency to a broad, durable framework for what they called the "token economy." This was only possible because of that constant dance between the big-picture abstraction and the fine-detail elaboration. Host: This is fascinating, Alex, but let's get to the bottom line. Why does this study matter for business leaders listening right now, even if they aren't in the crypto space? Expert: This is the most crucial part. The study offers a powerful blueprint for how businesses should approach regulation for any emerging technology, whether it's A.I., quantum computing, or synthetic biology. Expert: The first takeaway is proactive engagement. Don't wait for regulation to happen *to* you. The industry leaders in this study who participated in the process helped shape a more innovation-friendly law. By being at the table, you can influence the outcome. Host: So get involved early and often. What else? Expert: Second, understand the power of language. The breakthrough in Liechtenstein happened when they shifted the conversation from a specific technology, blockchain, to a desired outcome, which was trust. For businesses, this is a key strategy: frame the conversation with regulators around the value you create, not just the tech you use. Host: It’s a narrative strategy, really. Expert: Precisely. And finally, this model provides predictability. The process of abstraction and elaboration creates a stable yet flexible framework. For businesses, that kind of regulatory environment is gold. It reduces uncertainty and gives you the confidence to invest and innovate for the long term. This is the path to avoiding that "gray space" we talked about earlier. Host: So to sum up, regulating new technology isn’t a top-down mandate; it's a collaborative journey. The key is to balance flexible, high-level principles with clear, specific rules. For businesses, the lesson is clear: get a seat at the table and help shape a predictable environment where innovation can thrive. Host: Alex Ian Sutherland, thank you for making such a complex topic so clear. Expert: My pleasure, Anna. Host: And thank you for tuning into A.I.S. Insights — powered by Living Knowledge. Join us next time as we continue to explore the ideas shaping business and technology.