International Conference on Wirtschaftsinformatik (2023)
Building Habits in the Digital Age: Incorporating Psychological Needs and Knowledge from Practitioners to Inform the Design of Digital Therapeutics
Jeannette Stark, Thure Weimann, Felix Reinsch, Emily Hickmann, Maren Kählig, Carola Gißke, and Peggy Richter
This study reviews the psychological requirements for forming habits and analyzes how these requirements are implemented in existing mobile habit-tracking apps. Through a content analysis of 57 applications, the research identifies key design gaps and proposes a set of principles to inform the creation of more effective Digital Therapeutics (DTx) for long-term behavioral change.
Problem
Noncommunicable diseases (NCDs), a leading cause of death, often require sustained lifestyle and behavioral changes. While many digital apps aim to support habit formation, they often fail to facilitate the entire process, particularly the later stages in which a habit becomes automatic and reliance on technology should decrease. This leaves a gap in effective long-term support.
Outcome
- Conventional habit apps primarily support the first two stages of habit formation: deciding on a habit and translating it into an initial behavior.
- Most apps neglect the crucial later stages of habit strengthening, where technology use should be phased out to allow the habit to become truly automatic.
- A conflict of interest was identified, as the commercial need for continuous user engagement in many apps contradicts the goal of making a user's new habit independent of the technology.
- The research proposes specific design principles for Digital Therapeutics (DTx) to better support all four stages of habit formation, offering a pathway for developing more effective tools for NCD prevention and treatment.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge, the podcast where we translate complex research into actionable business strategy. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating study titled "Building Habits in the Digital Age: Incorporating Psychological Needs and Knowledge from Practitioners to Inform the Design of Digital Therapeutics".
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, in a nutshell, what is this study about?
Expert: Hi Anna. This study looks at the psychology behind how we form habits and then analyzes how well current mobile habit-tracking apps actually support that process. It identifies some major design gaps and proposes a new set of principles for creating more effective health apps, known as Digital Therapeutics.
Host: Let's start with the big picture problem. Why is building better habits so critical?
Expert: It's a huge issue. The study highlights that noncommunicable diseases like diabetes and heart disease are the leading cause of death worldwide, and many are directly linked to our daily lifestyle choices.
Host: So things like diet and exercise. And we have countless apps that promise to help us with that.
Expert: We do, and that's the core of the problem this study addresses. While thousands of apps aim to help us build good habits, they often fail to support the entire journey. They're good at getting you started, but they don't help you finish.
Host: What do you mean by "finish"? Isn't habit formation an ongoing thing?
Expert: It is, but the end goal is for the new behavior to become automatic—something you do without thinking. The study finds that current apps often fail in those crucial later stages, where your reliance on technology should actually decrease, not increase.
Host: That's a really interesting point. How did the researchers go about studying this?
Expert: Their approach was very methodical. First, they reviewed psychological research to map out a clear, four-stage model of habit formation. It starts with the decision to act and ends with the habit becoming fully automatic.
Expert: Then, they performed a detailed content analysis of 57 popular habit-tracking apps. They downloaded them, used them, and systematically scored their features against the requirements of those four psychological stages.
Host: And what were the key findings from that analysis?
Expert: The results were striking. The vast majority of apps are heavily focused on the first two stages: deciding on a habit and starting the behavior. They excel at things like daily reminders and tracking streaks.
Host: But they're missing the later stages?
Expert: Almost completely. For example, the study found that not a single one of the 57 apps they analyzed had features to proactively phase out reminders or rewards as a user's habit gets stronger. They keep you hooked on the app's triggers.
Host: Why would that be? It seems counterintuitive to the goal of forming a real habit.
Expert: It is, and that points to the second major finding: a fundamental conflict of interest. The business model for most of these apps relies on continuous user engagement. They need you to keep opening the app every day.
Expert: But the psychological goal of habit formation is for the behavior to become independent of the app. So the app's commercial need is often directly at odds with the user's health goal.
Host: Okay, this is the critical part for our listeners. What does this mean for businesses in the health-tech space? Why does this matter?
Expert: It matters immensely because it reveals a massive opportunity. The study positions this as a blueprint for a more advanced category of apps called Digital Therapeutics, or DTx.
Host: Remind us what those are.
Expert: DTx are essentially "prescription apps"—software that is clinically validated and prescribed by a doctor to treat or prevent a disease. Because they have a clear medical purpose, their goal isn't just engagement; it's a measurable health outcome.
Host: So they can be designed to make themselves obsolete for a particular habit?
Expert: Precisely. A DTx doesn't need to keep a user forever. Its success is measured by the patient getting better. The study provides a roadmap with specific design principles for this, like building in features for "tapered reminding," where notifications fade out over time.
Host: So the business takeaway is to shift the focus from engagement metrics to successful user "graduation"?
Expert: Exactly. For any company in the digital health or wellness space, the future isn't just about keeping users, it's about proving you can create lasting, independent behavioral change. That is a far more powerful value proposition for patients, doctors, and insurance providers.
Host: A fascinating perspective. So, to summarize: today's habit apps get us started but often fail at the finish line due to a conflict between their business model and our psychological needs.
Host: This study, however, provides a clear roadmap for the next generation of Digital Therapeutics to bridge that gap, focusing on clinical outcomes rather than just app usage.
Host: Alex, thank you for making that so clear for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more valuable insights from the world of research.
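The "tapered reminding" principle discussed above lends itself to a concrete illustration. Below is a minimal sketch, assuming a simple habit-strength proxy (the share of tracked opportunities the user completed) and a daily-to-biweekly schedule; both are assumptions made for this example, not design details from the study.

```python
from datetime import timedelta

# Illustrative sketch of "tapered reminding": the study names the principle,
# but this habit-strength proxy and schedule are invented for the example.
BASE_INTERVAL = timedelta(days=1)   # remind daily while the habit is new
MAX_INTERVAL = timedelta(days=14)   # near-silent once the habit is strong

def habit_strength(completions: int, opportunities: int) -> float:
    """Crude proxy for automaticity: share of tracked opportunities completed."""
    return completions / opportunities if opportunities else 0.0

def next_reminder_interval(completions: int, opportunities: int) -> timedelta:
    """Stretch the gap between reminders as the habit strengthens,
    so the app gradually phases itself out as a cue."""
    strength = habit_strength(completions, opportunities)
    return BASE_INTERVAL + (MAX_INTERVAL - BASE_INTERVAL) * strength

# After completing 24 of 30 tracked opportunities, reminders arrive
# roughly every 11 days instead of daily.
print(next_reminder_interval(24, 30))
```

The deliberate design choice here is that the scheduler's success condition is silence: a fully formed habit produces no notifications at all, which is exactly the "graduation" logic the episode describes.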
Behavioral Change, Digital Therapeutics, Habits, Habit Apps, Non-communicable diseases
Journal of the Association for Information Systems (2025)
Layering the Architecture of Digital Product Innovations: Firmware and Adapter Layers
Julian Lehmann, Philipp Hukal, Jan Recker, Sanja Tumbas
This study investigates how organizations integrate digital components into physical products to create layered architectures. Through a multi-year case study of a 3D printer company, it details the process of embedding firmware and creating adapter layers to connect physical hardware with higher-level software functionality.
Problem
As companies increasingly transform physical products into 'smart' digital innovations, they face the complex challenge of effectively integrating digital and physical components. There is a lack of clear understanding of how to structure this integration, which can limit a product's flexibility and potential for future upgrades.
Outcome
- Integrating digital and physical components is a bottom-up process, starting with making hardware controllable via software (a process called parametrizing).
- The study identifies two key techniques for success: 1) parametrizing physical components through firmware, and 2) arranging digital functionality through higher-level adapter layers.
- Creating 'adapter layers' is critical to bridge the gap between static physical components and flexible digital software, enabling them to communicate and work together.
- This layered approach allows companies to innovate and add new features through software updates, enhancing product capabilities without needing to redesign the physical hardware.
Host: Welcome to A.I.S. Insights, the podcast where we connect academic research with real-world business strategy. I’m your host, Anna Ivy Summers. Today, we’re diving into a fascinating challenge: how do you successfully turn a traditional physical product into a smart, digitally-powered innovation?
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: We're discussing a study titled "Layering the Architecture of Digital Product Innovations: Firmware and Adapter Layers." In simple terms, it investigates how companies can effectively integrate digital components, like software, into physical products by creating a layered architecture. They looked at a 3D printer company to see how it’s done in practice.
Host: So Alex, let's start with the big problem. We see companies everywhere trying to make their products 'smart'—from smart toasters to smart cars. But the study suggests this is much harder than it looks. Why is it such a challenge?
Expert: It's a huge challenge because you can't just bolt a computer onto an old product and call it a day. The core issue, as the study on the 3D printer company PrintCo found, is that physical components are often designed in isolation. They aren't built to listen to or interact with digital technologies.
Expert: This creates a fundamental disconnect. Without a clear strategy for integration, a product’s potential is limited. It becomes rigid, difficult to upgrade, and you miss out on the flexibility that software can offer.
Host: So how did the researchers get an inside look at solving this problem? What was their approach?
Expert: They took a really practical approach. They conducted a multi-year case study of this company, PrintCo. They analyzed product documents and internal memos, and conducted interviews over a six-year period as the company evolved its 3D printers.
Expert: This allowed them to see, step-by-step, how PrintCo went from selling a basic, self-assembly kit to a sophisticated, software-integrated machine that could handle incredibly complex tasks. It provided a real-world blueprint for this transformation.
Host: Let's get to that blueprint. What were the key findings? What are the secret ingredients for successfully merging the physical and the digital?
Expert: The study uncovered two critical techniques. The first is what they call ‘parametrizing physical components’.
Host: That sounds a bit technical. What does it mean for a business audience?
Expert: Think of it as teaching the hardware to speak a digital language. You embed firmware—a type of low-level software—directly into the physical parts. This firmware defines parameters that software can control. For example, PrintCo wanted to solve the problem of printed objects warping as they cooled.
Expert: So, they added a heating element to the print bed. That's a physical change. But the key was parametrizing it—creating firmware that allowed higher-level software to precisely set and control the bed's temperature. The physical part was now addressable and controllable by code.
Host: Okay, so step one is making the hardware controllable. What’s the second technique?
Expert: The second is creating what the study calls 'adapter layers'. These are crucial. An adapter layer is essentially a bridge that connects the newly controllable hardware to the user-facing software. It translates complex hardware functions into simple, useful features.
Expert: For instance, PrintCo realized users struggled with the hundreds of settings required to get a perfect print. So they created an adapter layer in their software with preset 'print modes'—like a 'fast mode' or a 'high-quality mode'. Users just click a button, and the adapter layer tells the firmware exactly how to configure the hardware to achieve that result.
Host: So it’s a two-step process: first, teach the hardware to listen to software commands, and second, build a smart translator—an adapter layer—so the software can give meaningful instructions.
Expert: Exactly. And importantly, the study shows this is a bottom-up process. You have to get that foundational firmware layer right before you can build the really powerful software features on top.
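To make the two techniques tangible, here is a minimal sketch in Python (real firmware would be embedded code on the device; the parameter names and print modes are invented for illustration, not taken from PrintCo's actual products):

```python
# Illustrative sketch of the two techniques; all names are invented.

class HeatedBedFirmware:
    """Parametrizing: the physical heating element is exposed to software
    as a settable parameter instead of a fixed, hard-wired behavior."""

    def __init__(self) -> None:
        self.bed_temperature_c = 0.0

    def set_bed_temperature(self, celsius: float) -> None:
        # In real firmware this would drive the heating element;
        # here we just record the commanded setpoint.
        self.bed_temperature_c = celsius


class PrintModeAdapter:
    """Adapter layer: translates user-facing modes into concrete
    firmware parameter settings, hiding hardware detail from the user."""

    MODES = {
        "fast": {"bed_temperature_c": 55.0},
        "high_quality": {"bed_temperature_c": 70.0},
    }

    def __init__(self, firmware: HeatedBedFirmware) -> None:
        self.firmware = firmware

    def apply_mode(self, mode: str) -> None:
        # One button click fans out into low-level parameter writes.
        settings = self.MODES[mode]
        self.firmware.set_bed_temperature(settings["bed_temperature_c"])


# The user clicks one button; the adapter configures the hardware.
printer = HeatedBedFirmware()
PrintModeAdapter(printer).apply_mode("high_quality")
print(printer.bed_temperature_c)  # 70.0
```

Note the bottom-up dependency the Expert describes: the adapter can only offer a "high_quality" mode because the firmware first made temperature an addressable parameter.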
Host: This is the most important question, Alex. Why does this matter for business? Why should a product manager or a CEO care about firmware and adapter layers?
Expert: Because this architecture is what separates a static product from a dynamic, evolving one. The first major business takeaway is future-proofing. This layered approach allows a company to add new capabilities and enhance performance through software updates, without needing a costly hardware redesign. PrintCo could add support for new materials or improve printing accuracy with a simple software patch.
Host: So it extends the product lifecycle and creates more value over time. What else?
Expert: The second takeaway is that it allows you to turn your product into a platform. By building these clean adapter layers, PrintCo was eventually able to open up its software to third-party developers. They created plug-ins for custom tasks, turning the printer from a closed device into an open ecosystem. That drives immense customer loyalty and engagement.
Host: That’s a powerful shift in strategy.
Expert: It is. And the final takeaway is that this provides a strategic roadmap. For any leader looking to digitize a physical product line, this study shows that the journey must be deliberate. It has to start at the lowest level—at the intersection of hardware and firmware. If you build that foundation correctly, you unlock incredible agility and innovation potential for years to come.
Host: Fantastic insights. So, to wrap up: if you want to successfully transform a physical product, the secret isn't just adding an app. The real work is in architecting the connection from the ground up.
Host: The key steps are to first, ‘parametrize’ your hardware with firmware so it’s digitally controllable. And second, build smart ‘adapter layers’ to bridge that hardware to user-friendly software features. The business payoff is huge: flexible, future-proof products that can evolve into vibrant innovation platforms.
Host: Alex Ian Sutherland, thank you for breaking this down for us.
Expert: My pleasure, Anna.
Host: And thanks to our audience for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we uncover more actionable ideas from the world of research.
Digital Product Innovation, Firmware, Product Architecture, Layering, Embedding, 3D Printing, Case Study
Journal of the Association for Information Systems (2025)
Uncovering the Structural Assurance Mechanisms in Blockchain Technology-Enabled Online Healthcare Mutual Aid Platforms
Zhen Shao, Lin Zhang, Susan A. Brown, Jose Benitez
This study investigates how to build user trust in online healthcare mutual aid platforms that use blockchain technology. Drawing on institutional trust theory, the research examines how policy and technology assurances influence users' intentions and actual usage by conducting a two-part field survey with users of a real-world platform.
Problem
Online healthcare mutual aid platforms, which act as a form of peer-to-peer insurance, struggle with user adoption due to widespread distrust. Frequent incidents of fraud, false claims, and misappropriation of funds have created skepticism, making it a significant challenge to facilitate user trust and ensure the sustainable growth of these platforms.
Outcome
- Both strong institutional policies (policy assurance) and reliable technical features enabled by blockchain (technology assurance) significantly increase users' trust in the platform.
- Higher user trust is directly linked to a greater intention to use the online healthcare mutual aid platform.
- The intention to use the platform positively influences actual usage behaviors, such as the frequency and intensity of use.
- Trust acts as a full mediator, meaning that the platform's assurances build trust, which in turn drives user intention and behavior.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. In a world of digital services, how do you build user trust from the ground up? Today, we're exploring a fascinating study that tackles this very question.
Host: It's titled, "Uncovering the Structural Assurance Mechanisms in Blockchain Technology-Enabled Online Healthcare Mutual Aid Platforms". In short, it's about how to build user trust in new peer-to-peer insurance platforms that are using blockchain technology.
Host: Here to unpack this for us is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: So, let's start with the big picture. What are these online healthcare mutual aid platforms, and why is trust such a huge challenge for them?
Expert: These platforms are essentially a form of peer-to-peer insurance. A group of people joins a digital pool to support each other financially if someone gets sick. It's a great concept, but it has been plagued by a massive trust issue.
Host: What's driving that distrust?
Expert: The study points to frequent and highly public incidents of fraud. We're talking about everything from people making false claims to the outright misappropriation of funds. The researchers highlight news reports where, for example, a person needed about seven thousand yuan for treatment but raised three hundred thousand on a platform and used it for personal expenses.
Host: Wow, that would definitely make me hesitant to contribute.
Expert: Exactly. These incidents create widespread skepticism. In fact, one report cited in the study found that over 70 percent of potential donors harbored distrust for these platforms, which is a huge barrier to adoption and growth.
Host: It's a classic problem for any new marketplace. So how did the researchers go about studying a solution? How do you scientifically measure something like trust?
Expert: They took a very practical approach. They conducted a two-part field survey with over 200 actual users of a real-world platform in China called Xianghubao. In the first phase, they measured the users' perceptions of the platform's safety features and their level of trust.
Expert: Then, six months later, they followed up with those same users to capture their actual usage behavior—how often they were using the platform and which features they engaged with. This allowed them to statistically connect the dots between the platform's design, the user's feeling of trust, and their real-world actions.
Host: A two-part study sounds really thorough. So, Alex, what were the key findings? What actually works to build that trust?
Expert: The study found two critical components. The first is what they call 'policy assurance'. These are the institutional structures—clear rules, contractual guarantees, and transparent legal policies that show the platform is well-governed and accountable.
Expert: The second component is 'technology assurance'. In this case, that means the specific, reliable features enabled by blockchain.
Host: So it's not just about having the latest tech. The company's old-fashioned rules and promises matter just as much.
Expert: Precisely. And both of them were shown to significantly increase users' trust in the platform. That higher trust, in turn, was directly linked to a greater intention to use the platform, which then translated into actual, sustained usage.
Host: The summary of the study mentions that trust acts as a 'full mediator'. What does that mean in simple terms for a business leader?
Expert: It's a really important point. It means that having great policies and secure technology isn't enough on its own. Those features don't directly make people use your service. Their primary function is to build trust. It is that feeling of trust that then drives user behavior. So, for any business, the goal of your safety mechanisms should be to make the user *feel* secure, because that feeling is what actually powers the business.
Host: That's a powerful insight. Trust is the engine, not just a nice-to-have feature. So, let's get to the bottom line. What are the key takeaways for businesses, even those outside of healthcare or blockchain?
Expert: The first takeaway is that you need a two-pronged approach. You can't just rely on cutting-edge technology, and you can't just rely on a good rulebook. The study shows you need both strong policy assurances and strong technology assurances working together.
Host: And how do you make those assurances effective?
Expert: That's the second key takeaway: make them tangible. For policy assurance, this means establishing and clearly communicating your auditing rules, your feedback policies, and any user protections. Don't hide them in the fine print.
Expert: For technology assurance, it means giving users a way to see the security in action. The platform they studied, Xianghubao, uses blockchain to let users view a tamper-proof record of how funds are used for every single claim. This transparency moves the platform from saying "trust us" to showing "here is the proof."
Host: So, the lesson for any business launching a new digital service is to actively demonstrate both your operational integrity through clear policies and your technical security through features the user can actually see and understand.
Expert: Exactly that. It's about building a system where trust is an outcome of transparent design, not a leap of faith.
Host: This is incredibly relevant for so many emerging business models. To recap: building user trust in a skeptical environment requires a combination of strong, clear policies and transparent, verifiable technology. And crucially, these assurances work by building user trust, which is the real engine for adoption and usage.
Host: Alex, thank you for breaking down this complex topic into such clear, actionable insights.
Expert: My pleasure, Anna.
Host: And thanks to our audience for tuning in. Join us next time on A.I.S. Insights.
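For readers who want the "full mediation" idea in concrete terms, here is a minimal numerical sketch using synthetic data and plain least squares in NumPy. It is not the study's survey data or structural model; the coefficients are invented to reproduce the pattern the Expert describes: assurance predicts intention on its own, but once trust is controlled for, the direct effect of assurance disappears.

```python
import numpy as np

# Synthetic illustration of full mediation: assurance -> trust -> intention.
# Invented coefficients, not the study's data or model.
rng = np.random.default_rng(0)
n = 1000
assurance = rng.normal(size=n)
trust = 0.8 * assurance + rng.normal(scale=0.5, size=n)
intention = 0.9 * trust + rng.normal(scale=0.5, size=n)  # no direct assurance path

def slopes(y, predictors):
    """OLS coefficients for y ~ predictors (intercept dropped from output)."""
    X = np.column_stack([np.ones(n)] + predictors)
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    return beta[1:].round(2)

print(slopes(intention, [assurance]))         # ~[0.72]: assurance "works" alone
print(slopes(intention, [assurance, trust]))  # ~[0.0, 0.9]: fully mediated by trust
```

The managerial reading matches the episode's point: assurance mechanisms matter, but only insofar as they move the mediating variable, trust.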
Journal of the Association for Information Systems (2025)
Responsible AI Design: The Authenticity, Control, Transparency Theory
Andrea Rivera, Kaveh Abhari, Bo Xiao
This study explores how to design Artificial Intelligence (AI) responsibly from the perspective of AI designers. Using a grounded theory approach based on interviews with industry professionals, the paper develops the Authenticity, Control, Transparency (ACT) theory as a new framework for creating ethical AI.
Problem
Current guidelines for responsible AI are fragmented and lack a cohesive theory to guide practice, leading to inconsistent outcomes. Existing research often focuses narrowly on specific attributes like algorithms or harm minimization, overlooking the broader design decisions that shape an AI's behavior from its inception.
Outcome
- The study introduces the Authenticity, Control, and Transparency (ACT) theory as a practical framework for responsible AI design.
- It identifies three core mechanisms—authenticity, control, and transparency—that translate ethical design decisions into responsible AI behavior.
- These mechanisms are applied across three key design domains: the AI's architecture, its algorithms, and its functional affordances (capabilities offered to users).
- The theory shifts the focus from merely minimizing harm to also maximizing the benefits of AI, providing a more balanced approach to ethical design.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a foundational topic: how to build Artificial Intelligence responsibly from the ground up. We'll be discussing a fascinating study from the Journal of the Association for Information Systems titled, "Responsible AI Design: The Authenticity, Control, Transparency Theory".
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: So, Alex, let's start with the big picture. We hear a lot about AI ethics and responsible AI, but this study suggests there’s a fundamental problem with how we're approaching it. What's the issue?
Expert: The core problem is fragmentation. Right now, companies get bombarded with dozens of different ethical guidelines, principles, and checklists. It’s like having a hundred different recipes for the same dish, all with slightly different ingredients. It leads to confusion and inconsistent results.
Host: And the study argues this misses the point somehow?
Expert: Exactly. It points out three major misconceptions. First, we treat responsibility like a feature to be checked off a list, rather than a behavior designed into the AI's core. Second, we focus almost exclusively on the algorithm, ignoring the AI’s overall architecture and the actual capabilities it offers to users.
Host: And the third misconception?
Expert: It's that we're obsessed with only minimizing harm. That’s crucial, of course, but it's only half the story. True responsible design should also focus on maximizing the benefits and the value the AI provides.
Host: So how did the researchers get past these misconceptions to find a solution? What was their approach?
Expert: They went directly to the source. They conducted in-depth interviews with 24 professional AI designers—the people actually in the trenches, making the decisions that shape these systems every day. By listening to them, they built a theory from the ground up based on real-world practice, not just abstract ideals.
Host: That sounds incredibly practical. What were the key findings that emerged from those conversations?
Expert: The main outcome is a new framework called the Authenticity, Control, and Transparency theory—or ACT theory for short. It proposes that for an AI to behave responsibly, its design must be guided by these three core mechanisms.
Host: Okay, let's break those down. What do they mean by Authenticity?
Expert: Authenticity means the AI does what it claims to do, reliably and effectively. It’s about ensuring the AI's performance aligns with its intended purpose and ethical values. It has to be dependable and provide genuine utility.
Host: That makes sense. What about Control?
Expert: Control is about empowering users. It means giving people meaningful agency over the AI's behavior and its outputs. This could be anything from customization options to clear data privacy controls, ensuring the user is in the driver's seat.
Host: And the final piece, Transparency?
Expert: Transparency is about making the AI's operations clear and understandable. It’s not just about seeing the code, but understanding how the AI works, why it makes certain decisions, and what its limitations are. It’s the foundation for accountability and trust.
Host: So the ACT theory combines Authenticity, Control, and Transparency. Alex, this is the most important question for our listeners: why does this matter for business? What are the practical takeaways?
Expert: For business leaders, the ACT theory provides a clear, actionable roadmap. It moves responsible AI out of a siloed ethics committee and embeds it directly into the product design lifecycle. It gives your design, engineering, and product teams a shared language to build better AI.
Host: So it's about making responsibility part of the process, not an afterthought?
Expert: Precisely. And that has huge business implications. An AI that is authentic, controllable, and transparent is an AI that customers will trust. And in the digital economy, trust is everything. It drives adoption, enhances brand reputation, and ultimately, creates more valuable and successful products.
Host: It sounds like it’s a framework for building a competitive advantage.
Expert: It absolutely is. By adopting a framework like ACT, businesses aren't just managing risk or preparing for future regulation; they are actively designing better, safer, and more user-centric products that can win in the market.
Host: A powerful insight. To summarize for our listeners: the current approach to responsible AI is often fragmented. This study offers a solution with the ACT theory—a practical framework built on Authenticity, Control, and Transparency that can help businesses build AI that is not only ethical but more trustworthy and valuable.
Host: Alex Ian Sutherland, thank you for breaking this down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights. We'll see you next time.
Responsible AI, AI Ethics, AI Design, Authenticity, Transparency, Control, Algorithmic Accountability
Journal of the Association for Information Systems (2025)
Continuous Contracting in Software Outsourcing: Towards A Configurational Theory
Thomas Huber, Kalle Lyytinen
This study investigates how governance configurations are formed, evolve, and influence outcomes in software outsourcing projects that use continuous contracting. Through a longitudinal, multimethod analysis of 33 governance episodes across three projects, the research identifies how different combinations of contract design and project control achieve alignment and flexibility. The methodology combines thematic analysis with crisp-set qualitative comparative analysis (csQCA) to develop a new theory.
Problem
Contemporary software outsourcing increasingly relies on continuous contracting, where an initial umbrella agreement is followed by periodic contracts. However, there is a significant gap in understanding how managers should combine contract design and project controls to balance the competing needs for project alignment and operational flexibility, and how these choices evolve to impact overall project performance.
Outcome
- Identified eight distinct governance configurations, each consistently linked to specific outcomes of alignment and flexibility.
- Found that project outcomes depend on how governance elements interact within a configuration, either by substituting for each other or compensating for each other's limitations.
- Showed that as trust and knowledge accumulate, managers' governance strategies evolve from simple configurations (achieving either alignment or flexibility) to more sophisticated ones that achieve both simultaneously.
- Concluded that by deliberately evolving governance configurations, managers can better steer projects and enhance overall performance.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. In today's complex business world, outsourcing software development is common, but making it work is anything but simple. Today, we're diving into a fascinating study titled "Continuous Contracting in Software Outsourcing: Towards A Configurational Theory."
Host: It explores how companies can better manage these relationships, not through a single, rigid contract, but as an evolving partnership. With me to break it all down is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: So, Alex, let's start with the big picture. When a company outsources a major software project, what's the core problem this research is trying to solve?
Expert: The central problem is a classic business tension: you need to ensure the project stays on track and meets its goals, which we call 'alignment'. But you also need to be able to adapt to changes and new ideas, which is 'flexibility'.
Host: And traditional contracts aren't great at handling both, are they?
Expert: Exactly. A traditional, iron-clad contract might be good for alignment, but it's too rigid. So, many companies now use 'continuous contracting'—an initial umbrella agreement followed by smaller, periodic contracts or statements of work. The challenge is, there's been very little guidance on how managers should actually combine the contract details with day-to-day project management to get that balance right.
Host: It sounds like a real juggling act. So how did the researchers get inside these complex relationships to figure out what works?
Expert: They conducted a really deep, multi-year study of three large software projects. They analyzed 33 different contracting periods, or 'episodes', looking at all the contractual documents and project plans. Crucially, they also conducted in-depth interviews with managers from both the client and the vendor side to understand their thinking and the results of their decisions.
Host: So they weren't just looking at the documents; they were looking at the entire process in action. What were the key findings?
Expert: They had a few big 'aha' moments. First, there is no single 'best' way to manage an outsourcing contract. Instead, they identified eight distinct recipes, or what they call 'governance configurations'. Each one is a specific mix of contract design and project controls that consistently leads to a predictable outcome.
Host: And these outcomes relate back to that tension you mentioned between alignment and flexibility?
Expert: Precisely. Some of these recipes were great at achieving alignment, keeping the project strictly on task. Others were designed to maximize flexibility, allowing for innovation. But the most interesting finding was how the different elements within a recipe work together.
Host: What do you mean by that?
Expert: Some elements can substitute for each other. For instance, if your contract isn't very detailed, you can substitute for that with very close, hands-on project monitoring. Other elements compensate for each other's weaknesses. A detailed contract might provide alignment, but you can compensate for its rigidity by including a 'task buffer' that gives the vendor freedom to solve unforeseen problems.
Host: That makes sense. It’s about the combination, not just the individual parts. Was there another key finding?
Expert: Yes, and it’s a crucial one. These configurations evolve over time. The study showed that as trust and project-specific knowledge build between the client and the vendor, their approach matures. They might start with simple setups that achieve only alignment *or* flexibility, but they learn to use more sophisticated recipes that achieve both at the same time.
Host: This is the part our listeners are waiting for. What does this all mean for a business leader managing an outsourcing partner?
Expert: The most important takeaway is to stop seeing contracts as static legal documents that you file away. You need to see contracting as an active, dynamic management tool. It’s a set of levers you can pull throughout the project.
Host: So managers need to be more strategic and deliberate.
Expert: Exactly. Be deliberate about the recipe you're using. Ask yourself: in this phase of the project, do I need to prioritize alignment, flexibility, or both? Then, choose the right combination of tools—like how specific the contract is, whether you grant the vendor autonomy on certain tasks, and how you formalize changes.
Host: And what about the role of trust that you mentioned?
Expert: It's fundamental. The study clearly shows that investing time and effort in building a trusting relationship and shared knowledge pays dividends. It literally expands your management toolkit, allowing you to use those more advanced, high-performing configurations that deliver better results in the long run.
Host: So, to summarize: managers should view software outsourcing contracts not as a single event, but as a continuous management process. Success comes from deliberately choosing the right recipe of contract and control elements for the job. And by investing in the relationship, you can evolve that recipe over time to achieve both tight alignment and crucial flexibility, driving superior project performance.
Host: Alex Ian Sutherland, thank you for bringing this research to life for us.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights, powered by Living Knowledge.
Journal of the Association for Information Systems (2025)
What Is Augmented? A Metanarrative Review of AI-Based Augmentation
Inès Baer, Lauren Waardenburg, Marleen Huysman
This paper conducts a comprehensive literature review across five research disciplines to clarify the concept of AI-based augmentation. Using a metanarrative review method, the study identifies and analyzes four distinct targets of what AI augments: the body, cognition, work, and performance. Based on this framework, the authors propose an agenda for future research in the field of Information Systems.
Problem
In both academic and public discussions, Artificial Intelligence is often described as a tool for 'augmentation' that helps humans rather than replacing them. However, this popular term lacks a clear, agreed-upon definition, and there is little discussion about what specific aspects of human activity are the targets of this augmentation. This research addresses the fundamental question: 'What is augmented by AI?'
Outcome
- The study identified four distinct metanarratives, or targets, of AI-based augmentation: the body (enhancing physical and sensory functions), cognition (improving decision-making and knowledge), work (creating new employment opportunities and improving work practices), and performance (increasing productivity and innovation).
- Each augmentation target is underpinned by a unique human-AI configuration, ranging from human-AI symbiosis for body augmentation to mutual learning loops for cognitive augmentation.
- The paper reveals tensions and counternarratives for each target, showing that augmentation is not purely positive; for example, it can lead to over-dependence on AI, deskilling, or a loss of human agency.
- The four augmentation targets are interconnected, creating potential conflicts (e.g., prioritizing performance over meaningful work) or dependencies (e.g., cognitive augmentation relies on augmenting bodily senses).
Host: Welcome to A.I.S. Insights, the podcast where we connect Living Knowledge to your business. I'm your host, Anna Ivy Summers.
Host: We hear it all the time: AI isn't here to replace us, but to *augment* us. It's a reassuring idea, but what does it actually mean?
Host: Today, we're diving into a fascinating new study from the Journal of the Association for Information Systems. It's titled, "What Is Augmented? A Metanarrative Review of AI-Based Augmentation."
Host: The study looks across multiple research fields to clarify this very concept. It identifies four distinct things that AI can augment: our bodies, our cognition, our work, and our performance.
Host: To help us unpack this is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So Alex, let's start with the big problem. Why did we need a study to define a word we all think we understand?
Expert: That's the core of the issue. In business, 'augmentation' has become a popular, optimistic buzzword. It's used to ease fears about automation and job loss.
Expert: But the study points out that the term is incredibly vague. When a company says it's using AI for augmentation, it's not clear what they're actually trying to improve.
Expert: The researchers ask a simple but powerful question that's often overlooked: if we're making something 'more,' what is that something? More skills? More productivity? This lack of clarity is a huge barrier to forming an effective AI strategy.
Host: So the first step is to get specific. How did the study go about creating a clearer picture?
Expert: They took a really interesting approach. Instead of just looking at one field, they analyzed research from five different disciplines, including computer science, management, and economics.
Expert: They were looking for the big, overarching storylines—or metanarratives—that different experts tell about AI augmentation. This allowed them to cut through the jargon and identify the fundamental targets of what's being augmented.
Host: And that led them to the key findings. What were these big storylines they uncovered?
Expert: They distilled it all down to four clear targets. The first is augmenting the **body**. This is about enhancing our physical and sensory functions—think of a surgeon using a robotic arm for greater precision or an engineer using AR glasses to see schematics overlaid on real-world equipment.
Host: Okay, so a very direct, physical enhancement. What's the second?
Expert: The second is augmenting **cognition**. This is about improving our thinking and decision-making. For example, AI can help financial analysts identify subtle market patterns or assist doctors in making a faster, more accurate diagnosis. It's about enhancing our mental capabilities.
Host: That makes sense. And the third?
Expert: Augmenting **work**. This focuses on changing the nature of jobs and tasks. A classic example is an AI chatbot handling routine customer queries. This doesn't replace the human agent; it frees them up to handle more complex, emotionally nuanced problems, making their work potentially more fulfilling.
Host: And the final target?
Expert: That would be augmenting **performance**. This is the one many businesses default to, and it's all about increasing productivity, efficiency, and innovation at a systemic level. Think of AI optimizing a global supply chain or accelerating the R&D process for a new product.
Host: That's a fantastic framework. But the study also found that augmentation isn't a purely positive story, is it?
Expert: Exactly. This is a critical insight. For each of those four targets, the study identified tensions or counternarratives.
Expert: For example, augmenting cognition can lead to over-dependence and deskilling if we stop thinking for ourselves. Augmenting work can backfire if AI dictates every action, turning an employee into someone who just follows a script, which reduces their agency and job satisfaction.
Host: This brings us to the most important question, Alex. Why does this matter for business leaders? How can they use this framework?
Expert: It matters immensely. First, it forces strategic clarity. A leader can now move beyond saying "we're using AI to augment our people." They should ask, "Which of the four targets are we aiming for?"
Expert: Is the goal to augment the physical abilities of our warehouse team? That's a **body** strategy. Is it to improve the decisions of our strategy team? That's a **cognition** strategy. Being specific is the first step.
Host: And what comes after getting specific?
Expert: Understanding the trade-offs. The study shows these targets can be in conflict. A strategy that relentlessly pursues **performance** by automating everything possible might directly undermine a goal to augment **work** by making jobs more meaningful. Leaders need to see this tension and make conscious choices about their priorities.
Host: So it's about choosing a target and understanding its implications.
Expert: Yes, and finally, it's about designing the right kind of human-AI partnership. Augmenting the body implies a tight, almost symbiotic relationship. Augmenting cognition requires creating mutual learning loops, where humans train the AI and the AI provides insights that train the humans. It's not one-size-fits-all.
Host: So to sum up, it seems the key message for business leaders is to move beyond the buzzword.
Host: This study gives us a powerful framework for doing just that. By identifying whether you are trying to augment the body, cognition, work, or performance, you can build a much smarter, more intentional AI strategy.
Host: You can anticipate the risks, navigate the trade-offs, and ultimately create a more effective collaboration between people and technology.
Host: Alex, thank you for making that so clear for us.
Expert: My pleasure, Anna.
Host: And thank you for listening to A.I.S. Insights — powered by Living Knowledge. We'll see you next time.
Journal of the Association for Information Systems (2025)
What Goals Drive Employees' Information Systems Security Behaviors? A Mixed Methods Study of Employees' Goals in the Workplace
Sebastian Schuetz, Heiko Gewald, Allen Johnston, Jason Bennett Thatcher
This study investigates the work-related goals that motivate employees' information systems security behaviors. It employs a mixed-methods approach, first using qualitative interviews to identify key employee goals and then using a large-scale quantitative survey to evaluate their importance in predicting security actions.
Problem
Prior research on information security behavior often relies on general theories from criminology or public health, which do not fully capture the specific goals employees have in a workplace context. This creates a gap in understanding the primary motivations that lead employees to follow or ignore security protocols during their daily work.
Outcome
- Employees' security behaviors are primarily driven by the goals of achieving good work performance and avoiding blame for security incidents.
- Career advancement acts as a higher-order goal, giving purpose to security behaviors by motivating the pursuit of subgoals like work performance and blame avoidance.
- The belief that security behaviors help meet a supervisor's performance expectations (work performance alignment) is the single most important predictor of those behaviors.
- Organizational citizenship (the desire to be a 'good employee') was not a significant predictor of security behavior when other goals were considered.
- A strong security culture encourages secure behaviors by strengthening the link between these behaviors and the goals of work performance and blame avoidance.
Host: Hello and welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today we’re diving into a question that keeps executives up at night: Why do employees click on that phishing link or ignore security warnings? We’re looking at a study titled, "What Goals Drive Employees' Information Systems Security Behaviors? A Mixed Methods Study of Employees' Goals in the Workplace."
Host: It investigates the work-related goals that truly motivate employees to act securely. And to help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, companies invest fortunes in firewalls and security software, but we constantly hear that the ‘human factor’ is the weakest link. What’s the big problem this study wanted to solve?
Expert: The core problem is that for decades, we’ve been trying to understand employee security behavior using the wrong lens. Much of the previous research was based on general theories from fields like public health or even criminology.
Host: Criminology? How does that apply to an accountant in an office?
Expert: Exactly. Those theories focus on goals like avoiding punishment or avoiding physical harm. But an employee’s daily life isn’t about that. They're trying to meet deadlines, impress their boss, and get their work done. This study argues that we’ve been missing the actual, on-the-ground goals that drive people in a workplace context.
Host: So how did the researchers get closer to those real-world goals? What was their approach?
Expert: They used a really smart two-part method. First, instead of starting with a theory, they started with the employees. They conducted in-depth interviews across various industries to simply ask people about their career goals and how security fits in.
Host: So they were listening first, not testing a hypothesis.
Expert: Precisely. Then, they took all the goals that emerged from those conversations—things like performance, career advancement, and avoiding blame—and built a large-scale survey. They gave this to over 1,200 employees to measure which of those goals were the most powerful predictors of secure behaviors.
Host: A great way to ground the research in reality. So, after speaking to all these people, what did they find? What really makes an employee follow the rules?
Expert: The results were incredibly clear, and the number one driver was not what you might expect. It’s the goal of achieving good work performance.
Host: Not fear of being fired or protecting the company, but simply doing a good job?
Expert: Yes. The belief that secure behaviors help an employee meet their supervisor's performance expectations was the single most important factor. It boils down to a simple calculation in the employee's mind: "Is doing this security task part of what it means to be good at my job?"
Host: That’s a powerful insight. What was the second most important driver?
Expert: The second was avoiding blame. Employees are motivated to follow security rules because they don’t want to be singled out as the person responsible for a security incident, knowing it could have a negative impact on their reputation and career.
Host: So what about appealing to an employee's sense of loyalty or being a 'good corporate citizen'?
Expert: That’s one of the most surprising findings. The desire to be a ‘good employee’ for the company's sake, what the study calls organizational citizenship, was not a significant factor when you accounted for the other goals. It seems that abstract loyalty doesn't drive day-to-day security actions nearly as much as personal, tangible goals do.
Host: This brings us to the most important section for our audience. Alex, what does this all mean for business leaders? How can they use these insights?
Expert: It means we need to fundamentally shift our security messaging. First, managers must explicitly link security to job performance. Make it part of the conversation during performance reviews. Frame it as a core competency, not an IT chore. Success in your role includes being secure with company data.
Host: So it moves from the IT department's problem to a personal performance metric.
Expert: Exactly. Second, leverage the power of blame avoidance, but focus it on career impact. The message isn't just "you'll get in trouble," but "a preventable security incident can be a major roadblock to the promotion you're working toward." It connects security directly to their career advancement goals.
Host: And the third takeaway?
Expert: It's all held together by building a strong security culture. The study found that a good culture is what strengthens the connection between security and the goals of performance and blame avoidance. When being secure is just 'how we do things here,' it becomes a natural part of performing well and protecting one's career.
Host: So, if I can summarize: to really improve security, businesses need to stop relying on generic warnings and start connecting secure behaviors directly to what employees value most: succeeding in their job, protecting their reputation, and advancing their career.
Expert: You've got it. It’s about making security personal to their success.
Host: Fantastic insights, Alex. Thank you for making this so clear and actionable for our listeners.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights — powered by Living Knowledge. Join us next time as we continue to explore the ideas shaping the future of business.
Security Behaviors, Goal Systems Theory (GST), Work Performance, Blame Avoidance, Organizational Citizenship, Career Advancement
Journal of the Association for Information Systems (2025)
Making Sense of Discursive Formations and Program Shifts in Large-Scale Digital Infrastructures
Egil Øvrelid, Bendik Bygstad, Ole Hanseth
This study examines how public and professional discussions, known as discourses, shape major changes in large-scale digital systems like national e-health infrastructures. Using an 18-year in-depth case study of Norway's e-health development, the research analyzes how high-level strategic trends interact with on-the-ground practical challenges to drive fundamental shifts in technology programs.
Problem
Implementing complex digital infrastructures like national e-health systems is notoriously difficult, and leaders often struggle to understand why some initiatives succeed while others fail. Previous research focused heavily on the role of powerful individuals or groups, paying less attention to the underlying, systemic influence of how different conversations about technology and strategy converge over time. This gap makes it difficult for policymakers to make sensible, long-term decisions and navigate the evolution of these critical systems.
Outcome
- Major shifts in large digital infrastructure programs occur when high-level strategic discussions (macrodiscourses) and practical, operational-level discussions (microdiscourses) align and converge.
- This convergence happens through three distinct processes: 'connection' (a shared recognition of a problem), 'matching' (evaluating potential solutions that fit both high-level goals and practical needs), and 'merging' (making a decision and reconciling the different perspectives).
- The result of this convergence is a new "discursive formation"—a powerful, shared understanding that aligns stakeholders, technology, and strategy, effectively launching a new program and direction.
- Policymakers and managers can use this framework to better analyze the alignment between broad technological trends and their organization's specific, internal needs, leading to more informed and realistic strategic planning.
Host: Welcome to A.I.S. Insights, the podcast where we connect big ideas with business reality, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today we're diving into a fascinating new study titled "Making Sense of Discursive Formations and Program Shifts in Large-Scale Digital Infrastructures." In short, it explores how the conversations we have—both in the boardroom and on the front lines—end up shaping massive technological changes, like a national e-health system.
Host: To help us break it down, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: It's great to be here, Anna.
Host: So, Alex, let's start with the big picture. We've all seen headlines about huge, expensive government or corporate IT projects that go off the rails. What's the core problem this study is trying to solve?
Expert: The core problem is exactly that. Leaders of these massive digital infrastructure projects, whether in healthcare, finance, or logistics, often struggle to understand why some initiatives succeed and others fail spectacularly. For a long time, the thinking was that it all came down to a few powerful decision-makers.
Host: But this study suggests it's more complicated than that.
Expert: Exactly. It argues that we've been paying too little attention to the power of conversations themselves—and how different streams of discussion come together over time to create real, systemic change. It’s not just about what one CEO decides; it’s about the alignment of many different voices.
Host: How did the researchers even begin to study something as broad as "conversations"? What was their approach?
Expert: They took a very deep, long-term view. The research is built on an incredible 18-year case study of Norway's national e-health infrastructure development. They analyzed everything from high-level policy documents and media reports to interviews with the clinicians and IT staff actually using the systems day-to-day.
Host: Eighteen years. That's some serious dedication. After all that time, what did they find is the secret ingredient for making these major program shifts happen successfully?
Expert: The key finding is a concept they call "discourse convergence." It sounds academic, but the idea is simple. A major shift only happens when the high-level, strategic conversations, which they call 'macrodiscourses', finally align with the practical, on-the-ground conversations, the 'microdiscourses'.
Host: Can you give us an example of those two types of discourse?
Expert: Absolutely. A 'macrodiscourse' is the big-picture buzz. Think of consultants and politicians talking about exciting new trends like 'Service-Oriented Architecture' or 'Digital Ecosystems'. A 'microdiscourse', on the other hand, is the reality on the ground. It's the nurse complaining that the systems are so fragmented that a patient's history has to be retold over and over again because the data doesn't connect.
Host: And a major program shift occurs when those two worlds meet?
Expert: Precisely. The study found this happens through a three-step process. First is 'connection', where everyone—from the C-suite to the front line—agrees that there's a significant problem. Second is 'matching', where potential solutions are evaluated to see if they fit both the high-level strategic goals and the practical, day-to-day needs.
Host: And the final step?
Expert: The final step is 'merging'. This is where a decision is made, and a new, shared understanding is formed that reconciles those different perspectives. That new shared understanding is powerful—it aligns the stakeholders, the technology, and the strategy, effectively launching a whole new direction for the program.
Host: This is the critical question, then. What does this mean for business leaders listening right now? How can they apply this framework to their own digital transformation projects?
Expert: This is where it gets really practical. The biggest takeaway is that leaders must listen to both conversations. It’s easy to get swept up in the latest tech trend—the macrodiscourse. But if that new strategy doesn't solve a real, tangible pain point for your employees or customers—the microdiscourse—it's destined to fail.
Host: So it's about bridging the gap between the executive suite and the people actually doing the work.
Expert: Yes, and leaders need to be proactive about it. Don't just wait for these conversations to align by chance. Create forums where your big-picture strategists and your on-the-ground operators can find that 'match' together. Use this as a diagnostic tool. Ask yourself: is the grand vision for our new platform completely disconnected from the daily struggles our teams are facing with the old one? If the answer is yes, you have a problem.
Host: A brilliant way to pressure-test a strategy. So, to sum up, these huge technology shifts aren't just top-down mandates. They succeed when high-level strategy converges with on-the-ground reality, through a process of connecting on a problem, matching a viable solution, and merging toward a new, shared goal.
Expert: That's the perfect summary, Anna.
Host: Alex Ian Sutherland, thank you so much for translating this complex research into such clear, actionable insights.
Expert: My pleasure.
Host: And thanks to all of you for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we decode another big idea for your business.
Discursive Formations, Discourse Convergence, Large-Scale Digital Infrastructures, E-Health Programs, Program Shifts, Sociotechnical Systems, IT Strategy
Journal of the Association for Information Systems (2025)
Toward Triadic Delegation: How Agentic IS Artifacts Affect the Patient-Doctor Relationship in Healthcare
Pascal Fechner, Luis Lämmermann, Jannik Lockl, Maximilian Röglinger, Nils Urbach
This study investigates how autonomous information systems (agentic IS artifacts) are transforming the traditional two-way relationship between patients and doctors into a three-way, or triadic, relationship. Using an in-depth case study of an AI-powered health companion for managing neurogenic lower urinary tract dysfunction, the paper analyzes the new dynamics, roles, and interactions that emerge when an intelligent technology becomes an active participant in healthcare delivery.
Problem
With the rise of artificial intelligence in medicine, autonomous systems are no longer just passive tools but active agents in patient care. This shift challenges the conventional patient-doctor dynamic, yet existing theories are ill-equipped to explain the complexities of this new three-part relationship. This research addresses the gap in understanding how these AI agents redefine roles, interactions, and potential conflicts in patient-centric healthcare.
Outcome
- The introduction of an AI agent transforms the dyadic patient-doctor relationship into a triadic one, often with the AI acting as a central intermediary. - The AI's capabilities create 'attribute interference,' where responsibilities and knowledge overlap between the patient, doctor, and AI, introducing new complexities. - New 'triadic delegation choices' emerge, allowing tasks to be delegated to the doctor, the AI, or both, based on factors like task complexity and emotional context. - The study identifies novel conflicts arising from this triad, including human concerns over losing control (autonomy conflicts), new information imbalances, and the blurring of traditional medical roles.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating new study titled, "Toward Triadic Delegation: How Agentic IS Artifacts Affect the Patient-Doctor Relationship in Healthcare."
Host: With me is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, this study sounds quite specific, but it has broad implications. In a nutshell, what is it about?
Expert: It’s about how smart, autonomous AI systems are fundamentally changing the traditional two-way relationship between a professional and their client—in this case, a doctor and a patient—by turning it into a three-way relationship.
Host: A three-way relationship? You mean Patient, Doctor, and... AI?
Expert: Exactly. The AI is no longer just a passive tool; it’s an active participant, an agent, in the process. This study looks at the new dynamics, roles, and interactions that emerge from this triad.
Host: That brings us to the big problem this research is tackling. Why is this shift from a two-way to a three-way relationship such a big deal?
Expert: Well, the classic patient-doctor dynamic is built on direct communication and trust. But as AI becomes more capable, it starts taking on tasks, making suggestions, and even acting on its own.
Host: It's doing more than just showing data on a screen.
Expert: Precisely. It's becoming an agent. The problem is, our existing models for how we work and interact don't account for this third, non-human agent in the room. This creates a gap in understanding how roles are redefined and where new conflicts might arise.
Host: How did the researchers actually study this? What was their approach?
Expert: They conducted a very detailed, in-depth case study. They focused on a specific piece of technology: an AI-powered health companion designed to help patients manage a complex bladder condition.
Host: So, a real-world application.
Expert: Yes. It involved a wearable sensor and a smartphone app that monitors the patient's condition and provides real-time guidance. The researchers closely observed the interactions between patients, their doctors, and this new AI agent to see how the relationship changed over time.
Host: Let’s get into those changes. What were the key findings from the study?
Expert: The first major finding is that the AI almost always becomes a central intermediary. Communication that was once directly between the patient and doctor now often flows through the AI.
Host: So the AI is like a new go-between?
Expert: In many ways, yes. The second finding, which is really interesting, is something they call 'attribute interference'.
Host: That sounds a bit technical. What does it mean for us?
Expert: It just means that the responsibilities and even the knowledge start to overlap. For instance, both the doctor and the AI can analyze patient data to spot a potential infection. This creates confusion: Who is responsible? Who should the patient listen to?
Host: I can see how that would get complicated. What else did they find?
Expert: They found that new 'triadic delegation choices' emerge. Patients and doctors now have to decide which tasks to give to the human and which to the AI.
Host: Can you give an example?
Expert: Absolutely. A routine task, like logging data 24/7, is perfect for the AI. But delivering a difficult diagnosis—a task with a high emotional context—is still delegated to the doctor. The choice depends on the task's complexity and emotional weight.
Host: And I imagine this new setup isn't without its challenges. Did the study identify any new conflicts?
Expert: It did. The most common were 'autonomy conflicts'—basically, a fear from both patients and doctors of losing control to the AI. There were also new information imbalances and a blurring of the lines around traditional medical roles.
Host: This is the crucial part for our listeners, Alex. Why does this matter for business leaders, even those outside of healthcare?
Expert: Because this isn't just a healthcare phenomenon. Anywhere you introduce an advanced AI to mediate between your employees and your customers, or even between different teams, you are creating this same triadic relationship.
Host: So a customer service chatbot that works with both a customer and a human agent would be an example.
Expert: A perfect example. The key business takeaway is that you can't design these systems as simple tools. You have to design them as teammates. This means clearly defining the AI's role, its responsibilities, and its boundaries.
Host: It's about proactive management of that new relationship.
Expert: Exactly. Businesses need to anticipate 'attribute interference'. If an AI sales assistant can draft proposals, you need to clarify how that affects the role of your human sales team. Who has the final say? How do they collaborate?
Host: So clarity is key.
Expert: Clarity and trust. The study showed that conflicts arise from ambiguity. For businesses, this means being transparent about what the AI does and how it makes decisions. You have to build trust not just between the human and the AI, but between all three agents in the new triad.
Host: Fascinating stuff. So, to summarize, as AI becomes more autonomous, it’s not just a tool, but a third agent in professional relationships.
Expert: That's the big idea. It turns a simple line into a triangle, creating new pathways for communication and delegation, but also new potential points of conflict.
Host: And for businesses, the challenge is to manage that triangle by designing for collaboration, clarifying roles, and intentionally building trust between all parties—human and machine.
Host: Alex, thank you so much for breaking this down for us. This gives us a lot to think about.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning into A.I.S. Insights. Join us next time as we continue to explore the future of business and technology.
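To make the 'triadic delegation choices' discussed in this episode concrete, here is a minimal illustrative sketch in Python. It is our toy example, not code or terminology from the study; the thresholds, names, and scoring are all assumptions.

```python
# Illustrative sketch only (not from the study): a toy router that applies
# the 'triadic delegation' idea, sending routine low-emotion tasks to the AI
# agent, emotionally sensitive tasks to the doctor, and ambiguous cases to
# both for collaboration. Thresholds are arbitrary assumptions.
from dataclasses import dataclass

@dataclass
class Task:
    name: str
    complexity: float        # 0.0 (routine) .. 1.0 (highly complex)
    emotional_weight: float  # 0.0 (neutral) .. 1.0 (highly sensitive)

def delegate(task: Task) -> str:
    """Return which agent(s) in the patient-doctor-AI triad should handle the task."""
    if task.emotional_weight >= 0.7:
        return "doctor"       # e.g., delivering a difficult diagnosis
    if task.complexity <= 0.3 and task.emotional_weight <= 0.3:
        return "ai"           # e.g., logging sensor data 24/7
    return "doctor+ai"        # ambiguous cases: human oversight of AI output

print(delegate(Task("log bladder pressure", 0.1, 0.0)))    # -> ai
print(delegate(Task("explain poor prognosis", 0.6, 0.9)))  # -> doctor
```

The study's point is that these choices are negotiated by patients and doctors rather than hard-coded; the sketch only shows how the two factors the paper highlights, complexity and emotional context, interact in a delegation decision.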
Agentic IS Artifacts, Delegation, Patient-Doctor Relationship, Personalized Healthcare, Triadic Delegation, Healthcare AI
Journal of the Association for Information Systems (2025)
Digital Infrastructure Development Through Digital Infrastructuring Work: An Institutional Work Perspective
Adrian Yeow, Wee-Kiat Lim, Samer Faraj
This paper investigates the complexities of developing large-scale digital infrastructure through a case study of an electronic medical record (EMR) system implementation in a U.S. hospital. It introduces and analyzes the concept of 'digital infrastructuring work'—the combination of technical, social, and symbolic actions that organizational actors perform. The study provides a framework for understanding the tensions and actions that shape the outcomes of such projects.
Problem
Implementing new digital infrastructures in large organizations is challenging because it often disrupts established routines and power structures, leading to resistance and project stalls. Existing research frequently overlooks how the combination of technical tasks, social negotiations, and symbolic arguments by different groups influences the success or failure of these projects. This study addresses this gap by providing a more holistic view of the work involved in digital infrastructure development from an institutional perspective.
Outcome
- The study introduces 'digital infrastructuring work' to explain how actors shape digital infrastructure development, categorizing it into three forms: digital object work (technical tasks), DI relational work (social interactions), and DI symbolic work (discursive actions). - It finds that project stakeholders strategically combine these forms of work to either support change or maintain existing systems, highlighting the contested nature of infrastructure projects. - The success or failure of a digital infrastructure project is shown to depend on how effectively different groups navigate the tensions between change and stability by skillfully blending technical, relational, and symbolic efforts. - The paper demonstrates that technical work itself carries institutional significance and is not merely a neutral backdrop for social interactions, but a key site of contestation.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into the often-messy reality of large-scale technology projects. With me is our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: We're discussing a study titled "Digital Infrastructure Development Through Digital Infrastructuring Work: An Institutional Work Perspective". In short, it looks at the complexities of implementing something like a new enterprise-wide software system, using a case study of an electronic medical record system in a hospital.
Expert: Exactly. It provides a fascinating framework for understanding all the moving parts—technical, social, and even political—that can make or break these massive projects.
Host: Let’s start with the big problem. Businesses spend millions on new digital infrastructure, but so many of these projects stall or fail. Why is that?
Expert: It’s because these new systems don’t just replace old software; they disrupt routines, workflows, and even power structures that have been in place for years. People and departments often resist, but that resistance isn’t always obvious.
Host: The study looked at a real-world example of this, right?
Expert: It did. The researchers followed a large U.S. hospital trying to implement a new, centralized electronic medical record system. The goal was to unify everything.
Expert: But they immediately ran into a wall. The hospital was really two powerful groups: the central hospital administration and the semi-independent School of Medicine, which had its own way of doing things, its own processes, and its own IT systems.
Host: So it was a turf war disguised as a tech project.
Expert: Precisely. The new system threatened the autonomy and revenue of the medical school's clinics, and they pushed back hard. The project ground to a halt not because the technology was bad, but because of these deep-seated institutional tensions.
Host: So how did the researchers get such a detailed view of this conflict? What was their approach?
Expert: They essentially embedded themselves in the project for several years. They conducted over 50 interviews with everyone from senior management to the IT staff on the ground. They sat in on project meetings, observed the teams at work, and analyzed project documents. It was a true behind-the-scenes look at what was happening.
Host: And what were the key findings from that deep dive?
Expert: The central finding is a concept the study calls ‘digital infrastructuring work’. It’s a way of saying that to get a project like this done, you need to perform three different kinds of work at the same time.
Host: Okay, break those down for us. What’s the first one?
Expert: First is ‘digital object work’. This is what we traditionally think of as IT work: reprogramming databases, coding new interfaces, and connecting different systems. It's the hands-on technical stuff.
Host: Makes sense. What's the second?
Expert: The second is ‘relational work’. This is all about the social side: negotiating with other teams, building coalitions, escalating issues to senior leaders, or even strategically avoiding meetings and delaying tasks to slow things down.
Host: And the third?
Expert: The third is ‘symbolic work’. This is the battle of narratives. It’s the arguments and justifications people use. For example, one team might argue for change by highlighting future efficiencies, while another team resists by claiming the new system is incompatible with their "unique and essential" way of working.
Host: So the study found that these projects are a constant struggle between groups using all three of these tactics?
Expert: Exactly. In the hospital case, the team trying to implement the new system was doing technical work, but the opposing teams were using relational work, like delaying participation, and symbolic work, arguing their old systems were too complex to change.
Expert: A fascinating example was how one team timed a major upgrade to their own legacy system to coincide with the rollout of the new one. Technically, it was just an upgrade. But strategically, it was a brilliant move that made integration almost impossible and sabotaged the project's timeline. It shows that even technical work can be a political weapon.
Host: This is the crucial part for our audience, Alex. What are the key business takeaways? Why does this matter for a manager or a CEO?
Expert: The biggest takeaway is that you cannot treat a digital transformation as a purely technical project. It is fundamentally a social and political one. If your plan only has technical milestones, it’s incomplete.
Host: So leaders need to think beyond the technology itself?
Expert: Absolutely. They need to anticipate strategic resistance. Resistance won't always be a direct 'no'. It might look like a technical hurdle, a sudden resource constraint, or an argument about security protocols. This study gives leaders a vocabulary to recognize these moves for what they are—a blend of relational and symbolic work.
Host: So what’s the practical advice?
Expert: You need a political plan to go with your project plan. Before you start, map out the stakeholders. Ask yourself: Who benefits from this change? And more importantly, who perceives a loss of power, autonomy, or budget?
Expert: Then, you have to actively manage those three streams of work. You need your tech teams doing the digital object work, yes. But you also need leaders and managers building coalitions, negotiating, and constantly reinforcing the narrative—the symbolic work—of why this change is essential for the entire organization. Success depends on skillfully blending all three.
Host: So to wrap up, a major technology project is never just about the technology. It's a complex interplay of technical tasks, social negotiations, and competing arguments.
Host: And to succeed, leaders must be orchestrating all three fronts at once, anticipating resistance, and building the momentum needed to overcome it.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And thank you for listening to A.I.S. Insights, powered by Living Knowledge. Join us next time for more actionable intelligence from the world of academic research.
Digital Infrastructure Development, Institutional Work, IT Infrastructure Management, Healthcare Information Systems, Digital Objects, Case Study
Communications of the Association for Information Systems (2025)
Understanding the Ethics of Generative AI: Established and New Ethical Principles
Joakim Laine, Matti Minkkinen, Matti Mäntymäki
This study conducts a comprehensive review of academic literature to synthesize the ethical principles of generative artificial intelligence (GenAI) and large language models (LLMs). It explores how established AI ethics are presented in the context of GenAI and identifies what new ethical principles have surfaced due to the unique capabilities of this technology.
Problem
The rapid development and widespread adoption of powerful GenAI tools like ChatGPT have introduced new ethical challenges that are not fully covered by existing AI ethics frameworks. This creates a critical gap, as the specific ethical principles required for the responsible development and deployment of GenAI systems remain relatively unclear.
Outcome
- Established AI ethics principles (e.g., fairness, privacy, responsibility) are still relevant, but their importance and interpretation are shifting in the context of GenAI. - Six new ethical principles specific to GenAI are identified: respect for intellectual property, truthfulness, robustness, recognition of malicious uses, sociocultural responsibility, and human-centric design. - Principles such as non-maleficence, privacy, and environmental sustainability have gained heightened importance due to the general-purpose, large-scale nature of GenAI systems. - The paper proposes 'meta-principles' for managing ethical complexities, including ranking principles, mapping contradictions between them, and implementing continuous monitoring.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. Today, we're diving into the complex ethical world of Generative AI.
Host: We're looking at a fascinating new study titled "Understanding the Ethics of Generative AI: Established and New Ethical Principles."
Host: In short, this study explores how our established ideas about AI ethics apply to tools like ChatGPT, and what new ethical rules we need to consider because of what this powerful technology can do.
Host: To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, Generative AI has exploded into our professional and personal lives. It feels like everyone is using it. What's the big problem that this rapid adoption creates, according to the study?
Expert: The big problem is that we’re moving faster than our rulebook. The study highlights that the rapid development of GenAI has created new ethical challenges that our existing AI ethics frameworks just weren't built for.
Host: What’s so different about Generative AI?
Expert: Well, older AI ethics guidelines were often designed for systems that make specific decisions, like approving a loan or analyzing a medical scan. GenAI is fundamentally different. It's creative, it generates completely new content, and its responses are open-ended.
Expert: This creates unique risks. The study notes that these models can reproduce societal biases, invent false information, or even be used to generate harmful and malicious content at an incredible scale. We're facing a critical gap between the technology's capabilities and our ethical understanding of it.
Host: So we have a gap in our ethical rulebook. How did the researchers in this study go about trying to fill it?
Expert: They conducted what's known as a scoping review. Essentially, they systematically analyzed a wide range of recent academic work on GenAI ethics. They identified the core principles being discussed and organized them into a clear framework. They compared this new landscape to a well-established set of AI ethics principles to see what's changed and what's entirely new.
Host: That sounds very thorough. So, what were the key findings? Are the old ethical rules of AI, like fairness and transparency, now obsolete?
Expert: Not at all. In fact, they're more important than ever. The study found that established principles like fairness, privacy, and responsibility are still completely relevant. However, their meaning and importance have shifted.
Host: How so?
Expert: Take privacy. GenAI models are trained on unimaginable amounts of data scraped from the internet. The study points out the significant risk that they could memorize and reproduce someone's private, personal information. So the stakes for privacy are much higher.
Expert: The same goes for sustainability. The massive energy consumption needed to train and run these large models has made environmental impact a much more prominent ethical concern than it was with older, smaller-scale AI.
Host: So the old rules apply, but with a new intensity. What about the completely new principles that emerged from the study?
Expert: This is where it gets really interesting. The researchers identified six new ethical principles that are specific to Generative AI. These are respect for intellectual property, truthfulness, robustness, recognition of malicious uses, sociocultural responsibility, and human-centric design.
Host: Let’s pick a couple of those. What do they mean by 'truthfulness' and 'respect for intellectual property'?
Expert: 'Truthfulness' tackles the problem of AI "hallucinations"—when a model generates plausible but completely false information. Since these systems are designed to create, not to verify, ensuring their outputs are factual is a brand-new ethical challenge.
Expert: 'Respect for intellectual property' addresses the massive debate around copyright. These models are trained on content created by humans—artists, writers, programmers. This raises huge questions about ownership, attribution, and fair compensation that we're only just beginning to grapple with.
Host: This is crucial information, Alex. Let's bring it home for our audience. What are the key business takeaways here? Why does this matter for a CEO or a team leader?
Expert: It matters immensely. The biggest takeaway is that having a generic "AI Ethics Policy" on a shelf is no longer enough. Businesses using GenAI must develop specific, actionable governance frameworks.
Host: Can you give us a practical example of a risk?
Expert: Certainly. If your customer service department uses a GenAI chatbot that hallucinates and gives a customer incorrect information about your product's safety or warranty, your company is responsible for that. That’s a truthfulness and accountability failure with real financial and legal consequences.
Host: And the study mentioned something called 'meta-principles' to help manage this complexity. What are those?
Expert: Meta-principles are guiding strategies for navigating the inevitable trade-offs. For example, being fully transparent about how your AI works might conflict with protecting proprietary data or user privacy.
Expert: The study suggests businesses should rank principles to know what’s non-negotiable, proactively map these contradictions, and, most importantly, continuously monitor their AI systems. The technology evolves so fast that your ethics framework has to be a living document, not a one-time project.
Host: Fantastic insights. So, to summarize: established AI ethics like fairness and privacy are still vital, but Generative AI has raised the stakes and introduced six new principles that businesses cannot afford to ignore.
Host: Leaders need to be proactive in updating their governance to address issues like truthfulness and intellectual property, and adopt a dynamic approach—ranking priorities, managing trade-offs, and continuously monitoring their impact.
Host: Alex Ian Sutherland, thank you for making this complex study so clear and actionable for us.
Expert: It was my pleasure, Anna.
Host: And thank you to our listeners for tuning into A.I.S. Insights. Join us next time for more on the intersection of business and technology.
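For teams that want to operationalize the meta-principles discussed in this episode (ranking, contradiction mapping, continuous monitoring), here is a minimal sketch. The ordering, names, and checks are our illustrative assumptions, not prescriptions from the paper.

```python
# Illustrative sketch only (not from the study): one way to encode the three
# meta-principles as a ranked list, an explicit contradiction map, and a
# monitoring hook that every GenAI output passes through.
PRINCIPLE_RANKING = [          # 1. Rank: the non-negotiable comes first.
    "non-maleficence",
    "truthfulness",
    "privacy",
    "respect for intellectual property",
    "transparency",
]

CONTRADICTIONS = {             # 2. Map known trade-offs between principles.
    ("transparency", "privacy"):
        "full disclosure of training data may expose personal information",
    ("truthfulness", "transparency"):
        "confident explanations can overstate how reliable an output is",
}

def resolve(p1: str, p2: str) -> str:
    """When two ranked principles conflict, defer to the higher-ranked one."""
    return min(p1, p2, key=PRINCIPLE_RANKING.index)

def looks_hallucinated(text: str) -> bool:
    return False  # placeholder: a real system would call a fact-checking service

def leaks_personal_data(text: str) -> bool:
    return False  # placeholder: a real system would run PII detection

def monitor(output: str, log=print) -> None:
    """3. Continuous monitoring: run every GenAI output through active checks."""
    for check in (looks_hallucinated, leaks_personal_data):
        if check(output):
            log(f"flagged by {check.__name__}: {output[:80]}")

print(resolve("transparency", "privacy"))  # -> privacy, under this toy ranking
```

The design point, in the study's terms, is that the framework stays a living document: the ranking, the contradiction map, and the set of checks are all expected to change as the technology does.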
Generative AI, AI Ethics, Large Language Models, AI Governance, Ethical Principles, AI Auditing
Communications of the Association for Information Systems (2025)
Conceptualizing IT Artefacts for Policymaking – How IT Artefacts Evolve as Policy Objects
Karin Väyrynen, Sari Laari-Salmela, Netta Iivari, Arto Lanamäki, Marianne Kinnula
This study explores how an information technology (IT) artefact evolves into a 'policy object' during the policymaking process, using a 4.5-year longitudinal case study of the Finnish Taximeter Law. The research proposes a conceptual framework that identifies three forms of the artefact as it moves through the policy cycle: a mental construct, a policy text, and a material IT artefact. This framework helps to understand the dynamics and challenges of regulating technology.
Problem
While policymaking related to information technology is increasingly significant, the challenges stemming from the complex, multifaceted nature of IT are poorly understood. There is a specific gap in understanding how real-world IT artefacts are translated into abstract policy texts and how those texts are subsequently reinterpreted back into actionable technologies. This 'translation' process often leads to ambiguity and unintended consequences during implementation.
Outcome
- Proposes a novel conceptual framework for understanding the evolution of an IT artefact as a policy object during a public policy cycle. - Identifies three distinct forms the IT artefact takes: 1) a mental construct in the minds of policymakers and stakeholders, 2) a policy text such as a law, and 3) a material IT artefact as a real-world technology that aligns with the policy. - Highlights the significant challenges in translating complex real-world technologies into abstract legal text and back again, which can create ambiguity and implementation difficulties. - Distinguishes between IT artefacts at the policy level and IT artefacts as real-world technologies, showing how they evolve on separate but interconnected tracks.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. In a world of fast-paced tech innovation, how do laws and policies keep up? Today, we're diving into a fascinating study that unpacks this very question. It's titled "Conceptualizing IT Artefacts for Policymaking – How IT Artefacts Evolve as Policy Objects".
Host: With me is our analyst, Alex Ian Sutherland. Alex, this study looks at how a piece of technology becomes something that policymakers can actually regulate. Why is that important?
Expert: It's crucial, Anna. Technology is complex and multifaceted, but laws are abstract text. The study explores how an IT product evolves as it moves through the policy cycle, using a real-world example of the Finnish Taximeter Law. It shows how challenging, and important, it is to get that translation right.
Host: Let's talk about that challenge. What is the big problem this study addresses?
Expert: The core problem is that policymakers often struggle to understand the technology they're trying to regulate. There's a huge gap in understanding how a real-world IT product, like a ride-sharing app, gets translated into abstract policy text, and then how that text is interpreted back into a real, functioning technology.
Host: So it's a translation issue, back and forth?
Expert: Exactly. And that translation process is full of pitfalls. The study followed the Finnish government's attempt to update their taximeter law. The old law only allowed certified, physical taximeters. But with the rise of apps like Uber, they needed a new law to allow "other devices or systems". The ambiguity in how they wrote that new law created a lot of confusion and unintended consequences.
Host: How did the researchers go about studying this problem?
Expert: They took a very in-depth approach. It was a 4.5-year longitudinal case study. They analyzed over a hundred documents—draft laws, stakeholder statements, meeting notes—and conducted dozens of interviews with regulators, tech providers, and taxi federations. They watched the entire policy cycle unfold in real time.
Host: And after all that research, what were the key findings? What did they learn about how technology evolves into a "policy object"?
Expert: They developed a fantastic framework that identifies three distinct forms the technology takes. First, it exists as a 'mental construct' in the minds of policymakers. It's their idea of what the technology is—for instance, "an app that can calculate a fare".
Host: Okay, so it starts as an idea. What's next?
Expert: That idea is translated into a 'policy text' – the actual law or regulation. This is where it gets tricky. The Finnish law described the new technology based on certain functions, like measuring time and distance to a "corresponding level" of accuracy as a physical taximeter.
Host: That sounds a little vague.
Expert: It was. And that leads to the third form: the 'material IT artefact'. This is the real-world technology that companies build to comply with the law. Because the policy text was ambiguous, a whole range of technologies appeared. Some were sophisticated ride-hailing platforms, but others were just uncertified apps or devices bought online that technically met the vague definition. The study shows these three forms evolve on separate but connected tracks.
Host: This is the critical part for our listeners, Alex. Why does this matter for business leaders and tech innovators today?
Expert: It matters immensely, especially with regulations like the new European AI Act on the horizon. That Act defines what an "AI system" is. That definition—that 'policy text'—will determine whether your company's product is considered high-risk and subject to intense scrutiny and compliance costs.
Host: So, if your product fits the law's definition, you're in a completely different regulatory bracket.
Expert: Precisely. The study teaches us that businesses cannot afford to ignore the policymaking process. You need to engage when the 'mental construct' is being formed, to help policymakers understand the technology's reality. You need to pay close attention to the wording of the 'policy text' to anticipate how it will be interpreted.
Host: And the takeaway for product development?
Expert: Your product—your 'material IT artefact'—exists in the real world, but its legitimacy is determined by the policy world. Businesses must understand that these are two different realms that are often disconnected. The successful companies will be the ones that can bridge that gap, ensuring their innovations align with policy, or better yet, help shape sensible policy from the start.
Host: So, to recap: technology in the eyes of the law isn't just one thing. It's an idea in a regulator's mind, it's the text of a law, and it's the actual product in the market. Understanding how it transforms between these states is vital for navigating the modern regulatory landscape.
Host: Alex, thank you for breaking that down for us. It’s a powerful lens for viewing the intersection of tech and policy.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights. Join us next time as we translate more knowledge into action.
IT Artefact, IT Regulation, Law, Policy Object, Policy Cycle, Public Policymaking, European AI Act
Communications of the Association for Information Systems (2025)
The Digital Language of Emotion: Cautions and Solutions for Strategic Use of Emoji in Responding Information System Incidents
Soojin Roh, Shubin Yu
This paper investigates if, when, and how organizations can strategically use emojis in online communications when responding to information system (IS) incidents. Through three experimental studies conducted with Chinese and U.S. participants, the research examines how cultural context, the source of the message (CEO vs. company account), and incident type influence public perception.
Problem
As companies increasingly use emojis in professional communications, there is a risk of missteps, especially in crisis situations. A lack of understanding of how emojis shape public perception across different cultures can lead to reputational harm, and existing research lacks empirical evidence on their strategic and cross-cultural application in responding to IS incidents.
Outcome
- For Chinese audiences, using emojis in IS incident responses is generally positive, as it reduces psychological distance, alleviates anger, and increases perceptions of warmth and competence. - The positive effect of emojis in China is stronger when used by an official company account rather than a CEO, and when the company is responsible for the incident. - In contrast, U.S. audiences tend to evaluate the use of emojis negatively in incident responses. - The negative perception among U.S. audiences is particularly strong when a CEO uses an emoji to respond to an internally-caused incident, leading to increased anger and perceptions of incompetence.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. Today, we're discussing a communication tool we all use daily: the emoji. But what happens when it enters the high-stakes world of corporate crisis management?
Host: We're diving into a fascinating new study titled "The Digital Language of Emotion: Cautions and Solutions for Strategic Use of Emoji in Responding Information System Incidents".
Host: It investigates if, when, and how organizations can strategically use emojis in online communications when responding to information system incidents, like a data breach or a server crash. I'm your host, Anna Ivy Summers, and joining me is our expert analyst, Alex Ian Sutherland.
Expert: Great to be here, Anna.
Host: Alex, companies are trying so hard to be relatable on social media. What's the big problem with using a simple emoji when things go wrong?
Expert: The problem is that it's a huge gamble without a clear strategy. As companies increasingly use emojis, there's a serious risk of missteps, especially in a crisis.
Expert: A lack of understanding of how emojis shape public perception, particularly across different cultures, can lead to significant reputational harm. An emoji meant to convey empathy could be seen as unprofessional or insincere, and there's been very little research to guide companies on this.
Host: So it's a digital communication minefield. How did the researchers approach this problem?
Expert: They conducted a series of three carefully designed experiments with participants from two very different cultures: China and the United States.
Expert: They created realistic crisis scenarios—like a ride-hailing app crashing or a company mishandling user data. Participants were then shown mock social media responses to these incidents.
Expert: The key variables were whether the message included an emoji, if it came from the official company account or the CEO, and whether the company was at fault. They then measured how people felt about the company's response.
Host: A very thorough approach. Let's get to the results. What were the key findings?
Expert: The findings were incredibly clear, and they showed a massive cultural divide. For Chinese audiences, using emojis in a crisis response was almost always viewed positively.
Expert: It was found to reduce the psychological distance between the public and the company. This helped to alleviate anger and actually increased perceptions of the company's warmth *and* its competence.
Host: That’s surprising. So in China, it seems to be a smart move. I'm guessing the results were different in the U.S.?
Expert: Completely different. U.S. audiences consistently evaluated the use of emojis in crisis responses negatively. It didn't build a bridge; it often damaged the company's credibility.
Host: Was there a specific scenario where it was particularly damaging?
Expert: Yes, the worst combination was a CEO using an emoji to respond to an incident that was the company's own fault. This led to a significant increase in public anger and a perception that the CEO, and by extension the company, was incompetent.
Host: That’s a powerful finding. This brings us to the most important question for our listeners: why does this matter for business?
Expert: The key takeaway is that your emoji strategy must be culturally intelligent. There is no global, one-size-fits-all rule.
Expert: For businesses communicating with a Chinese audience, a well-chosen emoji can be a powerful tool. It's seen as an important non-verbal cue that shows sincerity and a commitment to maintaining the relationship, even boosting perceptions of competence when you're admitting fault.
Host: So for Western audiences, the advice is to steer clear?
Expert: For the most part, yes. In a low-context culture like the U.S., the public expects directness and professionalism in a crisis. An emoji can trivialize a serious event.
Expert: If your company is at fault, and especially if the message is from a leader like the CEO, avoid emojis. The risk of being perceived as incompetent and making customers even angrier is just too high. The focus should be on action and clear communication, not on emotional icons.
Host: So, to summarize: when managing a crisis, know your audience. For Chinese markets, an emoji can be an asset that humanizes your brand. For U.S. markets, it can be a liability that makes you look foolish. Context is truly king.
Host: Alex Ian Sutherland, thank you for sharing these crucial insights with us today.
Expert: My pleasure, Anna.
Host: And thank you for listening to A.I.S. Insights. Join us next time for more on the intersection of business and technology.
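As a compact summary of the pattern reported in this episode, consider the toy decision rule below. It is our illustration only; the function, its parameters, and the default for untested markets are assumptions, not a model from the authors.

```python
# Illustrative sketch only: encodes the reported findings as a rule of thumb.
def use_emoji(market: str, source: str, company_at_fault: bool) -> bool:
    """Should an IS-incident response include an emoji?

    Reported pattern: positive for Chinese audiences, strongest when the
    official company account responds and the firm is at fault; negative
    for U.S. audiences, worst when a CEO responds to an internal incident.
    """
    if market == "CN":
        # Emojis reduce psychological distance, signaling warmth and competence.
        return True
    if market == "US":
        # Emojis risk perceptions of incompetence, especially from a CEO at fault.
        return False
    # The experiments covered only Chinese and U.S. participants, so for
    # other markets we default to caution (our assumption, not a finding).
    return False

assert use_emoji("CN", "company_account", company_at_fault=True)
assert not use_emoji("US", "ceo", company_at_fault=True)
```

The point is the contingency rather than the code: the same message element flips from asset to liability depending on market, message source, and who is at fault.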
Emoji, Information System Incident, Social Media, Psychological Distance, Warmth, Competence
Communications of the Association for Information Systems (2024)
Frugal Fintech Ecosystem Development: A Resource Orchestration Perspective
Prakash Dhavamani, Barney Tan, Daniel Gozman, Leben Johnson
This study investigates how a financial technology (Fintech) ecosystem was successfully established in a resource-constrained environment, using the Vizag Fintech Valley in India as a case study. The research examines the specific processes of gathering resources, building capabilities, and creating market value under significant budget limitations. It proposes a practical framework to guide the development of similar 'frugal' innovation hubs in other developing regions.
Problem
There is limited research on how to launch and develop a Fintech ecosystem, especially in resource-scarce developing countries where the potential benefits like financial inclusion are greatest. Most existing studies focus on developed nations, and their findings are not easily transferable to environments with tight budgets, a lack of specialized talent, and less mature infrastructure. This knowledge gap makes it difficult for policymakers and entrepreneurs to create successful Fintech hubs in these regions.
Outcome
- The research introduces a practical framework for building Fintech ecosystems in resource-scarce settings, called the Frugal Fintech Ecosystem Development (FFED) framework. - The framework identifies three core stages: Structuring (gathering and prioritizing available resources), Bundling (combining resources to build capabilities), and Leveraging (using those capabilities to seize market opportunities). - It highlights five key sub-processes for success in a frugal context: bricolaging (creatively using resources at hand), prioritizing, emulating (learning from established ecosystems), extrapolating, and sandboxing (safe, small-scale experimentation). - The study shows that by orchestrating resources effectively, even frugal ecosystems can achieve outcomes comparable to those in well-funded regions, a concept termed 'equifinality'. - The findings offer an evidence-based guide for policymakers to design regulations and support models that foster sustainable Fintech growth in developing economies.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. In today's interconnected world, innovation hubs are seen as engines of economic growth. But can you build one without massive resources? That's the question at the heart of a fascinating study we're discussing today titled, "Frugal Fintech Ecosystem Development: A Resource Orchestration Perspective".
Host: It investigates how a financial technology, or Fintech, ecosystem was successfully built in a resource-constrained environment in India, proposing a framework that could be a game-changer for developing regions. Here to break it down for us is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna.
Host: Alex, let's start with the big picture. What's the real-world problem this study is trying to solve?
Expert: The core problem is a major knowledge gap. Everyone talks about the potential of Fintech to drive financial inclusion and economic growth, especially in developing countries. But almost all the research and successful models we have are from well-funded, developed nations like the US or the UK.
Host: And those models don't just copy and paste into a different environment.
Expert: Exactly. A region with a tight budget, a shortage of specialized talent, and less mature infrastructure can't follow the Silicon Valley playbook. The study points out that Fintech startups already have a shockingly high failure rate—around 90% in their first six years. In a resource-scarce setting, that risk is even higher. So, policymakers and entrepreneurs in these areas were essentially flying blind.
Host: So how did the researchers approach this challenge? How did they figure out what a successful frugal model looks like?
Expert: They went directly to the source. They conducted a deep-dive case study of the Vizag Fintech Valley in India. This was a city that, despite significant financial constraints, managed to build a vibrant and successful Fintech hub. The researchers interviewed 26 key stakeholders—everyone from government regulators and university leaders to startup founders and investors—to piece together the story of exactly how they did it.
Host: It sounds like they got a 360-degree view. What were the key findings that came out of this investigation?
Expert: The main output is a practical guide they call the Frugal Fintech Ecosystem Development, or FFED, framework. It breaks the process down into three core stages: Structuring, Bundling, and Leveraging.
Host: Let's unpack that. What happens in the 'Structuring' stage?
Expert: Structuring is all about gathering the resources you have, not the ones you wish you had. In Vizag, this meant repurposing unused land for infrastructure and bringing in a leadership team that had already successfully built a tech hub in a nearby city. It’s about being resourceful from day one.
Host: Okay, so you've gathered your parts. What is 'Bundling'?
Expert: Bundling is where you combine those parts to create real capabilities. For example, Vizag’s leaders built partnerships between universities and companies to train a local, skilled workforce. They connected startups in incubation hubs so they could learn from each other. They were actively building the engine of the ecosystem.
Host: Which brings us to 'Leveraging'. I assume that's when the engine starts to run?
Expert: Precisely. Leveraging is using those capabilities to seize market opportunities and create value. A key part of this was a concept the study highlights called 'sandboxing'.
Host: Sandboxing? That sounds intriguing.
Expert: It's essentially creating a safe, controlled environment where Fintech firms can experiment with new technologies on a small scale. Regulators in Vizag allowed startups to test blockchain solutions for government services, for instance. This lets them prove their concept and work out the kinks without huge risk, which is critical when you can't afford big failures.
Host: That makes perfect sense. Alex, this is the most important question for our audience: Why does this matter for business? What are the practical takeaways?
Expert: This is a playbook for smart, sustainable growth. For policymakers in emerging economies, it shows you don't need a blank check to foster innovation. The focus should be on orchestrating resources—connecting academia with industry, creating mentorship networks, and enabling safe experimentation.
Host: And for entrepreneurs or investors?
Expert: For entrepreneurs, the message is that resourcefulness trumps resources. This study proves you can build a successful company outside of a major, well-funded hub by creatively using what's available locally. For investors, it's a clear signal to look for opportunities in these frugal ecosystems. Vizag attracted over 900 million dollars in investment in its first year. That shows that effective organization and a frugal mindset can generate returns just as impressive as those in well-funded regions. The study calls this 'equifinality'—the idea that you can reach the same successful outcome through a different, more frugal path.
Host: So, to sum it up: building a thriving tech hub on a budget isn't a fantasy. By following a clear framework of structuring, bundling, and leveraging resources, and by using clever tactics like sandboxing, regions can create their own success stories.
Expert: That's it exactly. It’s a powerful and optimistic model for global innovation.
Host: A fantastic insight. Thank you so much for your time and expertise, Alex.
Expert: My pleasure, Anna.
Host: And thanks to all our listeners for tuning into A.I.S. Insights. Join us next time as we continue to explore the ideas shaping business and technology.
Fintech Ecosystem, India, Frugal Innovation, Resource Orchestration, Case Study
Communications of the Association for Information Systems (2024)
Watch Out, You are Live! Toward Understanding the Impact of AI on Privacy of Employees
This study conducts a systematic literature review to comprehensively explore the implications of Artificial Intelligence (AI) on employee privacy. It utilizes the privacy calculus framework to analyze the trade-offs organizations and employees face when integrating AI technologies in the workplace. The research evaluates how different types of AI technologies compromise or safeguard privacy and discusses their varying impacts.
Problem
The rapid and pervasive adoption of AI in the workplace has enhanced efficiency but also raised significant concerns regarding employee privacy. There is a research gap in holistically understanding the broad implications of advancing AI technologies on employee privacy, as previous studies often focus on narrow applications without a comprehensive theoretical framework.
Outcome
- The integration of AI in the workplace presents a trade-off, offering benefits like objective performance evaluation while posing significant risks such as over-surveillance and erosion of trust. - The study categorizes AI into four advancing types (descriptive, predictive, prescriptive, and autonomous), each progressively increasing the complexity of privacy challenges and altering the employee privacy calculus. - As AI algorithms become more advanced and opaque, it becomes more difficult for employees to understand how their data is used, leading to feelings of powerlessness and potential resistance. - The paper identifies a significant lack of empirical research specifically on AI's impact on employee privacy, as opposed to the more widely studied area of consumer privacy. - To mitigate privacy risks, the study recommends practical strategies for organizations, including transparent communication about data practices, involving employees in AI system design, and implementing strong ethical AI frameworks.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a topic that’s becoming more relevant every day: the privacy of employees in an AI-driven workplace. We'll be discussing a fascinating study titled "Watch Out, You are Live! Toward Understanding the Impact of AI on Privacy of Employees".
Host: Here to unpack this for us is our analyst, Alex Ian Sutherland. Alex, welcome to the show.
Expert: Thanks for having me, Anna.
Host: To start, what is this study all about? What question were the researchers trying to answer?
Expert: At its core, this study explores the complex relationship between artificial intelligence and employee privacy. As companies integrate more AI, the researchers wanted to understand the trade-offs that both organizations and employees have to make, evaluating how different types of AI technologies can either compromise or, in some cases, safeguard our privacy at work.
Host: That sounds incredibly timely. So, what is the big, real-world problem that prompted this investigation?
Expert: The problem is that AI is being adopted in the workplace at a breathtaking pace. It's fantastic for efficiency, but it's also creating massive concerns about privacy. Think about it: AI can monitor everything from keystrokes to break times. The study points out that while there’s been a lot of focus on specific AI tools, there hasn't been a big-picture, holistic look at the overall impact on employees.
Host: Can you give us a concrete example of the kind of monitoring we're talking about?
Expert: Absolutely. The study mentions systems with names like "WorkSmart" or "Silent Watch" that provide employers with data on literally every keystroke an employee makes. Another example is AI that analyzes email response rates or time spent on websites. For employees, this can feel like constant, intrusive surveillance, leading to stress and a feeling of being watched all the time.
Host: That's a powerful image. So, how did the researchers go about studying such a broad and complex issue?
Expert: They conducted what’s called a systematic literature review. Essentially, they acted as detectives, compiling and analyzing dozens of existing studies on AI and employee privacy from the last two decades. By synthesizing all this information, they were able to build a comprehensive map of the current landscape, identify the key challenges, and point out where the research gaps are.
Host: And what did this synthesis reveal? What were the key findings?
Expert: There were several, but a few really stand out. First, the study confirms this idea of a "privacy calculus" — a constant trade-off. On one hand, AI can offer benefits like more objective and unbiased performance evaluations. But the cost is often over-surveillance and an erosion of trust between employees and management.
Host: So it's a double-edged sword. What else?
Expert: A crucial finding is that not all AI is created equal when it comes to privacy risks. The researchers categorize AI into four advancing types: descriptive, predictive, prescriptive, and autonomous. Each step up that ladder increases the complexity of the privacy challenges.
Host: Can you break that down for us? What’s the difference between, say, descriptive and prescriptive AI?
Expert: Of course. Descriptive AI looks at the past—it might track your sales calls to create a performance report. It describes what happened. Prescriptive AI, however, takes it a step further. It doesn’t just analyze data; it recommends or even takes action. The study cites a real-world example where an AI system automatically sends termination warnings to warehouse workers who don't meet productivity quotas, with no human intervention.
Host: Wow. That's a significant leap. It really highlights another one of the study's findings, which is that as these algorithms get more complex, they become harder for employees to understand.
Expert: Exactly. They become an opaque "black box." Employees don't know how their data is being used or why the AI is making certain decisions. This naturally leads to feelings of powerlessness and can cause them to resist the very technology that’s meant to improve efficiency.
Host: This all leads to the most important question for our listeners. Based on this study, what are the practical takeaways for business leaders? Why does this matter for them?
Expert: This is the critical part. The study offers clear, actionable strategies. The number one takeaway is the need for radical transparency. Businesses must communicate clearly about what data they are collecting, how the AI systems use it, and what the benefits are for everyone. Hiding it won't work.
Host: So, transparency is key. What else should leaders be doing?
Expert: They need to involve employees in the process. The study recommends a participatory approach to designing and implementing AI systems. When you include your team, you can address privacy concerns from the outset and build tools that feel supportive, not oppressive. This fosters a sense of ownership and trust.
Host: That makes perfect sense. Are there any other recommendations?
Expert: Yes, the final piece is to implement strong, ethical AI frameworks. This goes beyond just being legally compliant. It means building privacy and fairness into the DNA of your technology strategy. It’s about ensuring that the quest for efficiency doesn't come at the cost of your company's culture and your employees' well-being.
Host: So, to summarize: AI in the workplace presents a fundamental trade-off between efficiency and privacy. For business leaders, the path forward isn't to avoid AI, but to manage this trade-off proactively through transparency, employee involvement, and a strong ethical foundation.
Host: Alex, this has been incredibly insightful. Thank you for breaking down this complex topic for us today.
Expert: My pleasure, Anna. It's a vital conversation to be having.
Host: And to our listeners, thank you for joining us on A.I.S. Insights — powered by Living Knowledge. We’ll see you next time.
This study analyzes IBM's strategic dilemma with its Watson Health initiative, which aimed to monetize artificial intelligence for cancer detection and treatment recommendations. It explores whether IBM should continue its specialized focus on healthcare (a vertical strategy) or reposition Watson as a versatile, cross-industry AI platform (a horizontal strategy). The paper provides insights into the opportunities and challenges associated with unlocking the transformational power of AI in a business context.
Problem
Despite a multi-billion-dollar investment and initial promise, IBM's Watson Health struggled with profitability, model accuracy, and scalability. The AI's recommendations were not consistently reliable or generalizable across different patient populations and healthcare systems, leading to poor adoption. This created a critical strategic crossroads for IBM: whether to continue investing heavily in the specialized healthcare vertical or to pivot towards a more scalable, general-purpose AI platform to drive future growth.
Outcome
- Model Accuracy & Bias: Watson's performance was inconsistent, and its recommendations, trained primarily on US data, were not always applicable to international patient populations, revealing significant algorithmic bias.
- Lack of Explainability: The 'black box' nature of the AI made it difficult for clinicians to trust its recommendations, hindering adoption as they could not understand its reasoning process.
- Integration and Scaling Challenges: Integrating Watson into existing hospital workflows and electronic health records was costly and complex, creating significant barriers to widespread implementation.
- Strategic Dilemma: The challenges forced IBM to choose between continuing its high-investment vertical strategy in healthcare, pivoting to a more scalable horizontal cross-industry platform, or attempting a convergence of both approaches.
Host: Welcome to A.I.S. Insights, the podcast powered by Living Knowledge, where we translate complex research into actionable business strategy. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating study titled "IBM Watson Health Growth Strategy: Is Artificial Intelligence (AI) The Answer?" It analyzes one of the most high-profile corporate AI ventures in recent memory.
Host: This analysis explores the strategic dilemma IBM faced with Watson Health, its ambitious initiative to use AI for cancer detection and treatment. The core question: should IBM double down on this specialized healthcare focus, or pivot to a more versatile, cross-industry AI platform?
Host: With me to unpack this is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Glad to be here, Anna.
Host: So, Alex, IBM's Watson became famous for winning on the game show Jeopardy. The move into healthcare seemed like a noble and brilliant next step. What was the big problem they were trying to solve?
Expert: It was a massive problem. The amount of medical research and data is exploding. It's impossible for any single doctor to keep up with it all. IBM's vision was for Watson to ingest millions of research articles, clinical trial results, and patient records to help oncologists make better, more personalized treatment recommendations.
Host: A truly revolutionary idea. But the study suggests that despite billions of dollars in investment, the reality was quite different.
Expert: That's right. Watson Health struggled significantly with profitability and adoption. The AI's recommendations weren't as reliable or as useful as promised, which created a critical crossroads for IBM. They had to decide whether to keep pouring money into this very specific healthcare vertical or to change their entire strategy.
Host: How did the researchers in this study approach such a complex business case?
Expert: The study is a deep strategic analysis. It examines IBM's business model, its technology, and the market environment. The authors reviewed everything from internal strategy components and partnerships with major cancer centers to the specific technological hurdles Watson faced. It's essentially a case study on the immense challenges of monetizing a "moonshot" AI project.
Host: Let's get into those challenges. What were some of the key findings?
Expert: A major one was model accuracy and bias. The study highlights that Watson was primarily trained using patient data from one institution, Memorial Sloan Kettering Cancer Center in the US. This meant its recommendations didn't always translate well to different patient populations, especially internationally.
Host: So, an AI trained in New York might not be effective for a patient in Tokyo or Mumbai?
Expert: Precisely. This revealed a significant algorithmic bias. For example, one finding mentioned in the analysis showed a mismatch rate of over 27% between Watson's suggestions and the actual treatments given to cervical cancer patients in China. That's a critical failure when you're dealing with patient health.
Host: That naturally leads to the issue of trust. How did doctors react to this new tool?
Expert: That was the second major hurdle: a lack of explainability. Doctors called it the 'black box' problem. Watson would provide a ranked list of treatments, but it couldn't clearly articulate the reasoning behind its top choice. Clinicians need to understand the 'why' to trust a recommendation, and without that transparency, adoption stalled.
Host: And beyond trust, were there practical, on-the-ground problems?
Expert: Absolutely. The study points to massive integration and scaling challenges. Integrating Watson into a hospital's existing complex workflows and electronic health records was incredibly difficult and expensive. The partnership with MD Anderson Cancer Center, for instance, struggled because Watson couldn't properly interpret doctors' unstructured notes. It wasn't a simple plug-and-play solution.
Host: This is a powerful story. For our listeners—business leaders, strategists, tech professionals—what's the big takeaway? Why does the Watson Health story matter for them?
Expert: There are a few key lessons. First, it's a cautionary tale about managing hype. IBM positioned Watson as a revolution, but the technology wasn't there yet. This created a gap between promise and reality that damaged its credibility.
Host: So, under-promise and over-deliver, even with exciting new tech. What else?
Expert: The second lesson is that technology, no matter how powerful, is not a substitute for deep domain expertise. The nuances of medicine—patient preferences, local treatment availability, the context of a doctor's notes—were things Watson struggled with. You can't just apply an algorithm to a complex field and expect it to work without genuine, human-level understanding.
Host: And what about that core strategic dilemma the study focuses on—this idea of a vertical versus a horizontal strategy?
Expert: This is the most critical takeaway for any business investing in AI. IBM chose a vertical strategy—a deep, specialized solution for one industry. The study shows how incredibly high-risk and expensive that can be. The alternative is a horizontal strategy: building a general, flexible AI platform that other companies can adapt for their own needs. It's a less risky, more scalable approach, and it’s the path that competitors like Google and Amazon have largely taken.
Host: So, to wrap it up: IBM's Watson Health was a bold and ambitious vision to transform cancer care with AI.
Host: But this analysis shows its struggles were rooted in very real-world problems: data bias, the 'black box' issue of trust, and immense practical challenges with integration.
Host: For business leaders, the story is a masterclass in the risks of a highly specialized vertical AI strategy and a reminder that the most advanced technology is only as good as its understanding of the people and processes it's meant to serve.
Host: Alex, thank you so much for breaking down this complex topic for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights — powered by Living Knowledge. We'll see you next time.
Artificial Intelligence (AI), AI Strategy, Watson, Healthcare AI, Vertical AI, Horizontal AI, AI Ethics
Communications of the Association for Information Systems (2025)
Understanding the Implementation of Responsible Artificial Intelligence in Organizations: A Neo-Institutional Theory Perspective
David Horneber
This study conducts a literature review to understand why organizations struggle to effectively implement Responsible Artificial Intelligence (AI). Using a neo-institutional theory framework, the paper analyzes institutional pressures, common challenges, and the roles that AI practitioners play in either promoting or hindering the adoption of responsible AI practices.
Problem
Despite growing awareness of AI's ethical and social risks and the availability of responsible AI frameworks, many organizations fail to translate these principles into practice. This gap between stated policy and actual implementation means that the goals of making AI safe and ethical are often not met, creating significant risks for businesses and society while undermining trust.
Outcome
- A fundamental tension exists between the pressures to adopt Responsible AI (e.g., legal compliance, reputation) and inhibitors (e.g., market demand for functional AI, lack of accountability), leading to ineffective, symbolic implementation.
- Ineffectiveness often takes two forms: 'policy-practice decoupling' (policies are adopted for show but not implemented) and 'means-end decoupling' (practices are implemented but fail to achieve their intended ethical goals).
- AI practitioners play crucial roles as either 'institutional custodians' who resist change to preserve existing technical practices, or as 'institutional entrepreneurs' who champion the implementation of Responsible AI.
- The study concludes that a bottom-up approach by motivated practitioners is insufficient; effective implementation requires strong organizational support, clear structures, and proactive processes to bridge the gap between policy and successful outcomes.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge, where we translate complex research into actionable business intelligence. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating study titled "Understanding the Implementation of Responsible Artificial Intelligence in Organizations: A Neo-Institutional Theory Perspective."
Host: It explores why so many organizations seem to struggle with putting their responsible AI principles into actual practice, looking at the pressures, the challenges, and the key roles people play inside these companies.
Host: With me is our analyst, Alex Ian Sutherland, who has taken a deep dive into this study. Alex, welcome.
Expert: Great to be here, Anna.
Host: Alex, we hear a lot about AI ethics and all these new responsible AI frameworks. But this study suggests there's a massive gap between what companies *say* they'll do and what they *actually* do. What's the core problem here?
Expert: That's the central issue. The study finds that despite growing awareness of AI's risks, the principles often remain just that: principles on a webpage. This gap between policy and practice means the goals of making AI safe and ethical are not being met.
Expert: This creates huge risks, not just for society, but directly for the businesses themselves. It undermines customer trust and leaves them exposed to future legal and reputational damage.
Host: So how did the researchers approach such a complex organizational problem?
Expert: They conducted a comprehensive literature review, synthesizing the findings from dozens of real-world, empirical studies on the topic. Then, they analyzed this collective evidence through a specific lens called neo-institutional theory.
Host: That sounds a bit academic. Can you break that down for us?
Expert: Absolutely. In simple terms, it's a way of understanding how organizations respond to external pressures from society and regulators in order to appear legitimate. Sometimes, this means they adopt policies for show, even if their internal day-to-day work doesn't change.
Host: That makes sense. It's about looking the part. So, using that lens, what were the most significant findings from the study?
Expert: There were three that really stood out. First, there's a fundamental tension at play. On one side, you have pressures pushing for responsible AI, like legal compliance and protecting the company's reputation. On the other, you have inhibitors, like market demand for AI that just *works*, regardless of ethics, and a lack of real accountability.
Host: And this tension leads to problems?
Expert: Exactly. It leads to something the study calls 'decoupling'. The most common form is 'policy-practice decoupling'. This is when a company adopts a great-sounding ethics policy, but the engineering teams on the ground never actually implement it.
Expert: The second, more subtle form is 'means-end decoupling'. This is when teams *do* implement a practice, like a bias check, but it's done in a superficial way that doesn't actually achieve the ethical goal. It's essentially just ticking a box.
Host: So there's a disconnect. What was the second key finding?
Expert: It's about the people on the ground: the AI practitioners. The study found they fall into two distinct roles. They are either 'institutional custodians' or 'institutional entrepreneurs'.
Expert: 'Custodians' are those who resist change to protect existing practices. Think of a product manager who argues that ethical considerations slow down development and hurt performance. They maintain the status quo.
Expert: 'Entrepreneurs', on the other hand, are the champions. They are the ones who passionately advocate for responsible AI, often taking it on themselves without a formal mandate because they believe it's the right thing to do.
Host: Which leads us to the third point, which I imagine is that these champions can't do it alone?
Expert: Precisely. The study concludes that this bottom-up approach, relying on a few passionate individuals, is not enough. For responsible AI to be effective, it requires strong, top-down organizational support, clear structures, and proactive processes.
Host: This is the crucial part for our listeners. For a business leader, what are the practical takeaways here? Why does this matter?
Expert: First, leaders need to conduct an honest assessment. Are your responsible AI efforts real, or are they just symbolic? Creating a policy to look good, without giving your teams the time, resources, and authority to implement it, is setting them, and the company, up for failure.
Host: So it's about moving beyond lip service to avoid real business risk.
Expert: Exactly. Second, find and empower your 'institutional entrepreneurs'. The study shows these champions often face immense stress and burnout. So, formalize their roles. Give them authority, a budget, and a direct line to leadership. Don't let their goodwill be the only thing powering your ethics strategy.
Host: And the final takeaway?
Expert: Be proactive, not reactive. You can't bolt on ethics at the end. The study suggests building responsible AI structures that are both centralized and decentralized. A central team can provide resources and set standards, but you also need experts embedded *within* each development team to manage risks from the very beginning.
Host: That's incredibly clear. So, to summarize: there's a major gap between AI policy and practice, driven by competing business pressures. This results in actions that are often just for show.
Host: And while passionate employees can drive change from the bottom up, they will ultimately fail without sincere, structural support from leadership.
Host: Alex, thank you so much for breaking down this complex but incredibly important study for us.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning in to A.I.S. Insights, powered by Living Knowledge.
Artificial Intelligence, Responsible AI, AI Ethics, Organizations, Neo-Institutional Theory
Journal of the Association for Information Systems (2026)
Affordance-Based Pathway Model of Social Inclusion: A Case Study of Virtual Worlds and People With Lifelong Disability
Karen Stendal, Maung K. Sein, Devinder Thapa
This study explores how individuals with lifelong disabilities (PWLD) use virtual worlds, specifically Second Life, to achieve social inclusion. Using a qualitative approach with in-depth interviews and participant observation, the researchers analyzed how PWLD experience the platform's features. The goal was to develop a model explaining the process through which technology facilitates greater community participation and interpersonal connection for this marginalized group.
Problem
People with lifelong disabilities often face significant social isolation and exclusion due to physical, mental, or sensory impairments that hinder their full participation in society. This lack of social connection can negatively impact their psychological and emotional well-being. This research addresses the gap in understanding the specific mechanisms by which technology, like virtual worlds, can help this population move from isolation to inclusion.
Outcome
- Virtual worlds offer five key 'affordances' (action possibilities) that empower people with lifelong disabilities (PWLD).
- Three 'functional' affordances were identified: Communicability (interacting without barriers like hearing loss), Mobility (moving freely without physical limitations), and Personalizability (controlling one's digital appearance and whether to disclose a disability).
- These functional capabilities enable two 'social' affordances: Engageability (the ability to join in social activities) and Self-Actualizability (the ability to realize one's potential and help others).
- The study proposes an 'Affordance-Based Pathway Model' which shows how using these features helps PWLD build interpersonal relationships and participate in communities, leading to social inclusion.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I'm your host, Anna Ivy Summers, and with me today is our expert analyst, Alex Ian Sutherland.
Host: Alex, today we're diving into a fascinating study from the Journal of the Association for Information Systems titled, "Affordance-Based Pathway Model of Social Inclusion: A Case Study of Virtual Worlds and People With Lifelong Disability".
Host: In short, it explores how people with lifelong disabilities use virtual worlds, like the platform Second Life, to achieve social inclusion and build community.
Host: So, Alex, before we get into the virtual world, let's talk about the real world. What is the core problem this study is trying to address?
Expert: Anna, it addresses a significant challenge. People with lifelong disabilities often face profound social isolation. Physical, mental, or sensory barriers can prevent them from fully participating in society, which in turn impacts their psychological and emotional well-being.
Expert: While we know technology can help, there's been a gap in understanding the specific mechanisms, the 'how', by which technology can create a pathway from isolation to inclusion for this group.
Host: It sounds like a complex challenge to study. So how did the researchers approach this?
Expert: They took a very human-centered approach. They went directly into the virtual world of Second Life and conducted in-depth interviews and participant observations with 18 people with lifelong disabilities. This allowed them to understand the lived experiences of both new and experienced users.
Host: And what did they find? What is it about these virtual worlds that makes such a difference?
Expert: They discovered that the platform offers five key 'affordances', which is simply a term for the action possibilities the technology opens up for these users. They grouped them into two categories: functional and social.
Host: Okay, five key opportunities. Can you break down the first category, the functional ones, for us?
Expert: Absolutely. The first three are foundational. There's 'Communicability', the ability to interact without barriers. One participant with hearing loss noted that text chat made it easier to interact because they didn't need sign language.
Expert: Second is 'Mobility'. This is about moving freely without physical limitations. A participant who uses a wheelchair in real life shared this powerful thought: "In real life I can't dance; here I can dance with the stars."
Expert: The third is 'Personalizability'. This is the user's ability to control their digital appearance through an avatar, and importantly, to choose whether or not to disclose their disability. It puts them in control of their identity.
Host: So those three, Communicability, Mobility, and Personalizability, are the functional building blocks. How do they lead to actual social connection?
Expert: They directly enable the two 'social' affordances. The first is 'Engageability', the ability to actually join in social activities and be part of a group.
Expert: This then leads to the final and perhaps most profound affordance: 'Self-Actualizability'. This is the ability to realize one's potential and contribute to the well-being of others. For example, a retired teacher in the study found new purpose in helping new users get started on the platform.
Host: This is incredibly powerful on a human level. But Alex, this is a business and technology podcast. What are the practical takeaways here for business leaders?
Expert: This is where it gets very relevant. First, for any company building in the metaverse or developing collaborative digital platforms, this study is a roadmap for truly inclusive design. It shows that you need to intentionally design for features that enhance communication, freedom of movement, and user personalization.
Host: So it's a model for product development in these new digital spaces.
Expert: Exactly. And it also highlights an often-overlooked user base. Designing for inclusivity isn't just a social good; it opens up your product to a massive global market. Businesses can also apply these principles internally to create more inclusive remote work environments, ensuring employees with disabilities can fully participate in digital collaboration and company culture.
Host: That's a fantastic point about corporate applications. Is there anything else?
Expert: Yes, and this is a critical takeaway. The study emphasizes that technology alone is not a magic bullet. The users succeeded because of what the researchers call 'facilitating conditions': things like peer support, user training, and community helpers.
Expert: For businesses, the lesson is clear: you can't just launch a product. You need to build and foster the support ecosystem and the community around it to ensure users can truly unlock its value.
Host: Let's recap then. Virtual worlds can be a powerful tool for social inclusion by providing five key opportunities: three functional ones that enable two social ones.
Host: And for businesses, the key takeaways are to design intentionally for inclusivity, recognize this valuable user base, and remember to build the support system, not just the technology itself.
Host: Alex Ian Sutherland, thank you for breaking this down for us. It's a powerful reminder that technology is ultimately about people.
Host: And thank you to our audience for tuning into A.I.S. Insights — powered by Living Knowledge.
Social Inclusion, Virtual Worlds (VW), People With Lifelong Disability (PWLD), Affordances, Second Life, Assistive Technology, Qualitative Study