A Metaverse-Based Proof of Concept for Innovation in Distributed Teams

Rosemary Francisco, Sharon Geeling, Grant Oosterwyk, Carolyn Tauro, Gerard De Leoz
This study describes a proof of concept exploring how a metaverse environment can support more dynamic innovation in distributed teams. During a three-day immersive workshop, researchers found that avatar-based interaction, informal movement, and gamified facilitation enhanced engagement and ideation. The immersive environment enabled cross-location collaboration and unconventional idea sharing, though challenges like onboarding difficulties and platform limitations were also noted.

Problem: Distributed teams often struggle to recreate the creative energy and spontaneous collaboration found in co-located settings, which are critical for innovation. Traditional virtual tools like video conferencing platforms are often too structured, limiting the informal interactions, trust, and psychological safety necessary for effective brainstorming and knowledge sharing. This gap hinders the ability of remote and hybrid teams to generate novel, breakthrough ideas.

Outcome:
- Psychological safety was enhanced: The immersive setting lowered social pressure, encouraging participants to share unconventional ideas without fear of judgment.
- Creativity and engagement were enhanced: The spatial configuration of the metaverse fostered free movement and peripheral awareness of conversations, creating informal cues for knowledge exchange.
- Mixed teams improved group dynamics: Teams composed of employees from different locations produced more diverse and unexpected solutions compared to past site-specific workshops.
- Combining tools facilitated collaboration: Integrating the metaverse platform with a visual collaboration tool (Miro) compensated for feature limitations and supported both structured brainstorming and visual idea organization.
- Addressing barriers to adoption was important: Early technical onboarding reduced initial skepticism and enabled participants to engage confidently in the immersive environment.
- Facilitation was essential to sustain engagement: Innovation leaders acting as facilitators were crucial for guiding discussions, maintaining momentum, and ensuring inclusive participation.
Keywords: metaverse, distributed teams, virtual collaboration, innovation, psychological safety, proof of concept, immersive environments
Possible, Probable and Preferable Futures for Integrating Artificial Intelligence into Talent Acquisition

Laura Bayor, Christoph Weinert, Tina Ilek, Christian Maier, Tim Weitzel
This study explores the integration of Artificial Intelligence (AI) into the talent acquisition (TA) process to guide organizations toward a better future of work. Using a Delphi study with C-level TA experts, the research identifies, evaluates, and categorizes AI opportunities and challenges into possible, probable, and preferable futures, offering actionable recommendations.

Problem: Acquiring skilled employees is a major challenge for businesses, and traditional talent acquisition processes are often labor-intensive and inefficient. While AI offers a solution, many organizations are uncertain about how to effectively integrate it, facing the risk of falling behind competitors if they fail to adopt the right strategies.

Outcome:
- The study identifies three primary business goals for integrating AI into talent acquisition: finding the best-fit candidates, making HR tasks more efficient, and attracting new applicants.
- Key preferable AI opportunities include automated interview scheduling, AI-assisted applicant ranking, identifying and reaching out to passive candidates ('cold talent'), and optimizing job posting content for better reach and diversity.
- Significant challenges that organizations must mitigate include data privacy and security issues, employee and stakeholder distrust of AI, technical integration hurdles, potential for bias in AI systems, and ethical concerns.
- The paper recommends immediate actions such as implementing AI recommendation agents and chatbots, and future actions like standardizing internal data, ensuring AI transparency, and establishing clear lines of accountability for AI-driven hiring decisions.
Keywords: Artificial Intelligence, Talent Acquisition, Human Resources, Recruitment, Delphi Study, Future of Work, Strategic HR Management
Trust Me, I'm a Tax Advisor: Influencing Factors for Adopting Generative AI Assistants in Tax Law

Ben Möllmann, Leonardo Banh, Jan Laufer, and Gero Strobel
This study explores the critical role of user trust in the adoption of Generative AI assistants within the specialized domain of tax law. Employing a mixed-methods approach, researchers conducted quantitative questionnaires and qualitative interviews with legal experts using two different AI prototypes. The goal was to identify which design factors are most effective at building trust and encouraging use.

Problem: While Generative AI can assist in fields like tax law that require up-to-date research, its adoption is hindered by issues like lack of transparency, potential for bias, and inaccurate outputs (hallucinations). These problems undermine user trust, which is essential for collaboration in high-stakes professional settings where accuracy is paramount.

Outcome:
- Transparency, such as providing clear source citations, was a key factor in building user trust.
- Human-like features (anthropomorphism), like a conversational greeting and layout, positively influenced user perception and trust.
- Compliance with social and ethical norms, including being upfront about the AI's limitations, was also found to enhance trustworthiness.
- Higher trust in the AI assistant directly increased professionals' intention to use the tool in their work.
Keywords: Generative Artificial Intelligence, Human-GenAI Collaboration, Trust, GenAI Adoption
The Double-Edged Sword: Empowerment and Risks of Platform-Based Work for Women

Tatjana Hödl and Irina Boboschko
This conceptual paper explores how platform-based work, which offers flexible arrangements, can empower women, particularly those with caregiving responsibilities. Using case examples like mum bloggers, OnlyFans creators, and crowd workers, the study examines both the benefits and the inherent risks of this type of employment, highlighting its dual nature.

Problem: Traditional employment structures are often too rigid for women, who disproportionately handle unpaid caregiving and domestic tasks, creating significant barriers to career advancement and financial independence. While platform-based work presents a flexible alternative, it is crucial to understand whether this model truly empowers women or introduces new forms of precariousness that reinforce existing gender inequalities.

Outcome:
- Platform-based work empowers women by offering financial independence, skill development, and the flexibility to manage caregiving responsibilities.
- This form of work is a 'double-edged sword,' as the benefits are accompanied by significant risks, including job insecurity, lack of social protections, and unpredictable income.
- Women in platform-based work face substantial mental health risks from online harassment and financial instability due to reliance on opaque platform algorithms and online reputations.
- Rather than dismantling unequal power structures, platform-based work can reinforce traditional gender roles, confine women to the domestic sphere, and perpetuate financial dependency.
Keywords: Women, platform-based work, empowerment, risks, gig economy, digital labor, gender inequality
Discerning Truth: A Qualitative Comparative Analysis of Reliance on AI Advice in Deepfake Detection

Christiane Ernst
This study investigates how individuals rely on AI advice when trying to detect deepfake videos. Using a judge-advisor system, participants first made their own judgment about a video's authenticity and then were shown an AI tool's evaluation, after which they could revise their decision. The research used Qualitative Comparative Analysis to explore how factors like AI literacy, trust, and algorithm aversion influence the decision to rely on the AI's advice.
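
To make the method concrete: QCA asks which combinations (configurations) of conditions are consistently linked to an outcome. Below is a minimal sketch of the two core crisp-set QCA measures, consistency and coverage, on hypothetical data; the condition names (high_aversion, low_trust, high_literacy) and the toy cases are illustrative assumptions, not the study's actual coding.

```python
# Crisp-set QCA measures on hypothetical judge-advisor data.
def consistency(cases, config, outcome):
    """Share of cases matching the configuration that also show the outcome."""
    matching = [c for c in cases if all(c[k] == v for k, v in config.items())]
    return sum(c[outcome] for c in matching) / len(matching) if matching else None

def coverage(cases, config, outcome):
    """Share of outcome cases that the configuration accounts for."""
    with_outcome = [c for c in cases if c[outcome]]
    hits = [c for c in with_outcome if all(c[k] == v for k, v in config.items())]
    return len(hits) / len(with_outcome) if with_outcome else None

# Hypothetical participants: 1 = condition/outcome present, 0 = absent.
cases = [
    {"high_aversion": 1, "low_trust": 0, "high_literacy": 0, "relied": 1},
    {"high_aversion": 0, "low_trust": 1, "high_literacy": 0, "relied": 1},
    {"high_aversion": 0, "low_trust": 0, "high_literacy": 1, "relied": 1},
    {"high_aversion": 0, "low_trust": 0, "high_literacy": 0, "relied": 0},
]

config = {"high_aversion": 1}
print(consistency(cases, config, "relied"))  # 1.0 on this toy data
print(coverage(cases, config, "relied"))     # 0.33: one of three reliant cases
```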

Problem: Recent advancements in AI have led to the creation of hyper-realistic deepfakes, making it increasingly difficult for people to distinguish between real and manipulated media. This poses serious threats, including the rapid spread of misinformation, reputational damage, and the potential destabilization of political systems. There is a need to understand how humans interact with AI detection tools to build more effective countermeasures.

Outcome:
- A key finding is that participants changed their initial decision only when the AI tool indicated that a video was genuine, not when it flagged a deepfake.
- This suggests users are more likely to use AI tools to confirm authenticity rather than to reliably detect manipulation, raising concerns about unreflective acceptance of AI advice.
- Reliance on the AI's advice that a video was genuine was driven by specific combinations of factors, occurring when individuals had high aversion to algorithms, low trust, or high AI literacy.
Keywords: Deepfake, Reliance on AI Advice, Qualitative Comparative Analysis (QCA), Human-AI Collaboration
AI-Powered Teams: How the Usage of Generative AI Tools Enhances Knowledge Transfer and Knowledge Application in Knowledge-Intensive Teams

Olivia Bruhin, Luc Bumann, Philipp Ebel
This study investigates the role of Generative AI (GenAI) tools, such as ChatGPT and GitHub Copilot, in software development teams. Through an empirical study with 80 software developers, the research examines how GenAI usage influences key knowledge management processes—knowledge transfer and application—and the subsequent effect on team performance.

Problem: While the individual productivity gains from GenAI tools are increasingly recognized, their broader impact on team-level knowledge management and performance remains poorly understood. This gap poses a risk for businesses, as adopting these technologies without understanding their collaborative effects could lead to unintended consequences like reduced knowledge retention or impaired team dynamics.

Outcome:
- The use of Generative AI (GenAI) tools significantly enhances both knowledge transfer (sharing) and knowledge application within software development teams.
- GenAI usage has a direct positive impact on overall team performance.
- The performance improvement is primarily driven by the team's improved ability to apply knowledge, rather than by knowledge transfer alone.
- The findings highlight GenAI's role as a catalyst for innovation, but stress that knowledge gained via AI must be actively and contextually applied to boost team performance effectively.
Keywords: Human-AI Collaboration, AI in Knowledge Work, Collaboration, Generative AI, Software Development, Team Performance, Knowledge Management
Metrics for Digital Group Workspaces: A Replication Study

Petra Schubert and Martin Just
This study replicates a 2014 paper by Jeners and Prinz to test if their metrics for analyzing user activity in digital workspaces are still valid and generalizable. Using data from a modern academic collaboration system, the researchers re-applied metrics like activity, productivity, and cooperativity, and developed an analytical dashboard to visualize the findings.
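
To give a flavor of such digital-trace analysis, here is a minimal sketch that checks the Pareto-style concentration of workspace activity reported in the outcomes below; the event-log format is an assumption for illustration, and the paper's actual metric definitions (activity, productivity, cooperativity) are richer than this.

```python
# Minimal sketch: how concentrated is activity among a workspace's users?
from collections import Counter
from math import ceil

# Hypothetical digital traces: (user, action) events from one workspace.
events = [
    ("alice", "upload"), ("alice", "edit"), ("alice", "comment"),
    ("alice", "edit"), ("bob", "read"), ("bob", "comment"),
    ("carol", "read"), ("dave", "read"), ("erin", "read"),
]

activity = Counter(user for user, _ in events)  # events per user
ranked = activity.most_common()                 # most active users first
top_n = ceil(0.2 * len(ranked))                 # the top 20% of users
top_share = sum(n for _, n in ranked[:top_n]) / len(events)

print(f"Top {top_n} of {len(ranked)} users produced {top_share:.0%} of activity")
```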

Problem: With the rise of remote and hybrid work, digital collaboration tools are more important than ever. However, these tools generate vast amounts of user activity data ('digital traces') but offer little support for analyzing it, leaving managers without a clear understanding of how teams are collaborating and using these digital spaces.

Outcome:
- The original metrics for measuring activity, productivity, and cooperativity in digital workspaces were confirmed to be effective and applicable to modern collaboration software.
- The study confirmed that a small percentage of users (around 20%) typically account for the majority of activity (around 80%) in project and organizational workspaces, following a Pareto distribution.
- The researchers extended the original method by incorporating Collaborative Work Codes (CWC), which provide a more detailed and nuanced way to identify different types of work happening in a space (e.g., retrieving information vs. discussion).
- Combining time-based activity profiles with these new work codes proved to be a robust method for accurately identifying and profiling different types of workspaces, such as projects, organizational units, and teaching courses.
Keywords: Collaboration Analytics, Enterprise Collaboration Systems, Group Workspaces, Digital Traces, Replication Study
Revisiting the Responsibility Gap in Human-AI Collaboration from an Affective Agency Perspective

Jonas Rieskamp, Annika Küster, Bünyamin Kalyoncuoglu, Paulina Frieda Saffer, and Milad Mirbabaie
This study investigates how responsibility is understood and assigned when artificial intelligence (AI) systems influence decision-making processes. Using qualitative interviews with experts across various sectors, the research explores how human oversight and emotional engagement (affective agency) shape accountability in human-AI collaboration.

Problem: As AI systems become more autonomous in fields from healthcare to finance, a 'responsibility gap' emerges. It becomes difficult to assign accountability for errors or outcomes, as responsibility is diffused among developers, users, and the AI itself, challenging traditional models of liability.

Outcome:
- Using AI does not diminish human responsibility; instead, it often intensifies it, requiring users to critically evaluate and validate AI outputs.
- Most professionals view AI as a supportive tool or 'sparring partner' rather than an autonomous decision-maker, maintaining that humans must have the final authority.
- The uncertainty surrounding how AI works encourages users to be more cautious and critical, which helps bridge the responsibility gap rather than leading to blind trust.
- Responsibility remains anchored in human oversight, with users feeling accountable not only for the final decision but also for how the AI was used to reach it.
Keywords: Artificial Intelligence (AI), Responsibility Gap, Responsibility in Human-AI Collaboration, Decision-Making, Sociomateriality, Affective Agency
Bridging Mind and Matter: A Taxonomy of Embodied Generative AI

Jan Laufer, Leonardo Banh, Gero Strobel
This study develops a comprehensive classification system, or taxonomy, for Embodied Generative AI—AI that can perceive, reason, and act in physical systems like robots. The taxonomy was created through a systematic literature review and an analysis of 40 real-world examples of this technology. The resulting framework provides a structured way to understand and categorize the various dimensions of AI integrated into physical forms.

Problem: As Generative AI (GenAI) moves from digital content creation to controlling physical agents, there has been a lack of systematic classification and evaluation methods. While many studies focus on specific applications, a clear framework for understanding the core characteristics and capabilities of these embodied AI systems has been missing. This gap makes it difficult for researchers and practitioners to compare, analyze, and optimize emerging applications in fields like robotics and automation.

Outcome:
- The study created a detailed taxonomy for Embodied Generative AI to systematically classify its characteristics.
- This taxonomy is structured into three main categories (meta-characteristics): Embodiment, Intelligence, and System.
- It further breaks down these categories into 16 dimensions and 50 specific characteristics, providing a comprehensive framework for analysis.
- The framework serves as a foundational tool for future research and helps businesses and developers make informed decisions when designing or implementing embodied AI systems in areas like service robotics and industrial automation.
Keywords: Generative Artificial Intelligence, Embodied AI, Autonomous Agents, Human-GenAI Collaboration
Understanding How Freelancers in the Design Domain Collaborate with Generative Artificial Intelligence

Fabian Helms, Lisa Gussek, and Manuel Wiesche
This study explores how generative AI (GenAI), specifically text-to-image generation (TTIG) systems, impacts the creative work of freelance designers. Through qualitative interviews with 10 designers, the researchers conducted a thematic analysis to understand the nuances of this new form of human-AI collaboration.

Problem: While the impact of GenAI on creative fields is widely discussed, there is little specific research on how it affects freelance designers. This group is uniquely vulnerable to technological disruption due to their direct market exposure and lack of institutional support, creating an urgent need to understand how these tools are changing their work processes and job security.

Outcome:
- The research identified four key tradeoffs freelancers face when using GenAI: creativity can be enhanced (inspiration) but also risks becoming generic (standardization).
- Efficiency is increased, but this can be undermined by 'overprecision', a form of perfectionism where too much time is spent on minor AI-driven adjustments.
- The interaction with AI is viewed dually: either as a helpful 'sparring partner' for ideas or as an unpredictable tool causing a frustrating lack of control.
- For the future of work, GenAI is seen as forcing a job transition in which designers must adopt new skills, while also posing a direct threat of job loss, particularly for junior roles.
Keywords: Generative Artificial Intelligence, Online Freelancing, Human-AI collaboration, Freelance designers, Text-to-image generation, Creative process
The Impact of Digital Platform Acquisition on Firm Value: Does Buying Really Help?

Yongli Huang, Maximilian Schreieck, Alexander Kupfer
This study examines investor reactions to corporate announcements of digital platform acquisitions to understand their impact on firm value. Using an event study methodology on a global sample of 157 firms, the research analyzes how the stock market responds based on the acquisition's motivation (innovation-focused vs. efficiency-focused) and the target platform's maturity.
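
For readers unfamiliar with the method, the sketch below shows the standard market-model event-study calculation of abnormal returns around an announcement; the return figures are made up, and the study's actual sample, estimation windows, and model choices may differ.

```python
# Market-model event study: abnormal return = actual minus expected return.
import numpy as np

# Daily returns for the acquiring firm and a market index (hypothetical).
firm_est   = np.array([0.002, -0.001, 0.004, 0.000, -0.002, 0.003])  # estimation window
market_est = np.array([0.001, -0.002, 0.003, 0.001, -0.001, 0.002])
firm_evt   = np.array([-0.015, -0.008, 0.002])   # event window, e.g. days -1, 0, +1
market_evt = np.array([0.001, 0.000, -0.001])

# Fit the market model R_firm = alpha + beta * R_market by least squares.
beta, alpha = np.polyfit(market_est, firm_est, 1)

# Abnormal returns over the event window and their cumulative sum (CAR).
abnormal = firm_evt - (alpha + beta * market_evt)
car = abnormal.sum()

print(f"alpha={alpha:.5f}  beta={beta:.3f}  CAR={car:.4f}")  # negative CAR = adverse reaction
```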

Problem: While acquiring digital platforms is an increasingly popular corporate growth strategy, little is known about its actual effectiveness and financial impact. Companies and investors lack clear guidance on which types of platform acquisitions are most likely to create value, leading to uncertainty and potentially poor strategic decisions.

Outcome:
- Generally, the announcement of a digital platform acquisition leads to a negative stock market return, indicating investor concerns about integration risks and high costs.
- Acquisitions motivated by 'exploration' (innovation and new opportunities) face a less negative market reaction than those motivated by 'exploitation' (efficiency and optimization).
- Acquiring mature platforms with established user bases mitigates negative stock returns more effectively than acquiring nascent (new) platforms.
Keywords: Digital Platform Acquisition, Event Study, Exploration vs. Exploitation, Mature vs. Nascent, Chicken-and-Egg Problem
The GenAI Who Knew Too Little – Revisiting Transactive Memory Systems in Human GenAI Collaboration

Christian Meske, Tobias Hermanns, Florian Brachten
This study investigates how traditional models of team collaboration, known as Transactive Memory Systems (TMS), manifest when humans work with Generative AI. Through in-depth interviews with 14 knowledge workers, the research analyzes the unique dynamics of expertise recognition, trust, and coordination that emerge in these partnerships.

Problem: While Generative AI is increasingly used as a collaborative tool, our understanding of teamwork is based on human-to-human interaction. This creates a knowledge gap, as the established theories do not account for an AI partner that operates on algorithms rather than social cues, potentially leading to inefficient and frustrating collaborations.

Outcome:
- Human-AI collaboration is asymmetrical: Humans learn the AI's capabilities, but the AI fails to recognize and remember human expertise beyond a single conversation.
- Trust in GenAI is ambivalent and requires verification: Users simultaneously see the AI as an expert yet doubt its reliability, forcing them to constantly verify its outputs, a step not typically taken with trusted human colleagues.
- Teamwork is hierarchical, not mutual: Humans must always take the lead and direct a passive AI that lacks initiative, creating a 'boss-employee' dynamic rather than a reciprocal partnership where both parties contribute ideas.
Keywords: Generative AI, Transactive Memory Systems, Human-AI Collaboration, Knowledge Work, Trust in AI, Expertise Recognition, Coordination
The Role of Generative AI in P2P Rental Platforms: Investigating the Effects of Timing and Interactivity on User Reliance in Content (Co-)Creation Processes

Niko Spatscheck, Myriam Schaschek, Christoph Tomitza, and Axel Winkelmann
This study investigates how Generative AI can best assist users on peer-to-peer (P2P) rental platforms like Airbnb in writing property listings. Through an experiment with 244 participants, the researchers tested how the timing of when AI suggestions are offered and the level of interactivity (automatic vs. user-prompted) influence how much a user relies on the AI.

Problem: While Generative AI offers a powerful way to help property hosts create compelling listings, platforms don't know the most effective way to implement these tools. It's unclear if AI assistance is more impactful at the beginning or end of the writing process, or if users prefer to actively ask for help versus receiving it automatically. This study addresses this knowledge gap to provide guidance for designing better AI co-writing assistants.

Outcome:
- Offering AI suggestions earlier in the writing process significantly increases how much users rely on them.
- Allowing users to actively prompt the AI for assistance leads to a slightly higher reliance compared to receiving suggestions automatically.
- Higher cognitive load (mental effort) reduces a user's reliance on AI-generated suggestions.
- For businesses like Airbnb, these findings suggest that AI writing tools should be designed to engage users at the very beginning of the content creation process to maximize their adoption and impact.
Keywords: Human-genAI collaboration, Co-writing, P2P rental platforms, Reliance, Generative AI, Cognitive Load
A Framework for Context-Specific Theorizing on Trust and Reliance in Collaborative Human-AI Decision-Making Environments

Niko Spatscheck
This study analyzes 59 empirical research papers to understand why findings on human trust in AI have been inconsistent. It synthesizes this research into a single framework that identifies the key factors influencing how people decide to trust and rely on AI systems for decision-making. The goal is to provide a more unified and context-aware understanding of the complex relationship between humans and AI.

Problem: Effective collaboration between humans and AI is often hindered because people either trust AI too much (overreliance) or too little (underreliance), leading to poor outcomes. Existing research offers conflicting explanations for this behavior, creating a knowledge gap for developers and organizations. This study addresses the problem that prior research has largely ignored the specific context—such as the user's expertise, the AI's design, and the nature of the task—which is crucial for explaining these inconsistencies.

Outcome:
- The study created a comprehensive framework that categorizes the factors influencing trust and reliance on AI into three main groups: human-related (e.g., user expertise, cognitive biases), AI-related (e.g., performance, explainability), and decision-related (e.g., risk, complexity).
- It concludes that trust is not static but is dynamically shaped by the interaction of these various contextual factors.
- This framework provides a practical tool for researchers and businesses to better predict how users will interact with AI and to design systems that foster appropriate levels of trust, leading to better collaborative performance.
Keywords: AI Systems, Trust, Reliance, Collaborative Decision-Making, Human-AI Collaboration, Contextual Factors, Conceptual Framework
Successfully Organizing AI Innovation Through Collaboration with Startups

Jana Oehmichen, Alexander Schult, John Qi Dong
This study examines how established firms can successfully partner with Artificial Intelligence (AI) startups to foster innovation. Based on an in-depth analysis of six real-world AI implementation projects across two startups, the research identifies five key challenges and provides corresponding recommendations for navigating these collaborations effectively.

Problem: Established companies often lack the specialized expertise needed to leverage AI technologies, leading them to partner with startups. However, these collaborations introduce unique difficulties, such as assessing a startup's true capabilities, identifying high-impact AI applications, aligning commercial interests, and managing organizational change, which can derail innovation efforts.

Outcome:
- Challenge 1: Finding the right AI startup. Firms should overcome the inscrutability of AI startups by assessing credible quality signals, such as investor backing, academic achievements of staff, and success in prior contests, rather than relying solely on product demos.
- Challenge 2: Identifying the right AI use case. Instead of focusing on data availability, companies should collaborate with startups in workshops to identify use cases with the highest potential for value creation and business impact.
- Challenge 3: Agreeing on commercial terms. To align incentives and reduce information asymmetry, contracts should include performance-based or usage-based compensation, linking the startup's payment to the value generated by the AI solution.
- Challenge 4: Considering the impact on people. Firms must manage user acceptance by carefully selecting the degree of AI autonomy, involving employees in the design process, and clarifying the startup's role to mitigate fears of job displacement.
- Challenge 5: Overcoming implementation roadblocks. Depending on the company's organizational maturity, it should either facilitate deep collaboration between the startup and all internal stakeholders or use the startup to build new systems that bypass internal roadblocks entirely.
Keywords: Artificial Intelligence, AI Innovation, Corporate-startup collaboration, Open Innovation, Digital Transformation, AI Startups
Managing Where Employees Work in a Post-Pandemic World

Molly Wasko, Alissa Dickey
This study examines how a large manufacturing company navigated the challenges of remote and hybrid work following the COVID-19 pandemic. Through an 18-month case study, the research explores the impacts on different employee groups (virtual, hybrid, and on-site) and provides recommendations for managing a blended workforce. The goal is to help organizations, particularly those with significant physical operations, balance new employee expectations with business needs.

Problem: The widespread shift to remote work during the pandemic created a major challenge for businesses deciding on their long-term workplace strategy. Companies are grappling with whether to mandate a full return to the office, go fully remote, or adopt a hybrid model. This problem is especially complex for industries like manufacturing that rely on physical operations and cannot fully digitize their entire workforce.

Outcome:
- Employees successfully adapted information and communication technology (ICT) to perform many tasks remotely, effectively separating their work from a physical location.
- Contrary to expectations, on-site workers who remained at the physical workplace throughout the pandemic reported feeling the most isolated, least valued, and most dissatisfied.
- Despite demonstrated high productivity and employee desire for flexibility, business leaders still strongly prefer having employees co-located in the office, believing it is crucial for building and maintaining the company's core values.
- A 'Digital-Physical Intensity' framework was developed to help organizations classify jobs and make objective decisions about which roles are best suited for on-site, hybrid, or virtual work.
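
The summary above does not spell out the framework's scoring rules, so the following is only a hypothetical sketch of how a digital-physical intensity classification could be operationalized; the dimensions, thresholds, and role scores are illustrative assumptions, not the published framework.

```python
# Hypothetical operationalization of a digital-physical intensity matrix.
def work_mode(digital: float, physical: float, threshold: float = 0.5) -> str:
    """Map 0-1 intensity scores to a suggested work arrangement (assumed rules)."""
    if physical >= threshold:
        return "hybrid" if digital >= threshold else "on-site"
    return "virtual" if digital >= threshold else "needs review"

# Illustrative role scores: (digital intensity, physical intensity).
roles = {
    "machine operator":  (0.2, 0.9),
    "plant engineer":    (0.7, 0.6),
    "financial analyst": (0.9, 0.1),
}

for role, (d, p) in roles.items():
    print(f"{role}: {work_mode(d, p)}")  # on-site / hybrid / virtual
```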
Keywords: remote work, hybrid work, post-pandemic workplace, blended workforce, employee experience, digital transformation, organizational culture
Fueling Digital Transformation with Citizen Developers and Low-Code Development

Ainara Novales and Rubén Mancha
This study examines how organizations can leverage low-code development platforms and citizen developers (non-technical employees) to accelerate digital transformation. Through in-depth case studies of two early adopters, Hortilux and Volvo Group, along with interviews from seven other firms, the paper identifies key strategies and challenges. The research provides five actionable recommendations for business leaders to successfully implement low-code initiatives.

Problem: Many organizations struggle to keep pace with digital innovation due to a persistent shortage and high cost of professional software developers. This creates a significant bottleneck in application development, slowing down responsiveness to customer needs and hindering digital transformation goals. The study addresses how to overcome this resource gap by empowering business users to create their own software solutions.

Outcome:
- Set a clear strategy for selecting the right use cases for low-code development, starting with simple, low-complexity tasks like process automation.
- Identify, assign, and provide training to upskill tech-savvy employees into citizen developers, ensuring they have the support and guidance needed.
- Establish a dedicated low-code team or department to provide organization-wide support, training, and governance for citizen development initiatives.
- Ensure the low-code architecture is extendable, reusable, and up-to-date to avoid creating complex, siloed applications that are difficult to maintain.
- Evaluate the technical requirements and constraints of different solutions to select the low-code platform that best fits the organization's specific needs.
Keywords: low-code development, citizen developers, digital transformation, IT strategy, application development, software development bottleneck, case study
Promoting Cybersecurity Information Sharing Across the Extended Value Chain

Olga Biedova, Lakshmi Goel, Justin Zhang, Steven A. Williamson, Blake Ives
This study analyzes an alternative cybersecurity information-sharing forum centered on the extended value chain of a single company in the forest and paper products industry. The paper explores the forum's design, execution, and challenges to provide recommendations for similar company-specific collaborations. The goal is to enhance cybersecurity resilience across interconnected business partners by fostering a more trusting and relevant environment for sharing best practices.

Problem: As cyberthreats become more complex, industries with interconnected information and operational technologies (IT/OT) face significant vulnerabilities. Despite government and industry calls for greater collaboration, inter-organizational cybersecurity information sharing remains sporadic due to concerns over confidentiality, competitiveness, and lack of trust. Standard sector-based sharing initiatives can also be too broad to address the specific needs of a company and its unique value chain partners.

Outcome:
- A company-led, value-chain-specific cybersecurity forum is an effective alternative to broader industry groups, fostering greater trust and more relevant discussions among business partners.
- Key success factors for such a forum include inviting the right participants (security strategy leaders), establishing clear ground rules to encourage open dialogue, and using external facilitators to ensure neutrality.
- The forum successfully shifted the culture from one of distrust to one of transparency and collaboration, leading participants to be more open about sharing experiences, including previous security breaches.
- Participants gained valuable insights into the security maturity of their partners, leading to tangible improvements in cybersecurity practices, such as updating security playbooks, adopting new risk metrics, and enhancing third-party risk management.
- The collaborative model strengthens the entire value chain, as companies learn from each other's strategies, tools, and policies to collectively improve their defense against common threats.
Keywords: cybersecurity, information sharing, extended value chain, supply chain security, cyber resilience, forest products industry, inter-organizational collaboration