This study investigates how to best measure IT competency on corporate boards of directors. Using a survey of 75 directors in Sri Lanka, the research compares the effectiveness of indirect 'proxy' measures (like prior work experience) against 'direct' measures (assessing specific IT knowledge and governance practices) in reflecting true board IT competency and its impact on IT governance.
Problem
Many companies struggle with poor IT governance, which is often blamed on a lack of IT competency at the board level. However, there is no clear consensus on what constitutes board IT competency or how to measure it effectively. Previous research has relied on various proxy measures, leading to inconsistent findings and uncertainty about how boards can genuinely improve their IT oversight.
Outcome
- Direct measures of IT competency are more accurate and reliable indicators than indirect proxy measures.
- Boards with higher directly-measured IT competency demonstrate stronger IT governance.
- Among proxy measures, having directors with work experience in IT roles or management is more strongly associated with good IT governance than having directors with formal IT training.
- The study validates a direct measurement approach that boards can use to assess their competency gaps and take targeted steps to improve their IT governance capabilities.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business, technology, and Living Knowledge. I’m your host, Anna Ivy Summers.
Host: In a world driven by digital transformation, a company's success often hinges on its technology strategy. But who oversees that strategy at the highest level? The board of directors. Today, we’re unpacking a fascinating study from the Communications of the Association for Information Systems titled, "Unpacking Board-Level IT Competency."
Host: It investigates a critical question: how do we actually measure IT competency on a corporate board? Is it enough to have a former CIO on the team, or is there a better way? Here to guide us is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: So Alex, let's start with the big picture. What is the real-world problem this study is trying to solve?
Expert: The problem is that many companies have surprisingly poor IT governance. We see the consequences everywhere—data breaches, failed digital projects, and missed opportunities. Often, the blame is pointed at the board for not having enough IT savvy.
Host: But "IT savvy" sounds a bit vague. How have companies traditionally tried to measure this?
Expert: Exactly. That's the core issue. For years, research and board recruitment have relied on what this study calls 'proxy' measures. Think of it as looking at a resume: does a director have a computer science degree? Did they once work in an IT role? The problem is, these proxies have led to inconsistent and often contradictory findings about what actually improves IT oversight.
Host: It sounds like looking at a resume isn't telling the whole story. So, how did the researchers approach this differently?
Expert: They took a more direct route. They surveyed 75 board directors in Sri Lanka and compared those traditional proxy measures with 'direct' measures. Instead of just asking *if* a director had IT experience, they asked questions to gauge the board's *actual* collective knowledge and practices.
Host: What do you mean by direct measures? Can you give an example?
Expert: Certainly. A direct measure would assess the board's knowledge of the company’s specific IT risks, its IT budget, and its overall IT strategy. It also looks at governance mechanisms—things like, is IT a regular item on the meeting agenda? Does the board get independent assurance on cybersecurity risks? It measures what the board actively knows and does, not just what’s on paper.
Host: That makes perfect sense. So, when they compared the two approaches—the resume proxies versus the direct assessment—what were the key findings?
Expert: The results were quite clear. First, the direct measures of IT competency were found to be far more accurate and reliable indicators of a board's capability than any of the proxy measures.
Host: And did that capability translate into better performance?
Expert: It did. The second key finding was that boards with higher *directly-measured* IT competency demonstrated significantly stronger IT governance. This creates a clear link: a board that truly understands and engages with technology governs it more effectively.
Host: What about those traditional proxy measures? Were any of them useful at all?
Expert: That was another interesting finding. When they looked only at the proxies, having directors with practical work experience in IT management was a much better predictor of good governance than just having directors with a formal IT degree. Hands-on experience seems to matter more than academic training from years ago.
Host: Alex, this is the most important question for our listeners. What does this all mean for business leaders? What are the key takeaways?
Expert: I think there are three critical takeaways. First, stop just 'checking the box'. Appointing a director who had a tech role a decade ago might look good, but it's not a silver bullet. You need to assess the board's *current* and *collective* knowledge.
Host: So, how should a board do that?
Expert: That's the second takeaway: use a direct assessment. This study validates a method for boards to honestly evaluate their competency gaps. As part of an annual review, a board can ask: Do we understand the risks and opportunities of AI? Are we confident in our cybersecurity oversight? This allows for targeted improvements, like director training or more focused recruitment.
Host: You mentioned that competency is also about what a board *does*.
Expert: Absolutely, and that’s the third takeaway: build strong IT governance mechanisms. True competency isn't just knowledge; it's process. Simple actions like ensuring the Chief Information Officer regularly participates in board meetings or making technology a standard agenda item can massively increase the board’s capacity to govern effectively. It turns individual knowledge into a collective, strategic asset.
Host: So, to summarize: It’s not just about who is on the board, but what the board collectively knows and, crucially, what it does. Relying on resumes is not enough; boards need to directly assess their IT skills and build the processes to use them.
Expert: You've got it. It’s about moving from a passive, resume-based approach to an active, continuous process of building and applying IT competency.
Host: Fantastic insights. That’s all the time we have for today. Alex Ian Sutherland, thank you for breaking this down for us.
Expert: My pleasure, Anna.
Host: And a big thank you to our listeners for tuning into A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the ideas shaping the future of business.
Board of Directors, Board IT Competency, IT Governance, Proxy Measures, Direct Measures, Corporate Governance
The Impact of Gamification on Cybersecurity Learning: Multi-Study Analysis
J.B. (Joo Baek) Kim, Chen Zhong, Hong Liu
This paper systematically assesses the impact of gamification on cybersecurity education through a four-semester, multi-study approach. The research compares learning outcomes between gamified and traditional labs, analyzes student perceptions and motivations using quantitative methods, and explores learning experiences through qualitative interviews. The goal is to provide practical strategies for integrating gamification into cybersecurity courses.
Problem
There is a critical and expanding cybersecurity workforce gap, emphasizing the need for more effective, practical, and engaging training methods. Traditional educational approaches often struggle to motivate students and provide the necessary hands-on, problem-solving skills required for the complex and dynamic field of cybersecurity.
Outcome
- Gamified cybersecurity labs led to significantly better student learning outcomes compared to traditional, non-gamified labs.
- Well-designed game elements, such as appropriate challenges and competitiveness, positively influence student motivation. Intrinsic motivation (driven by challenge) was found to enhance learning outcomes, while extrinsic motivation (driven by competition) increased career interest.
- Students found gamified labs more engaging due to features like instant feedback, leaderboards, clear step-by-step instructions, and story-driven scenarios that connect learning to real-world applications.
- Gamification helps bridge the gap between theoretical knowledge and practical skills, fostering deeper learning, critical thinking, and a greater interest in pursuing cybersecurity careers.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: In a world of ever-growing digital threats, how can businesses train a more effective cybersecurity workforce? Today, we're diving into a fascinating multi-study analysis titled "The Impact of Gamification on Cybersecurity Learning."
Host: This study systematically assesses how using game-like elements in training can impact learning, motivation, and even career interest in cybersecurity.
Host: And to help us break it down, we have our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. What is the real-world problem this study is trying to solve?
Expert: The problem is massive, and it's growing every year. It’s the cybersecurity workforce gap. The study cites a 2024 report showing the global shortage of professionals has expanded to nearly 4.8 million.
Host: Almost 5 million people. That’s a staggering number.
Expert: It is. And the core issue is that traditional educational methods often fail. They can be dry, theoretical, and they don't always build the practical, hands-on problem-solving skills needed to fight modern cyber threats. Companies need people who are not just knowledgeable, but also engaged and motivated.
Host: So how did the researchers approach this challenge? How do you even begin to measure the impact of something like gamification?
Expert: They used a really comprehensive mixed-method approach over four university semesters. It was essentially three studies in one.
Host: Tell us about them.
Expert: First, they directly compared the performance of students in gamified labs against those in traditional, non-gamified labs. They measured this with quizzes and final exam scores.
Host: So, a direct A/B test on learning outcomes.
Expert: Exactly. Second, they used quantitative surveys to understand the "why" behind the performance. They looked at what motivated the students – things like challenge, competition, and how that affected their learning and career interests.
Host: And the third part?
Expert: That was qualitative. The researchers conducted in-depth interviews with students to get rich, subjective feedback on their actual learning experience. They wanted to know what it felt like, in the students' own words.
Host: So, after all that research, what were the key findings? Did making cybersecurity training a 'game' actually work?
Expert: It worked, and in very specific ways. The first major finding was clear: students in the gamified labs achieved significantly better learning outcomes. Their scores were higher.
Host: And the study gave some clues as to why?
Expert: It did. This is the second key finding. Well-designed game elements had a powerful effect on motivation, but it's important to distinguish between two types.
Host: Intrinsic and extrinsic?
Expert: Precisely. Intrinsic motivation—the internal drive from feeling challenged and a sense of accomplishment—was found to directly enhance learning outcomes. Students learned the material better because they enjoyed the puzzle.
Host: And extrinsic motivation? The external rewards?
Expert: That’s things like leaderboards and points. The study found that this type of motivation, driven by competition, had a huge impact on increasing students' interest in pursuing a career in cybersecurity.
Host: That’s a fascinating distinction. So one drives learning, the other drives career interest. What did the students themselves say made the gamified labs so much more engaging?
Expert: From the interviews, three things really stood out. First, instant feedback. Knowing immediately if they solved a challenge correctly was highly rewarding. Second, the use of story-driven scenarios. It made the tasks feel like real-world problems, not just abstract exercises. And third, breaking down complex topics into clear, step-by-step instructions. It made difficult concepts much less intimidating.
Host: This is all incredibly insightful. Let’s get to the bottom line: why does this matter for business? What are the key takeaways for leaders and managers?
Expert: This is the most important part. For any business struggling with the cybersecurity skills gap, this study provides a clear, evidence-based path forward.
Host: So, what’s the first step?
Expert: Acknowledge that gamification is not just about making training 'fun'; it's a powerful tool for building your talent pipeline. By incorporating competitive elements, you can actively spark career interest and identify promising internal candidates you didn't know you had.
Host: And for designing the training itself?
Expert: The takeaway is that design is everything. Corporate training programs should use realistic, story-driven scenarios to bridge the gap between theory and practice. Provide instant feedback mechanisms and break down complex tasks into manageable challenges. This fosters deeper learning and real, applicable skills.
Host: It sounds like it helps create the on-the-job experience that hiring managers are looking for.
Expert: Exactly. Finally, businesses need to understand that motivation isn't one-size-fits-all. The most effective training programs will offer a blend of challenges that appeal to intrinsic learners and competitive elements that engage extrinsic learners. It’s about creating a rich, diverse learning environment.
Host: Fantastic. So, to summarize for our listeners: the cybersecurity skills gap is a serious business threat, but this study shows that well-designed gamified training is a proven strategy to fight it. It improves learning, boosts both intrinsic and extrinsic motivation, and can directly help build a stronger talent pipeline.
Host: Alex, thank you so much for breaking down this complex study into such clear, actionable insights.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights — powered by Living Knowledge.
Digital Resilience in High-Tech SMEs: Exploring the Synergy of AI and IoT in Supply Chains
Adnan Khan, Syed Hussain Murtaza, Parisa Maroufkhani, Sultan Sikandar Mirza
This study investigates how digital resilience enhances the adoption of AI and Internet of Things (IoT) practices within the supply chains of high-tech small and medium-sized enterprises (SMEs). Using survey data from 293 Chinese high-tech SMEs, the research employs partial least squares structural equation modeling to analyze the impact of these technologies on sustainable supply chain performance.
Problem
In an era of increasing global uncertainty and supply chain disruptions, businesses, especially high-tech SMEs, struggle to maintain stability and performance. There is a need to understand how digital technologies can be leveraged not just for efficiency, but to build genuine resilience that allows firms to adapt to and recover from shocks while maintaining sustainability.
Outcome
- Digital resilience is a crucial driver for the adoption of both IoT-oriented supply chain practices and AI-driven innovative practices.
- The implementation of IoT and AI practices, fostered by digital resilience, significantly improves sustainable supply chain performance.
- AI-driven practices were found to be particularly vital for resource optimization and predictive analytics, strongly influencing sustainability outcomes.
- The effectiveness of digital resilience in promoting IoT adoption is amplified in dynamic and unpredictable market environments.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a fascinating new study titled "Digital Resilience in High-Tech SMEs: Exploring the Synergy of AI and IoT in Supply Chains."
Host: In simple terms, this study looks at how being digitally resilient helps smaller high-tech companies adopt AI and the Internet of Things, or IoT, in their supply chains, and what that means for their long-term sustainable performance. Here to break it all down is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Thanks for having me, Anna.
Host: Alex, let's start with the big picture. We hear a lot about supply chain disruptions. What is the specific problem this study is trying to solve?
Expert: The core problem is that global uncertainty is the new normal. We’ve seen it with the pandemic, with geopolitical conflicts, and even cybersecurity threats. These events create massive shocks to supply chains.
Host: And this is especially tough on smaller companies, right?
Expert: Exactly. High-tech Small and Medium-sized Enterprises, or SMEs, often lack the resources of larger corporations. They struggle to maintain stability and performance when disruptions hit. The old "just-in-time" model, which prioritized efficiency above all, proved to be very fragile. So, the question is no longer just about being efficient; it’s about being resilient.
Host: The study uses the term "digital resilience." What does that mean in this context?
Expert: Digital resilience is a company's ability to use technology not just to operate, but to absorb shocks, adapt to disruptions, and recover quickly. It’s about building a digital foundation that is fundamentally flexible and strong.
Host: So how did the researchers go about studying this? What was their approach?
Expert: They conducted a survey with 293 high-tech SMEs in China that were already using AI and IoT technologies in their supply chains. This is important because it means they were analyzing real-world applications, not just theories. They then used advanced statistical analysis to map out the connections between digital resilience, the use of AI and IoT, and overall performance.
Host: A practical approach for a practical problem. Let's get to the results. What were the key findings?
Expert: There were a few really powerful takeaways. First, digital resilience is the critical starting point. The study found that companies with a strong foundation of digital resilience were far more successful at implementing both IoT-oriented practices, like real-time asset tracking, and innovative AI-driven practices.
Host: So, resilience comes first, then the technology adoption. And does that adoption actually make a difference?
Expert: It absolutely does. That’s the second key finding. When that resilience-driven adoption of AI and IoT happens, it significantly boosts what the study calls sustainable supply chain performance. This isn't just about profits; it means the supply chain becomes more reliable, efficient, and environmentally responsible.
Host: Was there a difference in the impact between AI and IoT?
Expert: Yes, and this was particularly interesting. While both were important, the study found that AI-driven practices were especially vital for achieving those sustainability outcomes. This is because AI excels at things like resource optimization and predictive analytics—it can help a company see a problem coming and adjust before it hits.
Host: And what about the business environment? Does that play a role?
Expert: A huge role. The final key insight was that in highly dynamic and unpredictable markets, the value of digital resilience is amplified. Specifically, it becomes even more crucial for driving the adoption of IoT. When things are chaotic, the ability to get real-time data from IoT sensors and devices becomes a massive strategic advantage.
Host: This is where it gets really crucial for our listeners. If I'm a business leader, what is the main lesson I should take from this study?
Expert: The single most important takeaway is to shift your mindset. Stop viewing digital tools as just a way to cut costs or improve efficiency. Start viewing them as the core of your company's resilience strategy. It’s not about buying software; it's about building the strategic capability to anticipate, respond, and recover from shocks.
Host: So it's about moving from a defensive posture to an offensive one?
Expert: Precisely. IoT gives you unprecedented, real-time visibility across your entire supply chain. You know where your materials are, you can monitor production, you can track shipments. Then, AI takes that firehose of data and turns it into intelligent action. It helps you make smarter, predictive decisions. The combination creates a supply chain that isn't just tough—it's intelligent.
Host: So, in today's unpredictable world, this isn't just a nice-to-have, it's a competitive necessity.
Expert: It is. In a volatile market, the ability to adapt faster than your competitors is what separates the leaders from the laggards. For an SME, leveraging AI and IoT this way can level the playing field, allowing them to be just as agile, if not more so, than much larger rivals.
Host: Fantastic insights. To summarize for our audience: Building a foundation of digital resilience is the key first step. This resilience enables the powerful adoption of AI and IoT, which in turn drives a stronger, smarter, and more sustainable supply chain. And in our fast-changing world, that capability is what truly defines success.
Host: Alex Ian Sutherland, thank you so much for your time and for making this research so accessible.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning in to A.I.S. Insights — powered by Living Knowledge. We'll see you next time.
Digital Resilience, Internet of Things-Oriented Supply Chain Management Practices, AI-Driven Innovative Practices, Supply Chain Dynamism, Sustainable Supply Chain Performance
How Verizon Media Built a Cybersecurity Culture
Keri Pearlson, Josh Schwartz, Sean Sposito, Masha Arbisman
This case study examines how Verizon Media's security organization, known as “The Paranoids,” successfully built a strong cybersecurity culture across its 20,000 employees. The study details the formation and strategy of the Proactive Engagement (PE) Group, which used a data-driven, three-step process involving behavioral goals, metrics, and targeted actions to change employee behavior. This approach moved beyond traditional training to create lasting cultural change.
Problem
Human error is a primary cause of cybersecurity breaches, with reports indicating it's involved in up to 85% of incidents. Standard cybersecurity awareness training is often insufficient because employees fail to prioritize security or find security protocols cumbersome. This creates a significant gap where organizations remain vulnerable despite technical defenses, highlighting the need for a deeper cultural shift to make security an ingrained value.
Outcome
- The rate of employees having their credentials captured in phishing simulations was cut in half.
- The number of accurately reported phishing attempts by employees doubled.
- The usage of the corporate password manager tripled across the company.
- The initiative successfully shifted the organizational mindset by using transparent dashboards, positive reinforcement, and practical tools rather than relying solely on awareness campaigns.
- The study provides a replicable framework for other organizations to build a security culture by focusing on changing values and beliefs, not just actions.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a fascinating case study that tackles one of the biggest challenges in the digital age: cybersecurity.
Host: The study is titled "How Verizon Media Built a Cybersecurity Culture," and it details how their security team, known as “The Paranoids,” successfully embedded security into the DNA of its 20,000 employees. With me is our expert analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. Why is a study like this so important? What's the fundamental problem that companies are facing?
Expert: The problem is the human element. We can build the best digital firewalls, but people are often the weakest link. The study cites data showing human error is involved in up to 85% of cybersecurity breaches.
Host: Eighty-five percent is a staggering number. But don't most companies have mandatory security training?
Expert: They do, but standard training often isn't enough. The study points out that employees are busy trying to do their jobs efficiently. Security protocols can feel cumbersome, so unless security is a deeply ingrained value, it gets forgotten or bypassed. This creates a huge vulnerability gap.
Host: So it's less about a lack of knowledge and more about a lack of cultural priority. How did Verizon Media's team, "The Paranoids," approach this differently?
Expert: Instead of just another awareness campaign, they created a special team called the Proactive Engagement Group. Their approach was methodical and data-driven, almost like a science experiment in behavior change.
Expert: It was a three-step process. First, they defined very specific, desired behaviors—not vague advice like "don't click on suspicious links." Second, they established clear metrics to measure those behaviors and create a baseline. And third, they took targeted actions to change the behavior, measured the results, and then adjusted their approach continuously.
Host: It sounds much more active than just a yearly training video. Did this data-driven approach actually work? What were the results?
Expert: The results were impressive. Over a two-year period, they cut the rate of employees having their credentials captured in phishing simulations in half.
Host: That alone is a huge win. What else?
Expert: They also doubled the number of accurately reported phishing attempts by employees, which means people were getting much better at spotting threats. And perhaps most telling, the usage of their corporate password manager tripled across the company.
Host: Tripling the use of a key security tool is a massive behavioral shift. How did they achieve that? Was it just mandatory?
Expert: That’s the most interesting part—it wasn't just about mandates. They used what the study calls "choice architecture." For example, they pre-installed the password manager browser extension on every corporate device, making it the easiest default option.
Expert: They also used positive reinforcement and incentivization. They created a "Password Manager Knight" award, complete with branded merchandise like hoodies and stickers. It made security cool and created a sense of positive competition, rather than just being a chore.
Host: I love that. Turning security into something aspirational. So, Alex, this is the crucial part for our listeners. What is the key takeaway for other business leaders? Why does this matter for them?
Expert: The biggest takeaway is that cybersecurity is as much a people-management issue as it is a technology issue. You can't just set a policy and expect change. You have to actively shape the culture.
Host: And how do you do that?
Expert: First, measure what matters and be transparent. The Paranoids used dashboards that allowed managers and even individual employees to see their security performance. This transparency drove accountability and friendly competition without public shaming.
Expert: Second, focus on positive reinforcement over punishment. The study emphasizes they didn't want to embarrass employees. They celebrated successes, which motivated people far more effectively than calling out failures.
Expert: And finally, a really smart move was extending security into employees' personal lives. They offered employees a free license for the password manager for their personal use. This showed the company genuinely cared about their well-being, which in turn built trust and drove adoption of secure practices at work.
Host: That’s a powerful insight—caring for the whole person, not just the employee.
Host: So to summarize, the old model of simple security awareness training is broken. The Verizon Media case study shows that a successful strategy treats cybersecurity as a cultural mission.
Host: It requires defining clear behaviors, using data and transparency to track progress, and leveraging positive reinforcement to change attitudes and beliefs, not just actions.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And thanks to all of you for listening to A.I.S. Insights, powered by Living Knowledge. Join us next time as we decode another key study from the world of business and technology.
Applying the Lessons from the Equifax Cybersecurity Incident to Build a Better Defense
Ilya Kabanov, Stuart Madnick
This study provides an in-depth analysis of the 2017 Equifax data breach, which affected 148 million people. Using the Cybersafety method, the authors reconstructed the attack flow and Equifax's hierarchical safety control system to identify systemic failures. Based on this analysis, the paper offers recommendations for managers to strengthen their organization's cybersecurity.
Problem
Many organizations miss the opportunity to learn from major cybersecurity incidents because analyses often focus on a single, direct cause rather than addressing deeper, systemic root causes. This paper addresses that gap by systematically investigating the Equifax breach to provide transferable lessons that can help other organizations prevent similar catastrophic failures.
Outcome
- The breach was caused by 19 systemic failures across four hierarchical levels: technical controls (e.g., expired certificates), IT/Security teams, management and the board, and external regulators.
- Critical technical breakdowns included an expired SSL certificate that blinded the intrusion detection system for nine months and vulnerability scans that failed to detect the known Apache Struts vulnerability.
- Organizational shortcomings were significant, including a reactive patching process, poor communication between siloed IT and security teams, and a failure by management to prioritize critical security upgrades.
- The board of directors failed to establish an appropriate risk appetite, prioritizing business growth over information security, which led to a culture where security was under-resourced.
- The paper offers 11 key recommendations for businesses, such as limiting sensitive data retention, embedding security into software design, ensuring executive leadership has a say in cybersecurity decisions, and fostering a shared sense of responsibility for security across the organization.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. Today we're looking at a crucial study titled "Applying the Lessons from the Equifax Cybersecurity Incident to Build a Better Defense."
Host: It’s an in-depth analysis of the massive 2017 data breach that affected 148 million people. To help us understand its lessons, we have our analyst, Alex Ian Sutherland.
Host: Alex, welcome. This study goes far beyond just recounting what happened, doesn't it?
Expert: It certainly does, Anna. The researchers used a framework called the Cybersafety method to reconstruct the attack and analyze Equifax's entire safety control system. The goal was to uncover the deep, systemic failures and offer recommendations any manager can use to strengthen their organization's cybersecurity.
Host: Let's start with the big problem the study addresses. After a breach of that magnitude, don't companies already conduct thorough post-mortems?
Expert: They do, but often they focus on a single, direct cause—like an unpatched server. They treat the symptom, not the disease.
Expert: The study argues that this prevents real learning. The core problem is that organizations miss the opportunity to find and fix the deeper, systemic root causes that made the disaster possible in the first place.
Host: So how did this study dig deeper to find those root causes? What is this Cybersafety method?
Expert: Think of it like a full-scale accident investigation for a plane crash. The researchers reconstructed the attack step-by-step. Then, they mapped out what they call a "hierarchical safety control structure."
Expert: That means they analyzed everything from the technical firewalls, to the IT and security teams, all the way up to senior management and the Board of Directors. It let them see not just *what* failed, but *why* it failed at every single level.
Host: And what did this multi-level investigation find? I understand the results were quite shocking.
Expert: They were. The study identified 19 distinct systemic failures. It was a cascade of errors. A critical technical failure was a single expired SSL certificate.
Host: What does that mean in simple terms?
Expert: That certificate was needed for their intrusion detection system to inspect network traffic. Because it had expired, the system was effectively blind for nine months. Attackers were in the network, stealing data, and the digital security guard couldn't see a thing.
Host: Blind for nine months. That's incredible. And this was just one of 19 failures?
Expert: Yes. The next level of failure was organizational. The IT and security teams were siloed and didn't communicate well. Security knew about the critical software vulnerability two months before the breach started, but the vulnerability scan failed to detect it, and the message never got to the team responsible for that specific system.
Host: So even with the right information, the process was broken. What about the leadership level?
Expert: That's where the failures were most profound. Management consistently failed to prioritize critical security upgrades, favoring other business initiatives. The study shows the Board of Directors was also at fault. They fostered a culture focused on business growth above all else and failed to establish an appropriate risk appetite for information security.
Host: This is the critical part for our audience. What are the key business takeaways? How can other companies avoid the same fate?
Expert: The study provides some powerful recommendations. The first big takeaway is to build "defense in depth." This means having multiple layers of security. For instance, limit the sensitive data you retain—you can't steal what isn't there. And embed security into software design from the very beginning; don't just bolt it on at the end.
Host: That’s a great technical point. What about the cultural and organizational side?
Expert: That’s the second key takeaway: security must be a shared responsibility. It can't just be the IT department's problem. The study recommends ensuring executive leadership has a direct say in cybersecurity decisions. At Equifax, the Chief Security Officer didn't even report to the CEO. Security needs a real seat at the leadership table.
Host: So it’s a culture shift, driven from the top. Is there a final lesson specifically for boards?
Expert: Absolutely. The board must fully analyze and communicate the organization's cybersecurity risk appetite. They need to understand that de-prioritizing a security upgrade isn't just a budget choice; it's what the study calls a "semiconscious decision" to accept a potentially billion-dollar risk. That trade-off needs to be explicit and conscious.
Host: So, to summarize, the Equifax breach wasn't just a technical error. It was a systemic failure of process, culture, management, and governance.
Host: The lessons for every business are to build layered technical defenses, make security a shared cultural value, and ensure the board is actively defining and overseeing cyber risk.
Host: Alex, thank you for distilling this complex study into such clear, actionable insights.
Expert: My pleasure, Anna.
Host: And thank you for listening to A.I.S. Insights, powered by Living Knowledge. Join us next time as we translate more cutting-edge research into business reality.
cybersecurity, data breach, Equifax, risk management, incident analysis, IT governance, systemic failure
Learning from Enforcement Cases to Manage GDPR Risks
Saeed Akhlaghpour, Farkhondeh Hassandoust, Farhad Fatehi, Andrew Burton-Jones, Andrew Hynd
This study analyzes 93 enforcement cases of the European Union's General Data Protection Regulation (GDPR) to help organizations better manage compliance risks. The research identifies 12 distinct types of risks, their associated mitigation measures, and key risk indicators. It provides a practical, evidence-based framework for businesses to move beyond a simple checklist approach to data privacy.
Problem
The GDPR is a complex and globally significant data privacy law, and noncompliance can lead to severe financial penalties. However, its requirement for a 'risk-based approach' can be ambiguous for organizations, leaving them unsure of where to focus their compliance efforts. This study addresses this gap by analyzing real-world fines to provide clear, actionable guidance on the most common and costly compliance pitfalls.
Outcome
- The analysis of 93 GDPR enforcement cases identified 12 distinct risk types across three main areas: organizational practices, technology, and data management.
- Common organizational risks include failing to obtain valid user consent, inadequate data breach reporting, and a lack of due diligence in mergers and acquisitions.
- Key technology risks involve inadequate technical safeguards (e.g., weak encryption), improper video surveillance, and unlawful automated decision-making or profiling.
- Data management risks focus on failures in providing data access, minimizing data collection, limiting data storage periods, and ensuring data accuracy.
- The study proposes four strategic actions for executives: adopt a risk-based approach globally, monitor the evolving GDPR landscape, use enforcement evidence to justify compliance investments, and strategically select a lead supervisory authority.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into the world of data privacy, a topic that’s on every executive’s mind. We'll be looking at a study from MIS Quarterly Executive called "Learning from Enforcement Cases to Manage GDPR Risks".
Host: It analyzes 93 real-world cases to give organizations a practical, evidence-based framework for managing compliance risks, moving them beyond a simple checklist.
Host: To help us unpack this is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. The GDPR is this huge, complex privacy law, and the penalties for getting it wrong are massive. Why is this such a major headache for businesses?
Expert: It really comes down to ambiguity. The law requires a ‘risk-based approach,’ but it doesn't give you a clear blueprint. Businesses know the fines can be huge—up to 4% of their global annual turnover—but they’re often unsure where to focus their efforts to avoid those fines.
Expert: They're left wondering what the real-world mistakes are that regulators are actually punishing. This study sought to answer exactly that question.
Host: So, it’s about finding a clear path through the fog. How did the researchers provide that clarity? What was their approach?
Expert: It was very practical. Instead of just interpreting the legal text, they analyzed 93 actual enforcement cases across 23 EU countries where companies were fined. We're talking about nearly 140 million euros in total penalties.
Expert: By studying these real-world failures, they were able to map out the most common and costly compliance pitfalls. Essentially, they created a guide based on the evidence of what gets companies into trouble.
Host: Learning from others' mistakes seems like a smart strategy. What were some of the biggest tripwires the study uncovered?
Expert: The researchers grouped them into 12 distinct risk types across three main areas. The first is 'Organizational Practices'. This is where we saw some of the biggest fines.
Expert: For example, Google was fined 50 million euros in France for not getting valid user consent for ad personalization. The consent process was too vague and not specific enough for each purpose.
Host: That’s a huge penalty for a consent issue. What about the other areas?
Expert: The second area is 'Technology Risks'. A key failure here is having inadequate technical safeguards. The study highlights the British Airways case, where hackers stole data from 500,000 customers by modifying just 22 lines of code on their website. The initial fine proposed was massive because of that technical vulnerability.
Host: So even a small crack in the technical armor can lead to a huge breach. What was the third area?
Expert: The third is 'Data Management Risks'. This covers the fundamentals, like not keeping data longer than you need it. A German real estate company, for instance, was fined 14.5 million euros for storing tenants' personal data for longer than was legally necessary.
Host: These examples really bring the risks to life. Based on these findings, what are the key strategic takeaways for business leaders listening today?
Expert: The study proposes four strategic actions. First, adopt this risk-based approach globally. Don't just see GDPR as an EU problem. Applying its principles to all your customers simplifies your processes and builds trust.
Expert: Second, you have to constantly monitor the GDPR landscape. Compliance is not a one-time project; it’s an ongoing process as enforcement evolves.
Host: That makes sense. What are the other two?
Expert: Third, and this is critical for getting internal buy-in, use this enforcement evidence to justify compliance investments. It’s much easier to get budget for a new security tool when you can point to a multi-million-euro fine that could have been prevented.
Expert: And finally, for multinational companies, be strategic in choosing your lead supervisory authority in the EU. The study notes that different countries' regulators have different enforcement styles. Picking the right one can be a significant strategic decision.
Host: Fantastic insights, Alex. So, to recap for our listeners: GDPR compliance is complex, but this study shows we can create a clear roadmap by learning from real enforcement cases.
Host: The key is to move beyond a simple checklist and focus on the major risk areas that regulators are targeting, like user consent, technical security, and data retention policies.
Host: And the big strategic actions are to think globally, stay updated, use real-world cases to drive investment, and be smart about your regulatory relationships.
Host: Alex Ian Sutherland, thank you so much for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thank you for listening to A.I.S. Insights — powered by Living Knowledge. Join us next time for more data-driven takeaways for your business.
GDPR, Data Privacy, Risk Management, Data Protection, Compliance, Enforcement Cases, Information Security
Exploring the Agentic Metaverse's Potential for Transforming Cybersecurity Workforce Development
Ersin Dincelli, Haadi Jafarian
This study explores how an 'agentic metaverse'—an immersive virtual world powered by intelligent AI agents—can be used for cybersecurity training. The researchers presented an AI-driven metaverse prototype to 53 cybersecurity professionals to gather qualitative feedback on its potential for transforming workforce development.
Problem
Traditional cybersecurity training methods, such as classroom instruction and static online courses, are struggling to keep up with the fast-evolving threat landscape and high demand for skilled professionals. These conventional approaches often lack the realism and adaptivity needed to prepare individuals for the complex, high-pressure situations they face in the real world, contributing to a persistent skills gap.
Outcome
- The concept of an AI-driven agentic metaverse for training was met with strong enthusiasm, with 92% of professionals believing it would be effective for professional training.
- Key challenges to implementing this technology include significant infrastructure demands, the complexity of designing realistic AI-driven scenarios, ensuring security and privacy, and managing user adoption.
- The study identified five core challenges: infrastructure, multi-agent scenario design, security/privacy, governance of social dynamics, and change management.
- Six practical recommendations are provided for organizations to guide implementation, focusing on building a scalable infrastructure, developing realistic training scenarios, and embedding security, privacy, and safety by design.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating new study titled "Exploring the Agentic Metaverse's Potential for Transforming Cybersecurity Workforce Development." With me is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: This study sounds like it’s straight out of science fiction. Can you break it down for us? What exactly is an ‘agentic metaverse’ and how does it relate to cybersecurity training?
Expert: Absolutely. Think of it as a super-smart, immersive virtual world. The 'metaverse' part is the 3D, interactive environment, like a sophisticated simulation. The 'agentic' part means it's populated by intelligent AI agents that can think, adapt, and act on their own to create dynamic training scenarios.
Host: So, we're talking about a virtual reality training ground run by AI. Why is this needed? What's wrong with how we train cybersecurity professionals right now?
Expert: That’s the core of the problem the study addresses. The cyber threat landscape is evolving at an incredible pace. Traditional methods, like classroom lectures or static online courses, just can't keep up.
Host: They’re too slow?
Expert: Exactly. They lack realism and the ability to adapt. Real cyber attacks are high-pressure, collaborative, and unpredictable. A multiple-choice quiz doesn’t prepare you for that. This contributes to a massive global skills gap and high burnout rates among professionals. We need a way to train for the real world, in a safe environment.
Host: So how did the researchers actually test this idea of an agentic metaverse?
Expert: They built a functional prototype. It was an AI-driven, 3D environment that simulated cybersecurity incidents. They then presented this prototype to a group of 53 experienced cybersecurity professionals to get their direct feedback.
Host: They let the experts kick the tires, so to speak.
Expert: Precisely. The professionals could see firsthand how AI agents could play the role of attackers, colleagues, or even mentors, creating quests and scenarios that adapt in real-time based on the trainee's actions. It makes abstract threats feel tangible and urgent.
Host: And what was the verdict from these professionals? Were they impressed?
Expert: The response was overwhelmingly positive. A massive 92% of them believed this approach would be effective for professional training. They highlighted how engaging and realistic the scenarios felt, calling it a "great learning tool."
Host: That’s a strong endorsement. But I imagine it’s not all smooth sailing. What are the hurdles to actually implementing this in a business?
Expert: You're right. The enthusiasm was matched with a healthy dose of pragmatism. The study identified five core challenges for businesses to consider.
Host: And what are they?
Expert: First, infrastructure. Running a persistent, immersive 3D world with multiple AIs is computationally expensive. Second is scenario design. Creating AI-driven narratives that are both realistic and effective for learning is incredibly complex.
Host: That makes sense. It's not just programming; it's like directing an intelligent, interactive movie.
Expert: Exactly. The other key challenges were ensuring security and privacy within the training environment itself, managing the social dynamics in an immersive world, and finally, the big one: change management and user adoption. There's a learning curve, especially for employees who aren't gamers.
Host: This is the crucial question for our listeners, Alex. Given those challenges, why should a business leader care? What are the practical takeaways here?
Expert: This is where the study provides a clear roadmap. The biggest takeaway is that this technology can create a hyper-realistic, safe space for your teams to practice against advanced threats. It's like a flight simulator for cyber defenders.
Host: So it moves training from theory to practice.
Expert: It’s a complete shift. The AI agents can simulate anything from a phishing attack to a nation-state adversary, adapting their tactics based on your team's response. This allows you to identify skills gaps proactively and build real muscle memory for crisis situations.
Host: What's the first step for a company that finds this interesting?
Expert: The study recommends starting with small, focused pilot programs. Don't try to build a massive corporate metaverse overnight. Target a specific, high-priority training need, like incident response for a junior analyst team. Measure the results, prove the value, and then scale.
Host: And it’s crucial to involve more than just the IT department, right?
Expert: Absolutely. This has to be a cross-functional effort. You need your cybersecurity experts, your AI developers, your instructional designers from HR, and legal to think about privacy from day one. It's about building a scalable, secure, and truly effective training ecosystem. The payoff is a more resilient and adaptive workforce.
Host: A fascinating look into the future of professional development. So, to sum it up: traditional cybersecurity training is falling behind. The 'agentic metaverse' offers a dynamic, AI-powered solution that’s highly realistic and engaging. While significant challenges in infrastructure and design exist, the potential to effectively close the skills gap is immense.
Host: Alex, thank you so much for breaking this down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights. We’ll see you next time.
Agentic Metaverse, Cybersecurity Training, Workforce Development, AI Agents, Immersive Learning, Virtual Reality, Training Simulation
Corporate Governance for Digital Responsibility: A Company Study
Anna-Sophia Christ
This study examines how ten German companies translate the principles of Corporate Digital Responsibility (CDR) into actionable practices. Using qualitative content analysis of public data, the paper analyzes these companies' approaches from a corporate governance perspective to understand their accountability structures, risk regulation measures, and overall implementation strategies.
Problem
As companies rapidly adopt digital technologies for productivity gains, they also face new and complex ethical and societal responsibilities. A significant gap exists between the high-level principles of Corporate Digital Responsibility (CDR) and their concrete operationalization, leaving businesses without clear guidance on how to manage digital risks and impacts effectively.
Outcome
- The study identified seventeen key learnings for implementing Corporate Digital Responsibility (CDR) through corporate governance.
- Companies are actively bridging the gap from principles to practice, often adapting existing governance structures rather than creating entirely new ones.
- Key implementation strategies include assigning central points of contact for CDR, ensuring C-level accountability, and developing specific guidelines and risk management processes.
- The findings provide a benchmark and actionable examples for practitioners seeking to integrate digital responsibility into their business operations.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: In today's digital-first world, companies are not just judged on their products, but on their principles. That brings us to our topic: Corporate Digital Responsibility.
Host: We're diving into a study titled "Corporate Governance for Digital Responsibility: A Company Study", which examines how ten German companies are turning the idea of digital responsibility into real-world action.
Host: To help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, Alex, let's start with the big picture. What is the core problem this study is trying to solve?
Expert: The problem is a classic "say-do" gap. Companies everywhere are embracing digital technologies to boost productivity, which is great. But this creates new ethical and societal challenges.
Host: You mean things like data privacy, the spread of misinformation, or the impact of AI?
Expert: Exactly. And while many companies talk about being digitally responsible, there's a huge gap between those high-level principles and what actually happens on the ground. Businesses are often left without a clear roadmap on how to manage these digital risks effectively.
Host: So they know they *should* be responsible, but they don't know *how*. How did the researchers approach this?
Expert: They took a very practical approach. They didn't just theorize; they looked at what ten pioneering German companies from different industries—like banking, software, and e-commerce—are actually doing.
Expert: They conducted a deep analysis of these companies' public documents: annual reports, official guidelines, company websites. They analyzed all this information through a corporate governance lens to map out the real structures and processes being used to manage digital responsibility.
Host: So, looking under the hood at the leaders to see what works. What were some of the key findings?
Expert: One of the most interesting findings was that companies aren't necessarily reinventing the wheel. They are actively adapting their existing governance structures rather than creating entirely new ones for digital responsibility.
Host: That sounds very practical. They're integrating it into the machinery they already have.
Expert: Precisely. And a critical part of that integration is assigning clear accountability. The study found that successful implementation almost always involves C-level ownership.
Host: Can you give us an example?
Expert: Absolutely. At some companies, like Deutsche Telekom, the accountability for digital responsibility reports directly to the CEO. In others, it lies with the Chief Digital Officer or a dedicated corporate responsibility department. The key is that it’s a senior-level concern, signaling that it’s a strategic priority, not just a compliance task.
Host: So top-level buy-in is non-negotiable. What other strategies did you see?
Expert: The study highlighted the importance of making responsibility tangible. This includes creating a central point of contact, like a "Digital Coordinator." It also involves developing specific guidelines, like Merck's 'Code of Digital Ethics' or Telefónica's 'AI Code of Conduct', which give employees clear rules of the road.
Host: This is where it gets really important for our listeners. Let’s talk about the bottom line. Why does this matter for business leaders, and what are the key takeaways?
Expert: The most crucial takeaway is that there is now a benchmark. Businesses don't have to start from scratch anymore. The study identified seventeen key learnings that effectively form a model for implementing digital responsibility.
Host: It’s a roadmap they can follow.
Expert: Exactly. It covers everything from getting official C-level commitment to establishing an expert group to handle tough decisions, and even implementing specific risk checks for new digital projects. It provides actionable examples.
Host: What's another key lesson?
Expert: That this is a strategic issue, not just a risk-management one. The companies leading the way see Corporate Digital Responsibility, or CDR, as fundamental to building trust with customers, employees, and society. It's about proactively defining 'how we want to behave' in the digital age, which is essential for long-term viability.
Host: So, if a business leader listening right now wants to take the first step, what would you recommend based on this study?
Expert: The simplest, most powerful first step is to assign clear ownership. Create that central point of contact. It could be a person or a cross-functional council. Once someone is accountable, they can begin to use the examples from the study to develop guidelines, build awareness, and integrate digital responsibility into the company’s DNA.
Host: That’s a very clear call to action. Define ownership, use this study as a guide, and ensure you have leadership support.
Host: To summarize for our listeners: as digital transformation accelerates, so do our responsibilities. This study shows that the gap between principles and practice can be closed.
Host: The key is to embed digital responsibility into your existing corporate governance, ensure accountability at the highest levels, and create concrete rules and roles to guide your organization.
Host: Alex Ian Sutherland, thank you for breaking down these insights for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights — powered by Living Knowledge.
Corporate Digital Responsibility, Corporate Governance, Digital Transformation, Principles-to-Practice, Company Study
Agile design options for IT organizations and resulting performance effects: A systematic literature review
Oliver Hohenreuther
This study provides a comprehensive framework for making IT organizations more adaptable by systematically reviewing 57 academic papers. It identifies and categorizes 20 specific 'design options' that companies can implement to increase agility. The research consolidates fragmented literature to offer a structured overview of these options and their resulting performance benefits.
Problem
In the fast-paced digital age, traditional IT departments often struggle to keep up with market changes and drive business innovation. While the need for agility is widely recognized, business leaders lack a clear, consolidated guide on the practical options available to restructure their IT organizations and a clear understanding of the specific performance outcomes of each choice.
Outcome
- Identified and structured 20 distinct agile design options (DOs) for IT organizations.
- Clustered these options into four key dimensions: Processes, Structure, People & Culture, and Governance.
- Mapped the specific performance effects for each design option, such as increased delivery speed, improved business-IT alignment, greater innovativeness, and higher team autonomy.
- Created a foundational framework to help managers make informed, cost-benefit decisions when transforming their IT organizations.
Host: Welcome to A.I.S. Insights, the podcast where we connect Living Knowledge to your business. I’m your host, Anna Ivy Summers.
Host: Today, we’re joined by our expert analyst, Alex Ian Sutherland, to unpack a fascinating piece of research.
Expert: Great to be here, Anna.
Host: We're looking at a study titled “Agile design options for IT organizations and resulting performance effects: A systematic literature review”. In a nutshell, it provides a comprehensive framework for making IT organizations more adaptable by identifying 20 specific 'design options' companies can use.
Expert: Exactly. It consolidates a lot of fragmented knowledge into one structured guide.
Host: So, let’s start with the big problem. Why does a business leader need a guide like this? What's broken with traditional IT?
Expert: The problem is speed and responsiveness. In today's fast-paced digital world, traditional IT departments often struggle. They were built for stability, not speed. The study notes they can be reactive and service-oriented, which means they become a bottleneck, slowing down innovation instead of driving it.
Host: So the business wants to launch a new digital product or respond to a competitor, but IT can't keep up?
Expert: Precisely. Business leaders know they need more agility, but they often lack a clear roadmap. They're left wondering, "What are our actual options for restructuring IT, and what results can we expect from each choice?"
Host: That makes sense. So, how did the researchers build this roadmap? What was their approach?
Expert: They conducted what’s called a systematic literature review. Think of it less like running a new experiment and more like expert detective work. They meticulously analyzed 57 different academic studies published on this topic.
Host: So they synthesized the best ideas that are already out there?
Expert: That's right. By reviewing this huge body of work, they were able to identify, categorize, and structure the most effective, recurring strategies that companies use to make their IT organizations truly agile.
Host: And what were the key findings from this detective work? What did they uncover?
Expert: The headline finding is the identification of 20 distinct agile 'design options'. But more importantly, they clustered these options into four key dimensions that any business leader can understand: Processes, Structure, People & Culture, and Governance.
Host: Okay, four dimensions. Can you give us an example from one or two of them?
Expert: Absolutely. Let's take 'Structure'. One design option is called ‘BizDevOps’. This is about breaking down the silos and integrating the business teams directly with the development and operations teams. The performance effect? You get much better alignment, faster knowledge exchange, and a stronger focus on the customer from end to end.
Host: I can see how that would make a huge difference. What about another one, say, 'People & Culture'?
Expert: A key option there is fostering 'T-shaped skills'. This means encouraging employees to have deep expertise in one area—the vertical bar of the T—but also a broad base of general knowledge about other areas—the horizontal bar. This creates incredible flexibility. People can move between teams and projects more easily, which boosts the entire organization's ability to react to change.
Host: That's a powerful concept. This brings us to the most important question, Alex. Why does this matter for the business professionals listening to us right now? What are the practical takeaways?
Expert: The biggest takeaway is that this study provides a menu, not a rigid recipe. There is no one-size-fits-all solution for agility. A leader can use these four dimensions—Processes, Structure, People & Culture, and Governance—as a diagnostic tool.
Host: So you can assess your own organization against this framework?
Expert: Exactly. You can see where your biggest pains are. Are your processes too slow? Is your structure too siloed? Then you can look at the specific design options in the study and see a curated list of potential solutions and, crucially, the performance benefits linked to each one, like increased delivery speed or better innovativeness.
Host: It sounds like a strategic toolkit for transformation.
Expert: It is. And the research makes a final, critical point: these options are not standalone fixes. They need to be combined thoughtfully. For example, adopting a 'decentralized decisions' model under Governance won't work unless you’ve also invested in the T-shaped skills and agile values under People & Culture. It’s about creating a coherent system.
Host: A fantastic summary, Alex. It seems this research provides a much-needed, practical guide for any leader looking to turn their IT department from a cost center into a true engine for growth.
Host: So, to recap: Traditional IT is often too slow for the digital age. This study reviewed a broad body of research to create a framework of 20 design options, grouped into four clear dimensions: Processes, Structure, People & Culture, and Governance. For business leaders, it's a practical toolkit to diagnose issues and choose the right combination of changes to build a truly agile organization.
Host: Alex, thank you so much for breaking that down for us.
Expert: My pleasure, Anna.
Host: And thanks to all of you for listening to A.I.S. Insights — powered by Living Knowledge. Join us next time for more actionable intelligence.
Agile IT organization design, agile design options, agility benefits
Algorithmic Management: An MCDA-Based Comparison of Key Approaches
Arne Jeppe, Tim Brée, and Erik Karger
This study employs Multi-Criteria Decision Analysis (MCDA) to evaluate and compare four distinct approaches for governing algorithmic management systems: principle-based, rule-based, risk-based, and auditing-based. The research gathered preferences from 27 experts regarding each approach's effectiveness, feasibility, adaptability, and stakeholder acceptability to determine the most preferred strategy.
Problem
As organizations increasingly use algorithms to manage workers, they face the challenge of governing these systems to ensure fairness, transparency, and accountability. While several governance models have been proposed conceptually, there is a significant research gap regarding which approach is empirically preferred by experts and most practical for balancing innovation with responsible implementation.
Outcome
- Experts consistently and strongly preferred a hybrid, risk-based approach for governing algorithmic management systems. - This approach was perceived as the most effective in mitigating risks (like bias and privacy violations) while also demonstrating good adaptability to new technologies and high stakeholder acceptability. - The findings suggest that a 'one-size-fits-all' strategy is ineffective; instead, a pragmatic approach that tailors the intensity of governance to the level of potential harm is most suitable. - Purely rule-based approaches were seen as too rigid and slow to adapt, while purely principle-based approaches were considered difficult to enforce.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. Host: Today we're diving into a fascinating study called "Algorithmic Management: An MCDA-Based Comparison of Key Approaches". Host: It’s all about figuring out the best way for companies to govern the AI systems they use to manage their employees. Host: The researchers evaluated four different strategies to see which one experts prefer for managing these complex systems. I'm joined by our analyst, Alex Ian Sutherland. Alex, welcome. Expert: Thanks for having me, Anna. Host: Alex, let's start with the big picture. More and more, algorithms are making decisions that used to be made by human managers—assigning tasks, monitoring performance, even hiring. What’s the core problem businesses are facing with this shift? Expert: The core problem is governance. As companies rely more on these powerful tools, they're struggling to ensure the systems are fair, transparent, and accountable. Expert: As the study points out, while algorithms can boost efficiency, they also raise serious concerns about worker autonomy, fairness, and the "black box" problem, where no one understands why an algorithm made a certain decision. Host: So it's a balancing act? Companies want the benefits of AI without the ethical and legal risks? Expert: Exactly. The study highlights that while many conceptual models for governance exist, there's been a real gap in understanding which approach is actually the most practical and effective. That’s what this research set out to discover. Host: How did the researchers tackle this? How do you test which governance model is "best"? Expert: They used a method called Multi-Criteria Decision Analysis, or MCDA. In simple terms, they identified four distinct models: a high-level Principle-Based approach, a strict Rule-Based approach, an industry-led Auditing-Based approach, and finally, a hybrid Risk-Based approach. 
Expert: They then gathered a panel of 27 experts from academia, industry, and government. These experts scored each approach against key criteria: its effectiveness, its feasibility to implement, its adaptability to new technology, and its acceptability to stakeholders. Host: So they're essentially using the collective wisdom of experts to find the most balanced solution. Expert: Precisely. It moves the conversation from a purely theoretical debate to one based on structured, evidence-based preferences from people in the field. Host: And what did this expert panel conclude? Was there a clear winner? Expert: There was, and it was quite decisive. The experts consistently and strongly preferred the hybrid, risk-based approach. The data shows it was ranked first by 21 of the 27 experts. Host: Why was that approach so popular? Expert: It was seen as the pragmatic sweet spot. The study shows it was rated highest for effectiveness in mitigating risks like bias or privacy violations, but it also scored very well on adaptability and stakeholder acceptability. It’s a practical middle ground. Host: What about the other approaches? What were their weaknesses? Expert: The study revealed clear trade-offs. The purely rule-based approach, with its strict regulations, was seen as too rigid and slow. It scored lowest on adaptability. Expert: On the other hand, the principle-based approach was rated as highly adaptable, but experts worried it was too abstract and difficult to actually enforce. In fact, it scored lowest on feasibility. Host: So the big message is that a one-size-fits-all strategy doesn't work. Expert: That's the crucial point. The findings strongly suggest that the best strategy is one that tailors the intensity of governance to the level of potential harm. Host: Alex, this is the key question for our listeners. What does a "risk-based approach" actually look like in practice for a business leader? Expert: It means you don't treat all your algorithms the same. 
The study gives a great example from a logistics company. An algorithm that simply optimizes delivery routes is low-risk. For that, your governance can be lighter, focusing on efficiency principles and basic monitoring. Expert: But an algorithm that has the autonomy to deactivate a driver's account based on performance metrics? That's extremely high-risk. Host: So what kind of extra controls would be needed for that high-risk system? Expert: The risk-based approach would demand much stricter controls. Things like mandatory human oversight for the final decision, regular audits for bias, full transparency for the driver on how the system works, and a clear, accessible process for them to appeal the decision. Host: So it's about being strategic. It allows companies to innovate with low-risk AI without getting bogged down, while putting strong guardrails around the most impactful decisions. Expert: Exactly. It's a practical roadmap for responsible innovation. It helps businesses avoid the trap of being too rigid, which stifles progress, or too vague, which invites ethical and legal trouble. Host: So, to sum up: as businesses use AI to manage people, the challenge is how to govern it responsibly. Host: This study shows that experts don't want rigid rules or vague principles. They strongly prefer a hybrid, risk-based approach. Host: This means classifying algorithmic systems by their potential for harm and tailoring governance accordingly—lighter for low-risk, and much stricter for high-risk applications. Host: It’s a pragmatic path forward for balancing innovation with accountability. Alex, thank you so much for breaking this down for us. Expert: My pleasure, Anna. Host: And thank you to our listeners for tuning into A.I.S. Insights. Join us next time as we translate living knowledge into business impact.
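For readers curious what the scoring step described in this episode looks like mechanically, expert ratings in an MCDA exercise are often aggregated with a simple weighted sum across criteria. The sketch below is a minimal Python illustration of that technique only; the criterion weights and the per-approach scores are invented for the example and are not the study's data, and the paper's exact MCDA variant may differ.

```python
# Minimal weighted-sum MCDA sketch. The four approaches and four criteria
# come from the study; the weights and scores below are INVENTED for
# illustration and are not the study's actual data.

CRITERIA_WEIGHTS = {
    "effectiveness": 0.30,
    "feasibility":   0.25,
    "adaptability":  0.25,
    "acceptability": 0.20,
}

# Hypothetical average expert ratings on a 1-10 scale.
SCORES = {
    "principle-based": {"effectiveness": 6, "feasibility": 4, "adaptability": 8, "acceptability": 7},
    "rule-based":      {"effectiveness": 7, "feasibility": 6, "adaptability": 3, "acceptability": 5},
    "auditing-based":  {"effectiveness": 6, "feasibility": 6, "adaptability": 6, "acceptability": 6},
    "risk-based":      {"effectiveness": 8, "feasibility": 7, "adaptability": 7, "acceptability": 8},
}

def weighted_score(approach: str) -> float:
    """Collapse one approach's criterion ratings into a single weighted value."""
    return sum(CRITERIA_WEIGHTS[c] * s for c, s in SCORES[approach].items())

def rank_approaches() -> list[tuple[str, float]]:
    """Return (approach, score) pairs ordered from most to least preferred."""
    return sorted(((a, weighted_score(a)) for a in SCORES), key=lambda t: -t[1])

if __name__ == "__main__":
    for approach, score in rank_approaches():
        print(f"{approach:16s} {score:.2f}")
```

With these illustrative numbers the risk-based approach comes out on top, mirroring the study's finding; changing the weights shows how sensitive such rankings are to the criteria a panel emphasizes.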
Dealing Effectively with Shadow IT by Managing Both Cybersecurity and User Needs
Steffi Haag, Andreas Eckhardt
This study analyzes how companies can manage the use of unauthorized technology, known as Shadow IT. Through interviews with 44 employees across 34 companies, the research identifies four common approaches organizations take and provides 10 recommendations for IT leaders to effectively balance security risks with the needs of their employees.
Problem
Employees often use unapproved apps and services (Shadow IT) to be more productive, but this creates significant cybersecurity risks like data leaks and malware infections. Companies struggle to eliminate this practice without hindering employee efficiency. The challenge lies in finding a balance between enforcing security policies and meeting the legitimate technology needs of users.
Outcome
- Four distinct organizational archetypes for managing Shadow IT were identified, each resulting in different levels of unauthorized technology use (from very little to very frequent). - Shadow IT users are categorized into two types: tech-savvy 'Goal-Oriented Actors' (GOAs) who carefully manage risks, and less aware 'Followers' who pose a greater threat. - Effective management of Shadow IT is possible by aligning cybersecurity policies with user needs through transparent communication and responsive IT support. - The study offers 10 practical recommendations, including accepting the existence of Shadow IT, creating dedicated user experience teams, and managing different user types differently to harness benefits while minimizing risks.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Host: Today, we’re diving into a challenge every modern business faces: unauthorized technology in the workplace. We’ll be exploring a fascinating study titled, "Dealing Effectively with Shadow IT by Managing Both Cybersecurity and User Needs." Host: With me is our expert analyst, Alex Ian Sutherland. Alex, thanks for joining us. Expert: It's great to be here, Anna. Host: So, this study analyzes how companies can manage the use of unauthorized technology, known as Shadow IT. It identifies common approaches organizations take and provides recommendations for IT leaders. To start, Alex, what exactly is this "Shadow IT" and why is it such a big problem? Expert: Absolutely. Shadow IT is any software, app, or service that employees use for work without official approval from their IT department. Think of teams using Trello for project management, WhatsApp for quick communication, or Dropbox for file sharing, all because it helps them work faster. Host: That sounds pretty harmless. Employees are just trying to be more productive, right? Expert: That's the motivation, but it's a double-edged sword. While it can boost efficiency, it creates massive cybersecurity risks. The study points out that this practice can lead to data leaks, regulatory breaches like GDPR violations, and malware infections. In fact, research cited in the study suggests incidents linked to Shadow IT can cost a company over 4.8 million dollars. Host: Wow, that’s a significant risk. So how did the researchers in this study get to the bottom of this dilemma? Expert: They took a very direct approach. Over a period of more than three years, they conducted in-depth interviews with 44 employees across 34 different companies in various industries. 
This allowed them to understand not just what companies were doing, but how employees perceived and reacted to those IT policies. Host: And what were the big 'aha' moments from all that research? What did they find? Expert: They discovered a few crucial things. First, there's no one-size-fits-all approach. They identified four distinct patterns, or "archetypes," for how companies manage Shadow IT. These ranged from a media company with very strict security but also highly responsive IT support, which resulted in almost no Shadow IT, to a large automotive supplier with confusing rules and unhelpful IT, where Shadow IT was rampant. Host: So the company's own actions can either encourage or discourage this behavior. What else stood out? Expert: The second major finding was that not all users of Shadow IT are the same. The study categorizes them into two types. First, you have the 'Goal-Oriented Actors', or GOAs. These are tech-savvy employees who understand the risks and use unapproved tools carefully to achieve specific goals. Host: And the second type? Expert: The second type are 'Followers'. These employees often mimic the Goal-Oriented Actors but lack a deep understanding of the technology or the security implications. They pose a much greater risk to the organization. Host: That’s a critical distinction. So this brings us to the most important question for our listeners. Based on these findings, what should a business leader actually do? What are the key takeaways? Expert: The study provides ten clear recommendations, but I'll highlight three that are most impactful. First, and this is fundamental: accept that Shadow IT exists. You can’t completely eliminate it, so the goal should be to manage it effectively, not just ban it. Host: Okay, so acceptance is step one. What's next? Expert: Second, manage those two user types differently. Instead of punishing your tech-savvy 'Goal-Oriented Actors', leaders should harness their expertise. 
View them as an extension of your IT team. They can help identify useful new tools and pinpoint outdated security policies. For the 'Followers', the focus should be on education and providing them with better, approved tools so they don't have to look elsewhere. Host: That’s a really smart way to turn a problem into an asset. What’s the final takeaway? Expert: The third takeaway is to listen to your users. The study showed that Shadow IT thrives when official IT is slow, bureaucratic, and unresponsive. The researchers recommend creating a dedicated User Experience team, or at least a formal feedback channel, that actively works to solve employee IT challenges. When you meet user needs, you reduce their incentive to go into the shadows. Host: So, to summarize: Shadow IT is a complex issue, but it’s manageable. Leaders need to accept its existence, work with their savvy employees instead of against them, and most importantly, ensure their official IT support is responsive to what people actually need to do their jobs. Host: Alex, this has been incredibly insightful. Thank you for breaking down this complex topic for us. Expert: My pleasure, Anna. It’s a crucial conversation for any modern organization to be having. Host: And thank you to our audience for tuning in to A.I.S. Insights, powered by Living Knowledge. Join us next time as we uncover more valuable insights from the world of business and technology.
Shadow IT, Cybersecurity, IT Governance, User Needs, Risk Management, Organizational Culture, IT Policy
The Importance of Board Member Actions for Cybersecurity Governance and Risk Management
Jeffrey G. Proudfoot, W. Alec Cram, Stuart Madnick, Michael Coden
This study investigates the challenges boards of directors face in providing effective cybersecurity oversight. Drawing on in-depth interviews with 35 board members and cybersecurity experts, the paper identifies four core challenges and proposes ten specific actions boards can take to improve their governance and risk management capabilities.
Problem
Corporate boards are increasingly held responsible for cybersecurity governance, yet they are often ill-equipped to handle this complex and rapidly evolving area. This gap between responsibility and expertise creates significant risk for organizations, as boards may struggle to ask the right questions, properly assess risk, and provide meaningful oversight.
Outcome
- The study identified four primary challenges for boards: 1) inconsistent attitudes and governance approaches, 2) ineffective interaction dynamics with executives like the CISO, 3) a lack of sufficient cybersecurity expertise, and 4) navigating expanding and complex regulations. - Boards must acknowledge that cybersecurity is an enterprise-wide operational risk, not just an IT issue, and gauge their organization's cybersecurity maturity against industry peers. - Board members should focus on the business implications of cyber threats rather than technical details and must demand clear, jargon-free communication from executives. - To address expertise gaps, boards should determine their need for expert advisors and actively seek training, such as tabletop cyberattack simulations. - Boards must understand that regulatory compliance does not guarantee sufficient security and should guide the organization to balance compliance with proactive risk mitigation.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers, and with me today is our expert analyst, Alex Ian Sutherland. Host: Alex, today we’re diving into a crucial topic for every modern business: cybersecurity at the board level. We're looking at a study titled "The Importance of Board Member Actions for Cybersecurity Governance and Risk Management." Host: In a nutshell, this study explores the huge challenges boards of directors face with cyber oversight and gives them a clear, actionable roadmap to improve. Expert: Exactly, Anna. It’s a critical conversation because the stakes have never been higher. Host: Let’s start there. What is the big, real-world problem this study addresses? Why is board-level cybersecurity such a hot-button issue right now? Expert: The core problem is a massive gap between responsibility and capability. Boards are legally and financially responsible for overseeing cybersecurity, but many directors are simply not equipped for the task. They don't come from tech backgrounds. Expert: The study found this creates significant risk. One board member was quoted saying, "Every board knows that cyber is a threat... How they manage it is still the wild west." Host: The wild west. That’s a powerful image. It suggests a lack of clear rules or understanding. Expert: It's true. Boards often don't know the right questions to ask, how to interpret the technical reports they're given, or how to provide meaningful guidance. This leaves their organizations incredibly vulnerable. Host: So how did the researchers get this inside look at the boardroom? What was their approach? Expert: They went straight to the source. The research is based on in-depth interviews with 35 people on the front lines—current board members, CISOs, CEOs, and other senior executives from a wide range of industries, including finance, healthcare, and technology. Host: So they captured real-world experience, not just theory. 
What were some of the key challenges they uncovered? Expert: The study pinpointed four primary challenges, but two really stood out. First, inconsistent attitudes and governance approaches. And second, ineffective interaction dynamics between the board and the company's security executives. Host: Let's unpack that. What does an 'inconsistent attitude' look like in practice? Expert: It can be complacency. Some boards see a dashboard report that’s mostly ‘green’ and assume everything is fine, creating a false sense of security. Others might think that because they haven't been hit by a major attack yet, they won't be. It's a dangerous mindset. Host: And what about the 'ineffective interaction' with executives like the Chief Information Security Officer, or CISO? Expert: This is crucial. The study highlights a major communication breakdown. You can have a brilliant CISO who can’t explain risk in simple business terms. They get lost in technical jargon, and the board tunes out. One board member said when that happens, "you get the blank stares and no follow-up questions." Host: That communication gap sounds like the biggest risk of all. So this brings us to the most important question, Alex. Why does this matter for business, and what are the key takeaways for leaders listening right now? Expert: The study provides ten clear actions, which we can group into a few key takeaways. First is a mindset shift. The board must acknowledge that cybersecurity is an enterprise-wide operational risk, not just an IT problem. It belongs in the same category as financial or legal risk. Host: It’s a core business function. What’s next? Expert: Better communication. Boards must demand clarity. They should tell their security leaders, "Don't get into the technical weeds, focus on the business implications." It's not the board's job to pick the technology, but it is their job to understand the strategic risk. Host: So, focus on the 'what' and 'why,' not the 'how'. 
What about the expertise gap you mentioned earlier? How do boards solve that? Expert: They need a plan to bridge that gap. This doesn't mean every director needs to become a coder. It means deciding if they need to bring in an expert advisor or add a director with a cyber background. And crucially, it means training. Host: What kind of training is most effective? Expert: The study strongly recommends tabletop cyberattack simulations. These are essentially practice drills where the board and executive team walk through a realistic cyber crisis scenario. Host: Like a fire drill for a data breach. Expert: Precisely. It makes the threat real and reveals the weak points in your response plan before you’re in an actual crisis. It moves the plan from paper to practice. Host: And what’s the final key takeaway for our audience? Expert: It’s simple: compliance is not security. Checking off boxes for regulators does not guarantee your organization is protected. Boards must push management to go beyond the minimum requirements and focus on proactive, genuine risk mitigation. Host: That’s a fantastic summary, Alex. So, to recap for our listeners: Boards must own cybersecurity as a core business risk, demand clear, business-focused communication, proactively address their own expertise gaps through training and simulations, and remember that just being compliant isn't enough. Host: Alex Ian Sutherland, thank you so much for breaking down this vital research for us. Expert: My pleasure, Anna. Host: And a big thank you to our audience for tuning in. This has been A.I.S. Insights — powered by Living Knowledge.
Identifying and Filling Gaps in Operational Technology Cybersecurity
Nico Abbatemarco, Hans Brechbühl
This study identifies critical gaps in Operational Technology (OT) cybersecurity by drawing on insights from 36 leaders across 14 global corporations. It analyzes the organizational challenges that hinder the successful implementation of OT cybersecurity, going beyond purely technical issues. The research provides practical recommendations for managers to bridge these security gaps effectively.
Problem
As industrial companies embrace 'Industry 4.0', their operational technology (OT) systems, which control physical processes, are becoming increasingly connected to digital networks. This connectivity introduces significant cybersecurity risks that can halt production and cause substantial financial loss, yet many organizations struggle to implement robust security due to organizational, rather than technical, obstacles.
Outcome
- Cybersecurity in OT projects is often treated as an afterthought, bolted on at the end rather than integrated from the start. - Cybersecurity teams typically lack the authority, budget, and top management support needed to enforce security measures in OT environments. - There is a severe shortage of personnel with expertise in both OT and cybersecurity, and a cultural disconnect exists between IT and OT teams. - Priorities are often misaligned, with OT personnel focusing on uptime and productivity, viewing security measures as hindrances. - The tangible benefits of cybersecurity are difficult to recognize and quantify, making it hard to justify investments until a failure occurs.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're digging into a critical issue for any company with physical operations. We're looking at a new study from MIS Quarterly Executive titled "Identifying and Filling Gaps in Operational Technology Cybersecurity". In short, it explores the deep organizational challenges that stop businesses from properly securing the technology that runs their factories and industrial sites. Here to break it down for us is our analyst, Alex Ian Sutherland. Alex, welcome. Expert: Great to be here, Anna. Host: Alex, let's start with the basics. We all hear about IT, or Information Technology. What is OT, Operational Technology, and why is it suddenly such a big concern? Expert: Of course. Think of OT as the technology that controls the physical world. It’s the hardware and software running everything from robotic arms on an assembly line to the control systems in a power plant. Historically, these systems were isolated, completely disconnected from the internet. But now, with Industry 4.0, companies are connecting them to their IT networks to get data and improve efficiency. Host: And connecting them opens the door to cyberattacks. Expert: A very big door. The study highlights that this isn't a theoretical risk. It points to a 100-150% surge in cyberattacks against the manufacturing sector in recent years. And an attack on OT isn't about stealing customer data; it’s about shutting down production. The study found a successful breach can cost a company anywhere from 3 to 7 million dollars per incident and halt operations for an average of four days. Host: That’s a massive business disruption. So how did the researchers in this study get to the root of why this is so hard to solve? Expert: They focused on the people and the organization, not just the tech. 
They conducted a series of in-depth focus groups with 36 senior leaders—people like Chief Information Officers and Chief Information Security Officers—from 14 major global corporations in manufacturing, energy, and logistics. They wanted to understand the human and structural roadblocks. Host: And what did these leaders say? What are the key findings? Expert: They found a consistent set of organizational gaps. The first is that cybersecurity is often treated as an afterthought. One security leader used the phrase "bolted on afterwards," which perfectly captures the problem. They build a new system and then try to wrap security around it at the end. Host: Why does that happen? Is it a technical oversight? Expert: It’s more of a cultural problem, which is the second major finding. There’s a huge disconnect between the IT cybersecurity teams and the OT plant-floor teams. The OT engineers prioritize uptime and productivity above all else. To them, a security update that requires shutting down a machine, even for an hour, is a direct hit to production value. Host: So the two teams have completely different priorities. Expert: Exactly. One director in the study described a situation where his factory team saw the central security staff as people who were just "reading a policy sheet," without understanding "what's really going on" in the plant. This leads to the third finding: cybersecurity teams in these environments often lack real authority, budget, and support from top management to enforce security rules. Host: I can imagine it's difficult to get budget to prevent a problem that hasn't happened yet. Expert: That's the final key finding. The study participants said the tangible benefits of good cybersecurity are almost invisible. It’s a classic case of "you don't know it's working until it fails." This makes it incredibly hard to justify the investment compared to, say, a new machine that will clearly increase output. Host: This is a complex organizational puzzle. 
So, for the business leaders listening, what are the practical takeaways? Why does this matter for them, and what can they do? Expert: This is the most important part. The study offers three clear recommendations that I'd frame as key business takeaways. First: you have to bridge the cultural divide. This isn't about IT forcing rules on OT. It’s about creating mutual understanding through cross-training, and even creating new roles for people who can speak both languages—technology and operations. The goal should be "Security by Design," baked in from the start. Host: So, build bridges, not walls. What's the second takeaway? Expert: Empower your security leadership. A Chief Information Security Officer, or CISO, needs real authority that extends to the factory floor, with the budget and C-suite backing to make critical decisions. One executive in the study recounted how it took a cyberattack simulation that showed the board how an incident could "bring us to our knees" to finally get the necessary support and funding. Host: It sounds like leadership needs to feel the risk to truly act on it. What’s the final piece of advice? Expert: Find the win-win. Don't frame cybersecurity as just a cost or a blocker. The study found that collaboration can lead to unexpected benefits. For instance, one company installed security monitoring tools, which had the side effect of giving the engineering team incredible new visibility into their own processes, which they then used to optimize the entire factory. Security actually became a business enabler. Host: That’s a powerful shift in perspective. To summarize, then: the growing risk to our industrial systems is fundamentally an organizational problem, not a technical one. The solution involves bridging the cultural gap between operations and security teams, empowering security leaders with real authority, and actively looking for ways that good security can also drive business value. Alex, this has been incredibly insightful. 
Thank you for joining us. Expert: My pleasure, Anna. Host: And thank you to our listeners for tuning into A.I.S. Insights. Join us next time as we continue to explore the ideas shaping business and technology.
Operational Technology, OT Cybersecurity, Industry 4.0, Cybersecurity Gaps, Risk Management, Industrial Control Systems, Technochange
How to Design a Better Cybersecurity Readiness Program
This study explores the common pitfalls of four types of cybersecurity training by interviewing employees at large accounting firms. It identifies four unintended negative consequences of mistraining and overtraining and, in response, proposes the LEAN model, a new framework for designing more effective cybersecurity readiness programs.
Problem
Organizations invest heavily in cybersecurity readiness programs, but these initiatives often fail due to poor design, leading to mistraining and overtraining. This not only makes the training ineffective but can also create adverse effects like employee anxiety and fatigue, paradoxically amplifying an organization's cyber vulnerabilities instead of reducing them.
Outcome
- Conventional cybersecurity training often leads to four adverse effects on employees: threat anxiety, security fatigue, risk passivity, and cyber hesitancy. - These individual effects cause significant organizational problems, including erosion of individual performance, fragmentation of team dynamics, disruption of client experiences, and stagnation of the security culture. - The study proposes the LEAN model to counteract these issues, based on four strategies: Localize, Empower, Activate, and Normalize. - The LEAN model recommends tailoring training to specific roles (Localize), fostering ownership and authority (Empower), promoting coordinated action through collaborative exercises (Activate), and embedding security into daily operations to build a proactive culture (Normalize).
Host: Welcome to A.I.S. Insights, the podcast where we connect Living Knowledge with business innovation. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating new study called "How to Design a Better Cybersecurity Readiness Program." With me is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: This study explores the common pitfalls of cybersecurity training, looking at what happens when we mistrain or overtrain employees. More importantly, it proposes a new framework for getting it right.
Host: So, Alex, let's start with the big picture. Companies are pouring billions into cybersecurity training. What's the problem this study identified?
Expert: The problem is that much of that investment is wasted. The study shows that poorly designed training doesn't just fail to work; it can actually make things worse.
Host: Worse? How so?
Expert: Instead of reducing risk, it can create what the study calls adverse effects, like extreme anxiety about security, or a kind of burnout called security fatigue. Paradoxically, this can amplify an organization's vulnerabilities.
Host: So our attempts to build a human firewall are actually creating cracks in it. How did the researchers uncover this? What was their approach?
Expert: They went straight to the source. They conducted in-depth interviews with 23 employees at the four major U.S. accounting firms—organizations that are on the front lines of handling sensitive client data.
Host: And what were the key findings from those interviews? What are these negative side effects you mentioned?
Expert: The study identified four main consequences. The first is Threat Anxiety, where employees become so hyper-aware and fearful of making a mistake that their productivity drops. They second-guess every email they open.
Host: I can imagine that. What's next?
Expert: Second is Security Fatigue. This is cognitive burnout from constant alerts, repetitive training, and complex rules. Employees get overwhelmed and simply tune out, which is incredibly dangerous.
Host: It sounds like alarm fatigue for the inbox.
Expert: Exactly. The third is Risk Passivity, which is a paradoxical outcome. Some employees become so desensitized by constant warnings that they start ignoring real threats. Others become paralyzed by the perceived risk of every action.
Host: And the last one?
Expert: The fourth is Cyber Hesitancy. This is a reluctance to use new tools or even collaborate with colleagues for fear of blame. It creates a culture of suspicion, not security. The study found this fragments team dynamics and stalls innovation.
Host: These sound like serious cultural issues, not just IT problems. This brings us to the most important question for our listeners: why does this matter for business, and what's the solution?
Expert: It matters because the old approach is broken. The study proposes a new framework to fix it, called the LEAN model. It's an acronym for four key strategies.
Host: Okay, break it down for us. What does LEAN stand for?
Expert: The 'L' is for Localize. It means stop the one-size-fits-all training. Tailor the content to an employee's specific role. What an accountant needs to know is different from someone in marketing.
Host: That makes sense. What about 'E'?
Expert: 'E' is for Empower. This is about fostering ownership. Instead of just pushing rules, involve employees in creating and improving security protocols. This gives them a real stake in the outcome.
Host: From passive recipient to active participant. I like it. What's 'A'?
Expert: 'A' is for Activate. This means moving beyond solo quizzes to collaborative, team-based exercises. Let teams practice responding to a simulated threat together, fostering coordinated action and mastery.
Host: And finally, 'N'?
Expert: 'N' is for Normalize. This is the goal: embed security so deeply into daily operations that it becomes a natural part of the workflow, not a separate, dreaded task. It reframes security as a business enabler, not a barrier.
Host: So, to summarize, it seems the core message is that our cybersecurity training is often counterproductive, creating negative effects like fatigue and anxiety.
Host: The solution is a more human-focused, LEAN approach: Localize the training, Empower employees to take ownership, Activate teamwork through practice, and Normalize security into the company culture.
Host: Alex, thank you for breaking that down for us. It's a powerful new way to think about security.
Expert: My pleasure, Anna.
Host: And thank you to our listeners for tuning into A.I.S. Insights — powered by Living Knowledge. Join us next time as we explore the latest research impacting your business.
How Large Companies Can Help Small and Medium-Sized Enterprise (SME) Suppliers Strengthen Cybersecurity
Jillian K. Kwong, Keri Pearlson
This study investigates the cybersecurity challenges faced by small and medium-sized enterprise (SME) suppliers and proposes actionable strategies for large companies to help them improve. Based on interviews with executives and cybersecurity experts, the paper identifies key barriers SMEs encounter and outlines five practical actions large firms can take to strengthen their supply chain's cyber resilience.
Problem
Large companies increasingly require their smaller suppliers to meet the same stringent cybersecurity standards they do, creating a significant burden for SMEs with limited resources. This gap creates a major security vulnerability, as attackers often target less-secure SMEs as a backdoor to access the networks of larger corporations, posing a substantial third-party risk to entire supply chains.
Outcome
- SME suppliers are often unable to meet the security standards of their large partners due to four key barriers: unfriendly regulations, organizational culture clashes, variability in cybersecurity frameworks, and misalignment of business processes. - Large companies can proactively strengthen their supply chain by providing SMEs with the resources and expertise needed to understand and comply with regulations. - Creating incentives for meeting security benchmarks is more effective than penalizing suppliers for non-compliance. - Large firms should develop programs to help SMEs elevate their cybersecurity culture and align security processes with their own. - Coordinating with other large companies to standardize cybersecurity frameworks and assessment procedures can significantly reduce the compliance burden on SMEs.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I’m your host, Anna Ivy Summers. In today's interconnected world, your company’s security is only as strong as its weakest link. And often, that link is a small or medium-sized supplier.
Host: With me today is our analyst, Alex Ian Sutherland, to discuss a recent study titled, "How Large Companies Can Help Small and Medium-Sized Enterprise Suppliers Strengthen Cybersecurity." Alex, welcome.
Expert: Thanks for having me, Anna. This is a critical topic. The study investigates the cybersecurity challenges smaller suppliers face and, more importantly, proposes actionable strategies for large companies to help them improve.
Host: So let's start with the big problem here. Why is the gap in cybersecurity between large companies and their smaller suppliers such a major risk?
Expert: It’s a massive vulnerability. Large companies demand their smaller suppliers meet the same stringent security standards they do. But for an SME with limited staff and budget, that's often an impossible task. Attackers know this. They specifically target less-secure suppliers as a backdoor into the networks of their bigger clients.
Host: Can you give us a real-world example of that?
Expert: Absolutely. The study reminds us of the infamous 2013 data breach at Target. The hackers didn't attack Target directly at first. They got in using credentials stolen from a small, third-party HVAC vendor. That single point of entry ultimately exposed the data of over 100 million customers. It’s a classic case of the supply chain being the path of least resistance.
Host: A sobering reminder. So how did the researchers in this study approach such a complex issue?
Expert: They went straight to the source. The study is based on 27 in-depth interviews with executives, cybersecurity leaders, and supply chain managers from both large corporations and small suppliers. They gathered insights from people on the front lines who deal with these challenges every single day.
Host: And what were the biggest takeaways from those conversations? What did they find are the main barriers for these smaller companies?
Expert: The study identified four key barriers. The first is what they call "unfriendly regulation." Most cybersecurity rules are designed for big companies with legal and compliance departments. SMEs often lack the expertise to even understand them.
Host: So the rules themselves are a hurdle. What’s the second barrier?
Expert: Organizational culture clashes. For an SME, the primary focus is keeping the business running and getting products out the door. Cybersecurity can feel like a costly, time-consuming distraction, so it constantly gets pushed to the back burner.
Host: That makes sense. And the other two barriers?
Expert: Framework variability and process misalignment. Imagine being a small supplier for five different large companies, and each one asks you to comply with a slightly different security framework. One interviewee described it as "trying to navigate a sea of frameworks in a rowboat, without a map or radio." It creates a huge, confusing compliance burden.
Host: That's a powerful image. It really frames this as a partnership problem, not just a technology problem. So this brings us to the most important question for our listeners: what can businesses actually *do* about it?
Expert: This is the core of the study. It moves beyond just identifying problems to proposing five concrete actions large companies can take. First, provide your SME suppliers with the resources and expertise they lack. This could be workshops, access to your legal teams, or clear guidance on how to comply with regulations.
Host: So it's about helping, not just demanding. What’s the next action?
Expert: Create positive incentives. The study found that punishing suppliers for non-compliance is far less effective than rewarding them for meeting security benchmarks. One CTO put it perfectly: suppliers need to be rewarded for their security efforts, not just punished for failure. This changes the dynamic from a chore to a shared goal.
Host: I like that reframing. What else?
Expert: The third and fourth actions are linked. Large firms should develop programs to help SMEs elevate their security culture. And, crucially, they should coordinate with other large companies to standardize security frameworks and assessments. If competitors can agree on one common questionnaire, it saves every SME countless hours of redundant work.
Host: That seems like such a common-sense solution. What's the final recommendation?
Expert: Bring cybersecurity into the procurement process from the very beginning. Too often, security is an afterthought, brought in after a deal is already signed. This leads to delays and friction. By discussing security expectations upfront, you ensure it's a foundational part of the partnership.
Host: So, to summarize, this isn't about forcing smaller suppliers to fend for themselves. It’s about large companies taking proactive steps: providing resources, offering incentives, standardizing requirements, and making security a day-one conversation.
Expert: Exactly. The study’s main message is that strengthening your supply chain's cybersecurity is an act of partnership. When you help your suppliers become more secure, you are directly helping yourself.
Host: A powerful and practical takeaway. Alex, thank you for breaking this down for us.
Expert: My pleasure, Anna.
Host: And thanks to our audience for tuning in to A.I.S. Insights. Join us next time as we continue to explore the intersection of business, technology, and Living Knowledge.
Cybersecurity, Supply Chain Management, Third-Party Risk, Small and Medium-Sized Enterprises (SMEs), Cyber Resilience, Vendor Risk Management
How Boards of Directors Govern Artificial Intelligence
Benjamin van Giffen, Helmuth Ludwig
This study investigates how corporate boards of directors oversee and integrate Artificial Intelligence (AI) into their governance practices. Based on in-depth interviews with high-profile board members from diverse industries, the research identifies common challenges and provides examples of effective strategies for board-level AI governance.
Problem
Despite the transformative impact of AI on the business landscape, the majority of corporate boards struggle to understand its implications and their role in governing it. This creates a significant gap, as boards have a fiduciary responsibility to oversee strategy, risk, and investment related to critical technologies, yet AI is often not a mainstream boardroom topic.
Outcome
- Identified four key groups of board-level AI governance issues: Strategy and Firm Competitiveness, Capital Allocation, AI Risks, and Technology Competence. - Boards should ensure AI is integrated into the company's core business strategy by evaluating its impact on the competitive landscape and making it a key topic in annual strategy meetings. - Effective capital allocation involves encouraging AI experimentation, securing investments in foundational AI capabilities, and strategically considering external partnerships and acquisitions. - To manage risks, boards must engage with experts, integrate AI-specific risks into Enterprise Risk Management (ERM) frameworks, and address ethical, reputational, and legal challenges. - Enhancing technology competence requires boards to develop their own AI literacy, review board and committee composition for relevant expertise, and include AI competency in executive succession planning.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a critical topic for every company leader: governance. Specifically, we're looking at a fascinating new study titled "How Boards of Directors Govern Artificial Intelligence."
Host: It investigates how corporate boards oversee and integrate AI into their governance practices, based on interviews with high-profile board members. Here to break it all down for us is our analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: Let's start with the big picture. We hear a lot about AI's potential, but what's the real-world problem this study is trying to solve for boards?
Expert: The problem is a major governance gap. The study points out that while AI is completely reshaping the business landscape, most corporate boards are struggling to understand it. They have a fiduciary duty to oversee strategy, risk, and major investments, but AI often isn't even a mainstream topic in the boardroom.
Host: So, management might be racing ahead with AI, but the board, the ultimate oversight body, is being left behind?
Expert: Exactly. And that's risky. AI requires huge, often uncertain, capital investments. It also introduces entirely new legal, ethical, and reputational risks that many boards are simply not equipped to handle. This gap between the technology's impact and the board's understanding is what the study addresses.
Host: How did the researchers get inside the boardroom to understand this dynamic? What was their approach?
Expert: They went straight to the source. The research is based on a series of in-depth, confidential interviews with sixteen high-profile board members from a huge range of industries—from tech and finance to healthcare and manufacturing. They also spoke with executive search firms to understand what companies are looking for in new directors.
Host: So, based on those conversations, what were the key findings? What are the big themes boards need to be thinking about?
Expert: The study organized the challenges into four key groups. The first is Strategy and Firm Competitiveness. Boards need to ensure AI is actually integrated into the company’s core strategy, not just a flashy side project.
Host: Meaning they should be asking how AI will help the company win in the market?
Expert: Precisely. The second is Capital Allocation. This is about more than just signing checks. It's about encouraging experimentation—what the study calls ‘lighthouse projects’—and making strategic investments in foundational capabilities, like data platforms, that will pay off in the long run.
Host: That makes sense. What's the third group?
Expert: AI Risks. This is a big one. We're not just talking about a system crashing. Boards need to oversee ethical risks, like algorithmic bias, and major reputational and legal risks. The recommendation is to integrate these new AI-specific risks directly into the company’s existing Enterprise Risk Management framework.
Host: And the final one?
Expert: It's called Technology Competence. And this is crucial—it applies to the board itself.
Host: Does that mean every board director needs to become a data scientist?
Expert: Not at all. It’s about developing AI literacy—understanding the business implications. The study found that leading boards are actively reviewing their composition to ensure they have relevant expertise and, importantly, they're including AI competency in CEO and executive succession planning.
Host: That brings us to the most important question, Alex. For the business leaders and board members listening, why does this matter? What is the key takeaway they can apply tomorrow?
Expert: The most powerful and immediate thing a board can do is start asking the right questions. The board's role isn't necessarily to have all the answers, but to guide the conversation and ensure management is thinking through the critical issues.
Host: Can you give us an example of a question a director should be asking?
Expert: Certainly. For strategy, they could ask: "How are our competitors using AI, and how does our approach give us a competitive advantage?" On risk, they might ask: "What is our framework for evaluating the ethical risks of a new AI system before it's deployed?" These questions signal the board's priorities and drive accountability.
Host: So, the first step is simply opening the dialogue.
Expert: Yes. That's the catalyst. The study makes it clear that in many companies, if the board doesn't start the conversation on AI governance, no one will.
Host: A powerful call to action. To summarize: this study shows that boards have a critical and urgent role in governing AI. They need to focus on four key areas: weaving AI into strategy, allocating capital wisely, managing new and complex risks, and building their own technological competence.
Host: And the journey begins with asking the right questions. Alex Ian Sutherland, thank you for these fantastic insights.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights. Join us next time as we continue to explore the ideas shaping business and technology.
AI governance, board of directors, corporate governance, artificial intelligence, strategic management, risk management, technology competence
Experiences and Lessons Learned at a Small and Medium-Sized Enterprise (SME) Following Two Ransomware Attacks
Donald Wynn, Jr., W. David Salisbury, Mark Winemiller
This paper presents a case study of a small U.S. manufacturing company that suffered two distinct ransomware attacks four years apart, despite strengthening its cybersecurity after the first incident. The study analyzes both attacks, the company's response, and the lessons learned from the experiences. The goal is to provide actionable recommendations to help other small and medium-sized enterprises (SMEs) improve their defenses and recovery strategies against evolving cyber threats.
Problem
Small and medium-sized enterprises (SMEs) face unique cybersecurity challenges due to significant resource constraints compared to larger corporations. They often lack the financial capacity, specialized expertise, and trained workforce to implement and maintain adequate technical and procedural controls. This vulnerability is increasingly exploited by cybercriminals, with a high percentage of ransomware attacks specifically targeting these smaller, less-defended businesses.
Outcome
- All businesses are targets: The belief in 'security by obscurity' is a dangerous misconception; any online presence makes a business a potential target for cyberattacks. - Comprehensive backups are essential: Backups must include not only data but also system configurations and software to enable a full and timely recovery. - Management buy-in is critical: Senior leadership must understand the importance of cybersecurity and provide the necessary funding and organizational support for robust defense measures. - People are a key vulnerability: Technical defenses can be bypassed by human error, as demonstrated by the second attack which originated from a phishing email, underscoring the need for continuous employee training. - Cybercrime is an evolving 'arms race': Attackers are becoming increasingly sophisticated, professional, and organized, requiring businesses to continually adapt and strengthen their defenses.
Host: Welcome to A.I.S. Insights — powered by Living Knowledge. I'm your host, Anna Ivy Summers. Today we're diving into a story that serves as a powerful warning for any business operating online. We're looking at a study titled, "Experiences and Lessons Learned at a Small and Medium-Sized Enterprise (SME) Following Two Ransomware Attacks".
Host: With me is our analyst, Alex Ian Sutherland. Alex, this study follows a small U.S. manufacturing company that was hit by ransomware not once, but twice, despite strengthening its security after the first incident. It’s a real-world look at how businesses can defend and recover from these evolving threats.
Expert: It is, Anna. And it's a critical topic.
Host: So, let's start with the big problem. We often hear about massive corporations getting hacked. Why does this study focus on smaller businesses?
Expert: Because they are the primary target. SMEs face unique challenges due to resource constraints. They often lack the financial capacity or specialized staff to build robust cyber defenses. The study points out that a huge percentage of ransomware attacks—over 80% in some reports—are aimed specifically at these smaller, less-defended companies. Cybercriminals see them as easy targets.
Host: To explore this, what approach did the researchers take?
Expert: They conducted an in-depth case study of one company. By focusing on this single manufacturing firm, they could analyze the two attacks in detail—one in 2017 and a second, more advanced attack in 2021. They documented the company's response, the financial and operational impact, and the critical lessons learned from both experiences.
Host: Getting hit twice provides a unique perspective. What was the first major finding from this?
Expert: The first and most fundamental finding was that all businesses are targets. Before the 2017 attack, the company's management believed in 'security by obscurity'—they assumed they were too small, and not in a high-value industry like finance, to be of interest to attackers. That was a costly mistake.
Host: A wake-up call, for sure. After that first attack, they tried to recover. What did they learn from that process?
Expert: They learned that comprehensive backups are absolutely essential. They had backups of their data, but not their system configurations or software. This meant recovery was a slow, painful process of rebuilding servers from scratch, leading to almost two weeks of downtime for critical systems.
Host: That kind of downtime could kill a small business. You mentioned management's mindset was a problem initially. Did that change?
Expert: It changed overnight. The third finding is that management buy-in is critical. The IT director had struggled to get funding for security before the attack. Afterwards, the threat became real. He was promoted to Vice President, and the study quotes him saying, “Finding cybersecurity dollars was no longer difficult.”
Host: So with new funding and better technology, they were prepared. But they still got hit a second time. How did that happen?
Expert: This highlights the fourth key finding: people are a key vulnerability. The second, more sophisticated attack in 2021 didn't break through a firewall; it walked in the front door through a phishing email that a single employee clicked. It proved that technology alone isn't enough.
Host: It's a classic problem. And what did that second attack reveal about the attackers themselves?
Expert: It showed that cybercrime is an evolving 'arms race'. The first attack was relatively crude. The second was from a highly professional ransomware group called REvil, which operates like a criminal franchise. They used a 'double extortion' tactic—not just encrypting the company's data, but also stealing it and threatening to release sensitive HR files publicly.
Host: That's terrifying. So, Alex, this is the most important question for our listeners. What are the practical takeaways? Why does this matter for their business?
Expert: There are four key actions every business leader should take. First, accept that you are a target, no matter your size or industry. Budget for cybersecurity proactively, don't wait for a disaster.
Expert: Second, ensure your backups are truly comprehensive and test your disaster recovery plan. You need to be able to restore entire systems, not just data, and you need to know that it actually works.
Expert: Third, invest in your people. Continuous security awareness training is not optional; it’s one of your most effective defenses against threats like phishing that target human error.
Expert: And finally, build relationships with external experts *before* you need them. For the second attack, the company had an incident response firm on retainer. Having experts to call immediately made a massive difference. You don’t want to be looking for help in the middle of a crisis.
Host: Powerful advice. To summarize: assume you're a target, build and test a full recovery plan, train your team relentlessly, and have experts on speed dial. This isn't just a technology problem; it's a business continuity problem.
Host: Alex Ian Sutherland, thank you for sharing these critical insights with us.
Expert: My pleasure, Anna.
Host: And thank you for tuning into A.I.S. Insights, powered by Living Knowledge. Join us next time as we translate academic research into actionable business strategy.
ransomware, cybersecurity, SME, case study, incident response, cyber attack, information security
Adopt Agile Cybersecurity Policymaking to Counter Emerging Digital Risks
This study investigates the need for flexibility and speed in creating and updating cybersecurity rules within organizations. Through in-depth interviews with cybersecurity professionals, the research identifies key areas of digital risk and provides practical recommendations for businesses to develop more agile and adaptive security policies.
Problem
In the face of rapidly evolving cyber threats, many organizations rely on static, outdated cybersecurity policies that are only updated after a security breach occurs. This reactive approach leaves them vulnerable to new attack methods, risks from new technologies, and threats from business partners, creating a significant security gap.
Outcome
- Update cybersecurity policies to address risks from outdated legacy systems by implementing modern digital asset and vulnerability management. - Adapt policies to address emerging technologies like AI by enhancing technology scouting and establishing a resilient cyber risk management framework. - Strengthen policies for third-party vendors by conducting agile risk assessments and regularly reviewing security controls in contracts. - Build flexible policies for disruptive external events (like pandemics or geopolitical tensions) through continuous employee training and robust business continuity plans.
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers. Today, we're diving into a study that tackles a critical issue for every modern business: cybersecurity. The study is titled, "Adopt Agile Cybersecurity Policymaking to Counter Emerging Digital Risks".
Host: It explores the urgent need for more speed and flexibility in how organizations create and update their security rules. We’re joined by our expert analyst, Alex Ian Sutherland, to break it down for us. Alex, welcome.
Expert: Thanks for having me, Anna.
Host: Let's start with the big picture. Why is this topic so important right now? What's the problem this study is addressing?
Expert: The core problem is that many businesses are trying to fight tomorrow's cyber threats with yesterday's rulebook. They often rely on static, outdated cybersecurity policies.
Host: What do you mean by static?
Expert: It means the policies are written once and then left on a shelf. They’re often only updated after the company suffers a major security breach. This reactive approach leaves them completely exposed to new attack methods, risks from new technology like AI, and even threats coming from their own business partners. It creates a massive security gap.
Host: So businesses are always one step behind. How did the researchers investigate this? What was their approach?
Expert: They went directly to the front lines. The study is based on in-depth interviews with nine senior cybersecurity leaders—people like Chief Information Security Officers and CTOs from a range of industries, including finance, technology, and telecommunications. They wanted to understand the real-world pressures and challenges these leaders face in keeping their policies effective.
Host: And what were the key findings? What are the biggest risks that demand this new, agile approach?
Expert: The study pinpointed four primary risk areas. The first is internal: outdated legacy systems. These are old software or hardware that are critical to the business but can't be easily updated to defend against modern threats.
Host: And the other three?
Expert: The other three are external. The second is the rapid pace of emerging technologies. For instance, one expert described how hackers can now use AI to clone a manager’s voice, call an employee, and trick them into revealing a password. An old policy manual won't have a procedure for that.
Host: That's terrifying. What's the third risk area?
Expert: Attacks via third parties, which is a huge one. Hackers don't attack you directly; they attack your software supplier or a contractor who has access to your systems. This is often called a supply chain attack.
Host: And the final one?
Expert: The fourth risk is disruptive external events. Think about the COVID-19 pandemic. Suddenly, everyone had to work from home, often on personal devices connecting to the company network. This required a massive, immediate change in security policy that most organizations were not prepared for.
Host: That really puts it into perspective. So, Alex, this brings us to the most important question for our listeners: why does this matter for their business, and what can they do about it?
Expert: This is the critical takeaway. The study provides a clear roadmap. It’s about shifting from a passive, 'set-it-and-forget-it' mentality to an active, continuous cycle of security improvement.
Host: Can you give us some concrete actions?
Expert: Certainly. For legacy systems, the study recommends implementing modern digital asset management. You must know what systems you have, what data they hold, and how vulnerable they are. For emerging tech like AI, it’s about proactive 'technology scouting' to anticipate new threats and having a resilient risk management framework to assess them quickly.
Host: What about those third-party risks?
Expert: Here, the study emphasizes strengthening vendor risk management. One interviewee told a story about their company losing its entire code base because a password manager they used was hacked. The lesson was clear: you need to conduct agile risk assessments of your suppliers and build clear security controls directly into your contracts. Don't just trust; verify.
Host: And for preparing for those big, disruptive events?
Expert: It comes down to two things: continuous employee training and robust business continuity plans that are tested regularly. When a crisis hits, your people need to know the procedures, and your policies need to be flexible enough to adapt without compromising security.
Host: This has been incredibly insightful. So, to sum it up, the old way of writing a security policy once every few years is no longer enough. Businesses need to treat cybersecurity policy as a living document.
Expert: Exactly. It needs to be agile and adaptive, constantly evolving to meet new threats head-on.
Host: That’s a powerful message for every leader. Alex Ian Sutherland, thank you so much for breaking down this crucial study for us.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights, powered by Living Knowledge. Join us next time as we translate another key piece of research into actionable business intelligence.
agile cybersecurity, cybersecurity policymaking, digital risk, adaptive security, risk management, third-party risk, legacy systems