Towards an AI-Based Therapeutic Assistant to Enhance Well-Being: Preliminary Results from a Design Science Research Project
Katharina-Maria Illgen, Enrico Kochon, Sergey Krutikov, and Oliver Thomas
This study introduces ELI, an AI-based therapeutic assistant designed to complement traditional therapy and enhance well-being by providing accessible, evidence-based psychological strategies. Using a Design Science Research (DSR) approach, the authors conducted a literature review and expert evaluations to derive six core design objectives and develop a simulated prototype of the assistant.
Problem
Many individuals lack timely access to professional psychological support, which has increased demand for digital interventions. However, the growing reliance on general-purpose AI tools for psychological advice carries a risk of misinformation, since these tools lack a therapeutic foundation; this highlights the need for scientifically validated, evidence-based AI solutions.
Outcome
- The study established six core design objectives for AI-based therapeutic assistants, focusing on empathy, adaptability, ethical standards, integration, evidence-based algorithms, and dependable support (see the sketch below).
- A simulated prototype, named ELI (Empathic Listening Intelligence), was developed to demonstrate the implementation of these design principles.
- Expert evaluations rated ELI positively for its accessibility, usability, and empathic support, viewing it as a beneficial tool for addressing less severe psychological issues and complementing traditional therapy.
- Key areas for improvement were identified, primarily concerning data privacy, crisis response capabilities, and the need for more comprehensive therapeutic approaches.
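For readers who want to see how the six objectives might be operationalized, the following is a minimal illustrative sketch (not from the paper) that encodes them as a checklist an expert evaluator could score. The objective names, the 1-to-5 scale, and all identifiers are assumptions for illustration.

```python
from dataclasses import dataclass

# The six core design objectives from the study, encoded as a checklist.
# Naming and structure are assumed for illustration, not taken from the
# authors' prototype.
DESIGN_OBJECTIVES = [
    "empathy",
    "adaptability",
    "ethical_standards",
    "integration",
    "evidence_based_algorithms",
    "dependable_support",
]

@dataclass
class ObjectiveRating:
    objective: str
    score: int   # assumed scale: 1 (poor) to 5 (excellent)
    notes: str = ""

def rate(objective: str, score: int, notes: str = "") -> ObjectiveRating:
    """Create a rating, rejecting objectives outside the checklist."""
    if objective not in DESIGN_OBJECTIVES:
        raise ValueError(f"unknown objective: {objective}")
    return ObjectiveRating(objective, score, notes)

def summarize(ratings: list[ObjectiveRating]) -> dict[str, float]:
    """Average the expert scores per objective."""
    totals: dict[str, list[int]] = {}
    for r in ratings:
        totals.setdefault(r.objective, []).append(r.score)
    return {obj: sum(s) / len(s) for obj, s in totals.items()}

if __name__ == "__main__":
    ratings = [
        rate("empathy", 5, "warm, validating tone"),
        rate("empathy", 4),
        rate("ethical_standards", 3, "data handling unclear"),
    ]
    print(summarize(ratings))  # {'empathy': 4.5, 'ethical_standards': 3.0}
```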
Host: Welcome to A.I.S. Insights, powered by Living Knowledge. I’m your host, Anna Ivy Summers.
Host: Today, we’re diving into a study that sits at the intersection of artificial intelligence and mental well-being. It’s titled, "Towards an AI-Based Therapeutic Assistant to Enhance Well-Being: Preliminary Results from a Design Science Research Project."
Host: In essence, the study introduces an AI assistant named ELI, designed to complement traditional therapy and make evidence-based psychological strategies more accessible to everyone. Here to break it all down for us is our analyst, Alex Ian Sutherland. Welcome, Alex.
Expert: Great to be here, Anna.
Host: Alex, let's start with the big picture. What is the real-world problem that a tool like ELI is trying to solve?
Expert: The core problem is access. The study highlights that many people simply can't get timely psychological support. This has led to a surge in demand for digital solutions.
Host: So people are turning to technology for help?
Expert: Exactly. But there's a risk. The study points out that many are using general AI tools, like ChatGPT, for psychological advice, or even self-diagnosing based on social media trends. These sources often lack a scientific or therapeutic foundation, which can lead to dangerous misinformation.
Host: So there’s a clear need for a tool that is both accessible and trustworthy. How did the researchers approach building such a system?
Expert: They used a methodology called Design Science Research. Instead of just building a piece of technology and hoping it works, this is a very structured, iterative process.
Host: What does that look like in practice?
Expert: It means they started with a comprehensive review of existing psychological and technical literature. Then, they worked directly with psychology experts to define core requirements. From there, they built a simulated prototype, got feedback from the experts, and used that feedback to refine the design. It's a "build, measure, learn" cycle that ensures the final product is grounded in real science and user needs.
Host: That sounds incredibly thorough. After going through that process, what were some of the key findings?
Expert: The first major outcome was a set of six core design objectives for any AI therapeutic assistant. These are essentially the guiding principles for building a safe and effective tool.
Host: Can you give us a few examples of those principles?
Expert: Certainly. They focused heavily on things like empathy and trust, ensuring the AI could build a therapeutic relationship. Another was basing all interventions on evidence-backed methods, like Cognitive Behavioral Therapy. And crucially, establishing strong ethical standards, especially around data privacy and having clear crisis response mechanisms.
Host: So they created the principles, and then built a prototype based on them called ELI. How was it received?
Expert: The expert evaluations were quite positive. Psychologists rated the ELI prototype highly for its usability, its accessibility via smartphone, and its empathic support. They saw it as a valuable tool, especially for helping with less severe issues or providing support between traditional therapy sessions.
Host: That sounds promising, but were there any concerns?
Expert: Yes, and they're important. The experts identified key areas for improvement. Data privacy was a major one: users need to know exactly how their sensitive information is being handled. They also stressed the need for more robust crisis response capabilities, for instance, in detecting whether a user is in immediate danger.
Host: That brings us to the most important question for our listeners. Alex, why does this study matter for the business world?
Expert: It matters on several fronts. First, for any leader concerned with employee wellness, this provides a blueprint for a scalable support tool. An AI like ELI could be integrated into corporate wellness programs to help manage stress and prevent burnout before it becomes a crisis.
Host: A proactive tool for mental health in the workplace. What else?
Expert: For the tech industry, this is a roadmap for responsible innovation. The study's design objectives offer a clear framework for developing AI health tools that are ethical and evidence-based and that build user trust. It moves beyond the "move fast and break things" mantra, a shift that is essential in healthcare.
Host: So it’s about building trust with the user, which is key for any business.
Expert: Absolutely. The findings on user privacy and the need for transparency are a critical lesson for any company handling personal data, not just in healthcare. Building a trustworthy product isn't just an ethical requirement; it's a competitive advantage. This study shows that when it comes to well-being, you can't afford to get it wrong.
Host: A powerful insight. Let's wrap it up there. What is the one key takeaway we should leave with?
Host: Today we learned about ELI, an AI therapeutic assistant built on a foundation of rigorous research. The study shows that while AI holds immense potential to improve access to well-being support, its success and safety depend entirely on a thoughtful, evidence-based, and deeply ethical design process.
Host: Alex Ian Sutherland, thank you so much for your insights today.
Expert: My pleasure, Anna.
Host: And thank you to our audience for tuning into A.I.S. Insights. Join us next time as we continue to explore the intersection of technology and business.
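The crisis-response mechanism the experts call for above could take many forms; the following is a minimal, hypothetical sketch of the simplest variant, a first-pass keyword screen that interrupts the normal dialogue flow and points the user to human help. Everything here, including the phrase list and the escalation message, is an assumption for illustration; a real system would need clinically validated risk detection and human oversight.

```python
import re

# Hypothetical first-pass crisis screen (not from the study). A deployed
# assistant would need clinically validated risk models; the phrases and
# messages below are assumed for illustration only.
CRISIS_PATTERNS = [
    r"\bhurt myself\b",
    r"\bend my life\b",
    r"\bsuicid\w*\b",
    r"\bno reason to live\b",
]

def is_potential_crisis(message: str) -> bool:
    """Flag messages matching any high-risk phrase, case-insensitively."""
    return any(re.search(p, message, re.IGNORECASE) for p in CRISIS_PATTERNS)

def generate_supportive_reply(message: str) -> str:
    """Stub for the assistant's normal, evidence-based dialogue path."""
    return "Thanks for sharing. Can you tell me more about how that felt?"

def respond(message: str) -> str:
    if is_potential_crisis(message):
        # Escalate instead of continuing the normal conversation.
        return ("It sounds like you may be in serious distress. Please "
                "contact local emergency services or a crisis hotline now.")
    return generate_supportive_reply(message)

if __name__ == "__main__":
    print(respond("Lately I feel like there is no reason to live."))
```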
Keywords
AI Therapeutics, Well-Being, Conversational Assistant, Design Objectives, Design Science Research