Trust Me, I'm a Tax Advisor: Influencing Factors for Adopting Generative AI Assistants in Tax Law
Ben Möllmann, Leonardo Banh, Jan Laufer, and Gero Strobel
This study explores the critical role of user trust in the adoption of Generative AI assistants within the specialized domain of tax law. Employing a mixed-methods approach, the researchers had legal experts work with two different AI prototypes and gathered feedback through quantitative questionnaires and qualitative interviews. The goal was to identify which design factors are most effective at building trust and encouraging use.
Problem
While Generative AI can assist in fields like tax law that require up-to-date research, its adoption is hindered by issues such as lack of transparency, potential for bias, and inaccurate outputs (hallucinations). These problems undermine user trust, which is essential for collaboration in high-stakes professional settings where accuracy is paramount.
Outcome
- Transparency, such as providing clear source citations, was a key factor in building user trust.
- Human-like features (anthropomorphism), such as a conversational greeting and a familiar chat layout, positively influenced user perception and trust.
- Compliance with social and ethical norms, including being upfront about the AI's limitations, was also found to enhance trustworthiness.
- A higher level of trust in the AI assistant directly increased professionals' intention to use the tool in their work.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I'm your host, Anna Ivy Summers.
Host: Today, we're diving into a fascinating new study called "Trust Me, I'm a Tax Advisor: Influencing Factors for Adopting Generative AI Assistants in Tax Law."
Host: It explores a huge question: in a specialized, high-stakes field like tax law, what makes a professional actually trust an AI assistant? And how can we design AI that people will actually use? With me is our expert analyst, Alex Ian Sutherland. Alex, welcome.
Expert: Great to be here, Anna.
Host: So, let's start with the big picture. We hear a lot about AI's potential, but this study highlights a major roadblock, especially in professional fields. What's the core problem they're addressing?
Expert: The core problem is trust. Generative AI can be incredibly powerful for tasks like legal research, which requires sifting through constantly changing laws and rulings. But these tools can also make mistakes, invent sources (what we call 'hallucinations'), and their reasoning can be a total 'black box.'
Host: And in tax law, a mistake isn't just a typo.
Expert: Exactly. As the study points out, misplaced trust in an AI's output can lead to severe financial penalties for a client, or even malpractice litigation for the attorney. When the stakes are that high, you're not going to use a tool you don't fundamentally trust. That lack of trust is the biggest barrier to adoption.
Host: So how did the researchers measure something as subjective as trust? What was their approach?
Expert: They used a really clever mixed-methods approach. They built two different prototypes of a Generative AI tax assistant. The first was a basic, no-frills tool. The second was designed specifically to build trust.
Host: How so? What was different about it?
Expert: It had features we'll get to in a moment. The researchers had a group of legal experts perform real-world tax research tasks using both prototypes, then gathered feedback through detailed questionnaires and in-depth interviews to see which version the experts trusted more, and why.
Host: A direct head-to-head comparison. I love that. So, what were the key findings? What are the secret ingredients for building a trustworthy AI?
Expert: The results were incredibly clear, and they came down to three main factors. First, transparency was paramount. The prototype that clearly cited its sources for every piece of information was trusted far more.
Host: So users could check the AI's work, essentially.
Expert: Precisely. One expert in the study described the system as "definitely more trustworthy, precisely because the sources have been specified." It gives the user a sense of control and verification.
Host: That makes perfect sense. What was the second factor?
Expert: The second was what the study calls 'anthropomorphism', which basically means making the AI feel more human-like. The more trusted prototype had a conversational greeting and a familiar chat layout. Experts said it made them feel "more familiar and better supported."
Host: It's interesting that a simple design choice can have such a big impact on trust.
Expert: It is. And the third factor was just as fascinating: the AI's honesty about its own limitations.
Host: You mean the AI admitting what it *can't* do?
Expert: Yes. The trusted prototype included an introduction that mentioned its capabilities and its limits. The experts saw this not as a weakness, but as a sign of reliability. Being upfront about its boundaries actually made the AI seem more trustworthy.
Host: Transparency, a human touch, and a bit of humility. It sounds like a recipe for a good human colleague, not just an AI. Alex, let's get to the bottom line. What does this all mean for business leaders listening right now?
Expert: This is the most important part. For any business implementing AI, especially for expert users, this study provides a clear roadmap. The biggest takeaway is that you have to design for trust, not just for function.
Host: What does that look like in practice?
Expert: It means any AI that provides information, whether to your legal team, your financial analysts, or your engineers, must be able to show its work. Building in transparent, clickable source citations isn't an optional feature; it's essential for adoption.
Host: Okay, so transparency is job one. What else?
Expert: Don't underestimate the user interface. A sterile, purely functional tool might be technically perfect, but a more conversational and intuitive design can significantly lower the barrier to entry and make users more comfortable. User experience directly impacts trust.
Host: And that third point about limitations seems critical for managing expectations.
Expert: Absolutely. Be upfront with your teams about what your new AI tool is good at and where it might struggle. Marketing might want to sell it as a magic bullet, but for actual adoption, managing expectations and being honest about limitations builds the long-term trust you need for the tool to succeed.
Host: So, to recap for our listeners: if you're rolling out AI tools, the key to getting your teams to actually use them is building trust. And you do that through transparency, like citing sources; a thoughtful, human-centric design; and honesty about the AI's limitations.
Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.
Expert: My pleasure, Anna.
Host: And thank you for tuning in to A.I.S. Insights. We'll see you next time.