Thinking Twice: A Sequential Approach to Nudge Towards Reflective Judgment in GenAI-Assisted Decision Making
Hüseyin Hussein Keke, Daniel Eisenhardt, Christian Meske
This study investigates how to encourage more thoughtful and analytical decision-making when people use Generative AI (GenAI). Through an experiment with 130 participants, researchers tested an interaction design where users first made their own decision on a problem-solving task before receiving AI assistance. This sequential approach was compared to conditions where users received AI help concurrently or not at all.
Problem
When using GenAI tools for decision support, humans have a natural tendency to rely on quick, intuitive judgments rather than engaging in deep, analytical thought. This can lead to suboptimal decisions and increases the risks associated with relying on AI, as users may not critically evaluate the AI's output. The study addresses the challenge of designing human-AI interactions that promote a shift towards more reflective thinking.
Outcome
- Requiring users to make an initial decision before receiving GenAI help (a sequential approach) significantly improved their final decision-making performance.
- This sequential interaction method was more effective than providing AI assistance at the same time as the task (concurrently) or providing no AI assistance at all.
- Users who made an initial decision first were more likely to use the available AI prompts, suggesting a more deliberate engagement with the technology.
- The findings suggest that this sequential design acts as a 'cognitive nudge,' successfully shifting users from fast, intuitive thinking to slower, more reflective analysis.
Host: Welcome to A.I.S. Insights, the podcast at the intersection of business and technology, powered by Living Knowledge. I’m your host, Anna Ivy Summers.

Host: Today, we’re diving into how we can make smarter decisions when using tools like ChatGPT. We’re looking at a fascinating new study titled "Thinking Twice: A Sequential Approach to Nudge Towards Reflective Judgment in GenAI-Assisted Decision Making."

Host: In short, it investigates how to encourage more thoughtful, analytical decision-making when we get help from Generative AI. And to help us unpack this, we have our expert analyst, Alex Ian Sutherland. Alex, welcome.

Expert: Great to be here, Anna.

Host: Alex, let's start with the big picture. We all use these new AI tools, and they feel like a massive shortcut. What's the problem this study is trying to solve?

Expert: The problem is that we're a bit too quick to trust those shortcuts. The study is based on a concept called Dual Process Theory, which says we have two modes of thinking. There’s ‘System 1’, which is fast, intuitive, and gut-reaction. And there’s ‘System 2’, which is slow, analytical, and deliberate.

Host: So, like deciding what to have for lunch versus solving a complex math problem.

Expert: Exactly. And when we use Generative AI, we tend to stay in that fast, System 1 mode. We ask a question, get an answer, and accept it without much critical thought. This can lead to suboptimal decisions because we're not truly engaging our analytical brain or questioning the AI's output.

Host: That makes sense. We offload the thinking. So how did the researchers in this study try to get people to slow down and actually think?

Expert: They ran a clever experiment with 130 participants. They gave them tricky brain teasers—problems that are designed to fool your intuition, like the famous Monty Hall problem.

Host: Ah, the one with the three doors and the car! I always get that wrong.

Expert: Most people do, initially.
The participants were split into three groups. One group got no AI help. A second group got AI assistance concurrently, meaning they could ask ChatGPT for help right away.

Host: And the third group?

Expert: This was the key. The third group used a 'sequential' approach. They had to submit their own answer to the brain teaser *first*, before they were allowed to see what the AI had to say. Only then could they review the AI's logic and submit a final answer.

Host: So they were forced to think for themselves before leaning on the technology. Did this 'think first' approach actually work? What were the key findings?

Expert: It worked remarkably well. The group that had to make an initial decision first—the sequential group—had the best performance by a wide margin. Their final decisions were correct about 67% of the time.

Host: And how does that compare to the others?

Expert: It’s a huge difference. The group with immediate AI help was right only 49% of the time, and the group with no AI at all was correct just 33% of the time. So, thinking first, then consulting the AI, was significantly more effective than either going it alone or using the AI as an immediate crutch.

Host: That’s a powerful result. Was there anything else that stood out?

Expert: Yes. The 'think first' group also engaged more deeply with the AI. They used more than double the number of AI prompts compared to the group that had concurrent access. It suggests that by forming their own opinion first, they became more curious and critical, using the AI to test their own logic rather than just get a quick answer.

Host: This is fascinating, but let's translate it for our audience. Why does this matter for a business leader or a manager?

Expert: This is the most crucial part. It has direct implications for how we should design business workflows that involve AI. It tells us that the user interface and the process matter immensely.

Host: So it's not just about having the tool, but *how* you use it.
Expert: Precisely. For any high-stakes decision—like financial forecasting, market strategy, or even reviewing legal documents—businesses should build in a moment of structured reflection. Instead of letting a team just ask an AI for a strategy, the workflow should require the team to develop their own initial proposal first.

Host: You’re describing a kind of "speed bump" for the brain.

Expert: It's exactly that. A cognitive nudge. This sequential process forces employees to form an opinion, which makes them more likely to spot discrepancies or weaknesses in the AI’s suggestion. It transforms the AI from a simple answer machine into a true collaborator—a sparring partner that sharpens your own thinking.

Host: So this could be a practical way to avoid groupthink and prevent that blind over-reliance on technology we hear so much about.

Expert: Yes. It builds a more resilient and critically minded workforce. By making people think twice, you get better decisions and you train your employees to be more effective partners with AI, not just passive consumers of it.

Host: A powerful insight. Let's summarize for our listeners. We often use GenAI with our fast, intuitive brain, which can lead to errors.

Host: But this study shows that a simple process change—requiring a person to make their own decision *before* getting AI help—dramatically improves performance.

Host: For businesses, this means designing workflows that encourage reflection first, turning AI into a tool that challenges and refines our thinking, rather than replacing it.

Host: Alex, this has been incredibly insightful. Thank you for breaking it down for us.

Expert: My pleasure, Anna.

Host: And thank you for tuning into A.I.S. Insights, powered by Living Knowledge. Join us next time as we continue to explore the ideas shaping our world.
Keywords: Dual Process Theory, Digital Nudging, Cognitive Forcing, Generative AI, Decision Making