Do reflection questions help people make better decisions with AI?
This explores whether conversational AI that prompts users to think through problems outperforms AI that simply provides answers. Understanding this matters for designing AI tools that genuinely improve human judgment rather than replace it.
In a lab study (N=80), LLM-based "Thinking Assistants" that combine asking reflection questions with providing advice outperformed conversational agents that only ask questions, only provide advice, or do neither. The key insight: "Rather than adhering to the prevailing authoritative approach of generating definitive answers, LLM agents aimed at assisting with cognitive enhancement should prioritize fostering reflection. They should initially provide responses designed to prompt thoughtful consideration through inquiring, followed by offering advice only after gaining a deeper understanding of the user's context and needs."
This directly challenges the default LLM interaction paradigm. Where Why can't conversational AI agents take the initiative? asks how agents might move beyond passivity, the Thinking Assistant approach offers an alternative that doesn't require proactivity in the traditional sense: instead of taking initiative to redirect, the AI takes initiative to question. This is proactivity in the Socratic mode rather than the directive mode.
The approach leverages LLMs' encoded world-knowledge while avoiding the authority-without-accountability problem raised in Does polished AI output trick audiences into trusting it?: direct answers carry unearned authority. Questions carry no such risk; they prompt the human to exercise their own judgment.
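The "question first, advise only after context" pattern from the quote above can be sketched as a minimal turn policy. This is an illustrative assumption, not the paper's implementation: the turn threshold, class name, and canned messages are all hypothetical stand-ins for an LLM-backed agent.

```python
from dataclasses import dataclass, field

# Hypothetical threshold: how many user turns to spend on reflection
# questions before the assistant switches to offering advice.
REFLECTION_TURNS = 2


@dataclass
class ThinkingAssistant:
    """Sketch of a question-first interaction policy (not the paper's code)."""
    history: list = field(default_factory=list)

    def respond(self, user_message: str) -> str:
        self.history.append(user_message)
        if len(self.history) <= REFLECTION_TURNS:
            # Early turns: prompt reflection instead of answering.
            return self._reflect()
        # Later turns: enough context gathered, so offer advice.
        return self._advise()

    def _reflect(self) -> str:
        # In a real system this would be an LLM call conditioned on history.
        return "Before deciding, what outcome matters most to you here?"

    def _advise(self) -> str:
        return f"Advice (based on {len(self.history)} turns of context): ..."


assistant = ThinkingAssistant()
print(assistant.respond("Should I take the new job?"))   # reflection question
print(assistant.respond("Stability, mostly."))            # reflection question
print(assistant.respond("But the pay is better there.")) # advice turn
```

A production version would replace the canned strings with LLM calls and a context-sufficiency check rather than a fixed turn count, but the control flow (gate advice behind reflection) is the point of the sketch.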
Source: Decision Support Paper: Thinking Assistants: LLM-Based Conversational Assistants that Help Users Think
Related concepts in this collection
- Can models learn to ask clarifying questions instead of guessing?
Exploring whether large language models can be trained to detect incomplete queries and actively request missing information rather than hallucinating answers or refusing to respond. This matters because conversational agents today remain passive, responding only when prompted.
Thinking Assistants are the decision-support analog of proactive critical thinking
Original note title
thinking assistants that ask reflection questions outperform those that only provide answers — fostering reflection over authority improves human decision-making with AI