Psychology and Social Cognition · Design & LLM Interaction · Language Understanding and Pragmatics

Do reflection questions help people make better decisions with AI?

This note explores whether conversational AI that prompts users to think through a problem outperforms AI that simply provides answers. The question matters for designing AI tools that genuinely improve human judgment rather than replace it.

Note · 2026-03-27 · sourced from Decision Support

In a lab study (N=80), LLM-based "Thinking Assistants" that combine reflection questions with advice outperformed conversational agents that only ask questions, only provide advice, or do neither. The key insight: "Rather than adhering to the prevailing authoritative approach of generating definitive answers, LLM agents aimed at assisting with cognitive enhancement should prioritize fostering reflection. They should initially provide responses designed to prompt thoughtful consideration through inquiring, followed by offering advice only after gaining a deeper understanding of the user's context and needs."
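The question-first, advise-later pattern can be sketched as a simple conversation controller. This is a hypothetical illustration, not the paper's implementation: the class name, the `min_context_turns` threshold, and the canned question text are all assumptions, and a real system would route each phase through an LLM call.

```python
from dataclasses import dataclass, field

@dataclass
class ThinkingAssistant:
    """Sketch of the question-first interaction pattern: ask reflection
    questions until enough context has been gathered, and only then
    offer advice. Names and thresholds here are illustrative."""
    min_context_turns: int = 2              # assumption: turns of context to gather first
    history: list = field(default_factory=list)

    def respond(self, user_message: str) -> str:
        self.history.append(user_message)
        if len(self.history) <= self.min_context_turns:
            # Reflection phase: prompt the user to think, don't answer yet.
            return self.reflection_question(user_message)
        # Advice phase: enough context has accumulated.
        return self.advice(user_message)

    def reflection_question(self, msg: str) -> str:
        # In a real system, an LLM call with a Socratic prompt would go here.
        return f"What constraints or goals matter most to you here? (re: {msg!r})"

    def advice(self, msg: str) -> str:
        # Advice conditioned on the context gathered so far.
        context = "; ".join(self.history[:-1])
        return f"Given what you've told me ({context}), here's a suggestion..."

assistant = ThinkingAssistant()
print(assistant.respond("Should I switch careers?"))  # reflection question
print(assistant.respond("I value stability"))         # still gathering context
print(assistant.respond("But I'm bored at work"))     # advice, grounded in history
```

The design choice the study argues for is exactly the gating step: advice is withheld until the assistant has elicited the user's own framing, so the eventual suggestion responds to stated context rather than asserting unearned authority.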

This directly challenges the default LLM interaction paradigm. Picking up the question of why conversational AI agents can't take the initiative, the Thinking Assistant approach offers an alternative to passivity that doesn't require proactivity in the traditional sense: instead of taking initiative to redirect, the AI takes initiative to question. This is proactivity in the Socratic mode rather than the directive mode.

The approach leverages LLMs' encoded world knowledge while avoiding the authority-without-accountability problem: because polished AI output can trick audiences into trusting it, direct answers carry unearned authority. Questions carry no such risk; they prompt the human to exercise their own judgment.


Source: Decision Support · Paper: Thinking Assistants: LLM-Based Conversational Assistants that Help Users Think


thinking assistants that ask reflection questions outperform those that only provide answers — fostering reflection over authority improves human decision-making with AI