Can AI agents communicate efficiently in joint decision problems?
When humans and AI must collaborate on optimization problems under asymmetric information, which communication patterns enable effective coordination, and why do current LLMs struggle to produce them?
Decision-oriented dialogue formalizes a class of tasks in which multiple agents must communicate to arrive at a joint decision, and all agents are rewarded according to the quality of that decision. The key structural feature is information asymmetry: each agent starts with different information. The user knows their travel preferences; the AI has a database of flight and hotel prices. Neither can reach the optimal decision alone.
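To make that structure concrete, here is a minimal toy sketch of such a task. Everything in it (the flights, the prices, the reward function) is invented for illustration; it is not the benchmark's actual code.

```python
# A toy joint decision problem: the reward depends on information that is
# split across the two agents, so neither can optimize alone.
from dataclasses import dataclass

@dataclass(frozen=True)
class Flight:
    airline: str
    price: int      # known only to the assistant
    red_eye: bool   # whether the user tolerates this is known only to the user

FLIGHTS = [
    Flight("Ajet", 320, red_eye=True),
    Flight("Bwings", 450, red_eye=False),
    Flight("Cfly", 380, red_eye=True),
]

# The user's private constraint.
USER_PREFS = {"accepts_red_eye": False}

def joint_reward(flight: Flight, prefs: dict) -> float:
    """Reward both agents share: cheap flights are good, but a red-eye
    is worthless to a user who won't take one."""
    if flight.red_eye and not prefs["accepts_red_eye"]:
        return 0.0
    return 1000.0 - flight.price

# The assistant alone (seeing only prices) would pick the cheapest flight,
# Ajet, a red-eye; the user alone can't rank options without prices.
# Only the pooled view finds the true optimum.
best = max(FLIGHTS, key=lambda f: joint_reward(f, USER_PREFS))
print(best.airline)  # -> Bwings
```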
The crucial constraint: the sheer volume of information and the combinatorial solution space make it "unnatural and inefficient for assistants to communicate all of their knowledge to users, or vice versa." This rules out the naive solution of full information exchange. Instead, agents must reason about both what their partners already know and what information is likely to be decision-relevant, asking clarifying questions and drawing inferences as needed.
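Why does selective communication beat full information exchange? One way to see it: under prior beliefs about the user's private constraints, the agent can estimate a one-step expected value of information for each candidate question and ask only the one whose answer would most improve the final recommendation. The sketch below is my own illustration of that idea, not the paper's method; all attribute names, priors, and numbers are invented.

```python
# One-step expected value of information: ask the question whose answer
# would most improve the expected reward of the final recommendation.
from itertools import product

FLIGHTS = [
    {"name": "Ajet",   "price": 320, "red_eye": True,  "bag_fee": 60},
    {"name": "Bwings", "price": 450, "red_eye": False, "bag_fee": 0},
    {"name": "Cfly",   "price": 380, "red_eye": True,  "bag_fee": 0},
]

# The assistant's prior beliefs about the user's private constraints.
PRIORS = {
    "accepts_red_eye": {True: 0.5, False: 0.5},
    "needs_bag":       {True: 0.7, False: 0.3},
}

def reward(flight, world):
    """Joint reward once the user's true constraints (`world`) are known."""
    if flight["red_eye"] and not world["accepts_red_eye"]:
        return 0.0
    cost = flight["price"] + (flight["bag_fee"] if world["needs_bag"] else 0)
    return 1000.0 - cost

def worlds(priors):
    """Enumerate full assignments of the unknowns with their probabilities."""
    keys = list(priors)
    for combo in product(*(priors[k].items() for k in keys)):
        world = {k: value for k, (value, _) in zip(keys, combo)}
        prob = 1.0
        for _, p in combo:
            prob *= p
        yield world, prob

def best_expected_reward(priors):
    """Expected reward of the best single recommendation under current beliefs."""
    return max(sum(p * reward(f, w) for w, p in worlds(priors)) for f in FLIGHTS)

def value_of_asking(attribute, priors):
    """One-step expected value of information for asking about `attribute`."""
    baseline = best_expected_reward(priors)
    gain = 0.0
    for answer, p_answer in priors[attribute].items():
        updated = {**priors, attribute: {answer: 1.0}}  # condition on the answer
        gain += p_answer * best_expected_reward(updated)
    return gain - baseline

best_question = max(PRIORS, key=lambda a: value_of_asking(a, PRIORS))
print(best_question)  # -> accepts_red_eye
```

In this toy instance, asking about red-eye tolerance is worth 44 expected reward points because the answer can flip the recommendation from Bwings to Ajet, while asking about baggage is worth nothing: the best recommendation stays Bwings either way.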
The aspiration is the human travel agent model: start from underspecified desires ("things we'd like to do"), comprehensively explore multi-day itineraries using preferences and domain knowledge, and refine iteratively based on feedback. Current LLMs "did not perform as well as humans" across all task settings, "suggesting failures in their ability to communicate efficiently and reason in structured real-world optimization problems."
This formalization matters because it names what most AI dialogue is NOT doing. As argued in "Why can't conversational AI agents take the initiative?", decision-oriented dialogue requires the agent to actively structure the information exchange: deciding what to share, what to ask about, and what to infer. Passive response generation is structurally incapable of this.
The connection to grounding is direct. As explored in "Do language models actually build shared understanding in conversation?", decision-oriented dialogue requires building shared understanding of both preferences and options through collaborative exploration, without presuming that the user knows what to ask for or that the AI knows what matters.
Related concepts in this collection
- Why can't conversational AI agents take the initiative? Explores whether current LLMs lack the structural ability to lead conversations, set goals, or anticipate user needs, and what architectural changes might enable proactive dialogue. Connection: decision-oriented dialogue requires exactly the initiative that passive agents lack.
- Do language models actually build shared understanding in conversation? When LLMs respond fluently to prompts, do they perform the communicative work humans do to establish mutual understanding? Research suggests they skip the grounding acts that make dialogue reliable. Connection: joint decision-making requires building shared understanding.
- Which clarifying questions actually improve user satisfaction? Not all clarification helps equally. This explores whether asking users to rephrase their needs works as well as asking targeted questions about specific information gaps. Connection: the type of clarification matters in decision-oriented dialogue.
- Does theory of mind predict who thrives in AI collaboration? Explores whether perspective-taking ability (the capacity to model another's cognitive state) differentiates humans who benefit most from working with AI, separate from solo problem-solving skill. Connection: the synergy framework empirically validates the asymmetric information structure: collaborative ability is the capacity to navigate information asymmetry, and theory of mind is the mechanism that predicts who navigates it successfully.
- Can AI guidance reduce anchoring bias better than AI decisions? When humans and AI collaborate on decisions, does providing interpretive guidance instead of proposed answers reduce both over-trust in machines and abandonment on hard cases? Connection: LTG implements one specific form of joint optimization: the machine reduces information asymmetry by highlighting useful aspects of its input rather than collapsing the joint decision space into a proposal, while the human retains decision authority and benefits from the machine's perceptual capabilities.
- Could proactive dialogue make conversations dramatically more efficient? Explores whether AI systems that volunteer relevant unrequested information could significantly reduce the back-and-forth turns required in task-oriented conversations, and why this behavior is missing from training data. Connection: proactive information provision directly addresses the efficiency problem of asymmetric information: rather than waiting for the user to ask about each constraint, the agent shares relevant information it holds, collapsing the back-and-forth that makes joint optimization costly.
- When should AI agents ask users instead of just searching? Explores whether tool-enabled LLMs should probe users for clarification when uncertain, rather than silently chaining tool calls that drift from intent, and examines conversation-analysis patterns as a formal alternative. Connection: insert-expansions are the conversational mechanism for managing asymmetric information in real time: when the agent lacks information the user holds, pre-second and post-first insertions probe for it rather than guessing.
- When should human-agent systems ask for human help? Explores the timing problem in collaborative AI systems: since there is no objective metric for optimal interruption, how can we design deferral mechanisms that know when to involve humans without constant disruption or silent failures? Connection: Magentic-UI's co-planning implements the joint optimization of decision-oriented dialogue at the task-execution level; its six mechanisms provide concrete interaction patterns for managing asymmetric information during collaborative work.
Original note title: decision-oriented dialogue formalizes human-AI collaboration as joint optimization under asymmetric information where full information sharing is impractical