Conversational AI Systems · Agentic and Multi-Agent Systems · Psychology and Social Cognition

Can AI agents communicate efficiently in joint decision problems?

When humans and AI must collaborate to solve optimization problems under asymmetric information, what communication patterns enable effective coordination? Current LLMs struggle with this—why?

Note · 2026-02-22 · sourced from Conversation Architecture Structure

Decision-oriented dialogue formalizes a class of tasks where multiple agents must communicate to arrive at a joint decision, with quality jointly rewarded. The key structural feature: each agent starts with different information. The user knows their travel preferences; the AI has a database of flight and hotel prices. Neither can make optimal decisions alone.
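A toy instance makes the structure concrete. In this minimal sketch (all names and numbers hypothetical, not from the source), the user privately holds preferences, the assistant privately holds costs, and the joint reward scores the final decision against both:

```python
# Hypothetical toy decision-oriented dialogue task.
# User's private information: preference score per activity.
user_prefs = {"museum": 3.0, "hike": 5.0, "concert": 2.0}

# Assistant's private information: cost per activity.
assistant_costs = {"museum": 1.0, "hike": 4.0, "concert": 0.5}

def joint_reward(choice):
    """Quality of the joint decision: preference minus cost."""
    return user_prefs[choice] - assistant_costs[choice]

# With all information pooled, the optimum is directly computable...
best = max(user_prefs, key=joint_reward)

# ...but neither agent can compute it alone: the user lacks the costs,
# the assistant lacks the preferences. Dialogue must bridge that gap.
print(best, joint_reward(best))
```

Here the user's favorite option ("hike") is not the jointly optimal one, which is exactly why communication, not just preference elicitation, is required.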

The crucial constraint: the large amount of information and combinatorial solution space make it "unnatural and inefficient for assistants to communicate all of their knowledge to users, or vice versa." This rules out the naive solution of full information exchange. Instead, agents must determine what their partners already know AND what information is likely to be decision-relevant, asking clarification questions and making inferences as needed.
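One way to operationalize "decision-relevant" is a value-of-information calculation: a clarification question is worth asking when the expected joint reward after learning the answer exceeds the expected reward of deciding under the current belief. A minimal sketch, with all probabilities and payoffs invented for illustration:

```python
# Assistant's prior belief: the user likes hiking with probability 0.5.
p_likes_hiking = 0.5

# Joint reward of each recommendation for each user type (hypothetical).
reward = {
    ("hike", True): 4.0,   ("hike", False): -2.0,
    ("museum", True): 1.2,  ("museum", False): 1.2,
}

def best_without_asking():
    """Option with the highest expected reward under the prior belief."""
    def expected(option):
        return (p_likes_hiking * reward[(option, True)]
                + (1 - p_likes_hiking) * reward[(option, False)])
    return max(("hike", "museum"), key=expected)

def value_of_asking():
    """Expected gain from asking: informed decision minus prior decision."""
    informed = (p_likes_hiking * max(reward[("hike", True)], reward[("museum", True)])
                + (1 - p_likes_hiking) * max(reward[("hike", False)], reward[("museum", False)]))
    opt = best_without_asking()
    prior = (p_likes_hiking * reward[(opt, True)]
             + (1 - p_likes_hiking) * reward[(opt, False)])
    return informed - prior

print(value_of_asking())  # positive => the clarification question pays off
```

Under this belief the safe recommendation is the museum, but asking one question about hiking has positive expected value; with a sharper prior (say, 0.95), the same arithmetic would tell the assistant to skip the question, which is the selectivity the constraint demands.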

The aspiration is a human travel agent model: starting with underspecified desires ("things we'd like to do"), comprehensively exploring multi-day itineraries based on preferences and domain knowledge, iteratively refining based on feedback. Current LLMs "did not perform as well as humans" across all task settings — "suggesting failures in their ability to communicate efficiently and reason in structured real-world optimization problems."

This formalization matters because it names what most AI dialogue is NOT doing. As "Why can't conversational AI agents take the initiative?" argues, decision-oriented dialogue requires the agent to actively structure the information exchange: deciding what to share, what to ask about, and what to infer. Passive response generation is structurally incapable of this.

The connection to grounding is direct. As "Do language models actually build shared understanding in conversation?" explores, decision-oriented dialogue requires building shared understanding of both preferences and options through collaborative exploration, not presuming the user knows what to ask for or that the AI knows what matters.




decision-oriented dialogue formalizes human-AI collaboration as joint optimization under asymmetric information where full information sharing is impractical