Language Understanding and Pragmatics · Psychology and Social Cognition · Conversational AI Systems

Can disagreement be resolved without either party fully yielding?

Explores whether dialogue can move past winner-take-all debate or forced consensus to genuine mutual adjustment. Matters for AI systems that need to work through real disagreement with users.

Note · 2026-02-21 · sourced from Argumentation
Where exactly does language competence break down in LLMs? · How should researchers navigate LLM reasoning research?

Dialogue theory distinguishes persuasion dialogue (one party convinces the other), deliberation (collaborative joint decision-making), and negotiation (interest-based compromise). DR-HAI proposes a category these frameworks miss: dialectical reconciliation.

In dialectical reconciliation, two parties hold incompatible positions. The goal is not for one to win (persuasion), nor for them to find a shared solution from the start (deliberation). Instead, both parties modify their positions through the exchange — each adjusts in response to the other's reasoning — until they reach positions that are compatible without being identical.
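The mutual-adjustment dynamic can be sketched as a toy loop over scalar stances (an illustrative model only; `Position`, `reconcile`, and the `step`/`tolerance` parameters are assumptions for this sketch, not anything specified by DR-HAI):

```python
from dataclasses import dataclass

@dataclass
class Position:
    stance: float     # where the party stands on some one-dimensional issue
    tolerance: float  # how far a compatible position may differ from one's own

def compatible(a: Position, b: Position) -> bool:
    """Compatible = each stance falls within the other's tolerance."""
    return abs(a.stance - b.stance) <= min(a.tolerance, b.tolerance)

def reconcile(a: Position, b: Position, step: float = 0.25, max_rounds: int = 50):
    """Each round, both parties move a fraction of the gap toward the other.
    Stops as soon as positions are compatible, not when they are identical."""
    for rounds in range(max_rounds):
        if compatible(a, b):
            return a, b, rounds
        gap = b.stance - a.stance
        a = Position(a.stance + step * gap, a.tolerance)
        b = Position(b.stance - step * gap, b.tolerance)
    return a, b, max_rounds

user = Position(stance=0.0, tolerance=0.3)
ai = Position(stance=1.0, tolerance=0.3)
u2, a2, n = reconcile(user, ai)
```

The loop halts on compatibility, not identity: after two rounds the parties are within each other's tolerance (0.375 vs 0.625) while still holding distinct stances, which is the defining contrast with persuasion.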

The practical context is human-AI disagreement. A user holds a position; the AI holds a different one derived from evidence or inference. Neither position is simply wrong. A persuasion model requires one to abandon their position entirely. Deliberation requires they share goals they may not have. Reconciliation enables each to maintain their reasoning while adjusting to incorporate the other's perspective.

This matters for AI system design because the available dialogue models don't serve this case well. Debate-style multi-agent LLMs (ReConcile, MACI) are optimized for convergence on a winner — they produce confident outputs but lose the intermediate positions. Standard conversational AI is optimized for alignment — the AI agrees with or supports the user. Neither handles the case where genuine disagreement needs to be worked through without one party being simply wrong.
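One design consequence of "losing the intermediate positions" can be made concrete with a dialogue trace that keeps every stance rather than collapsing to a winner (a minimal sketch; `Trace`, `Turn`, and the string stance representation are hypothetical, not drawn from any of the cited systems):

```python
from typing import NamedTuple

class Turn(NamedTuple):
    speaker: str
    stance: str

class Trace:
    """Records every intermediate position. A debate-style system optimized
    for convergence would discard these and keep only the final output."""
    def __init__(self):
        self.turns: list[Turn] = []

    def record(self, speaker: str, stance: str) -> None:
        self.turns.append(Turn(speaker, stance))

    def final_positions(self) -> dict[str, str]:
        # Latest stance per speaker: both parties' positions survive;
        # neither is overwritten by a single "winning" answer.
        latest: dict[str, str] = {}
        for t in self.turns:
            latest[t.speaker] = t.stance
        return latest

trace = Trace()
trace.record("user", "X is risky")
trace.record("ai", "X is safe under condition C")
trace.record("user", "X is acceptable if C holds")
```

Here `final_positions()` returns one adjusted stance per party, and the full history of adjustments remains inspectable, whereas a winner-take-all pipeline would surface only a single merged verdict.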

"Why do language models skip the calibration step?" is the grounding parallel: reconciliation requires dynamic grounding processes that LLMs currently avoid in favor of static accommodation. "Why do speakers need to actively calibrate shared reference?" describes the calibration requirement that reconciliation makes explicit: both parties must understand what the other means before positions can be adjusted.

The failure mode: systems that flatten reconciliation into persuasion — where the AI's position simply wins because it is presented more confidently — produce outcomes that look like agreement but are not.
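This flattening is detectable in principle: if one party absorbed essentially the whole gap while the other never moved, the "agreement" was persuasion in disguise. A rough heuristic (entirely illustrative; the scalar-stance model and the 0.1 threshold are assumptions, not from the source):

```python
def flattened(initial_a: float, final_a: float,
              initial_b: float, final_b: float,
              eps: float = 1e-6) -> bool:
    """Heuristic check: reconciliation was 'flattened into persuasion' if
    nearly all the movement came from one side."""
    moved_a = abs(final_a - initial_a)
    moved_b = abs(final_b - initial_b)
    total = moved_a + moved_b
    if total < eps:
        return False  # nobody moved; no adjustment happened at all
    # If the lesser mover contributed under 10% of total movement,
    # one party effectively capitulated.
    return min(moved_a, moved_b) / total < 0.1
```

A monitoring layer could apply a check like this to flag dialogues where the user silently yielded to a confidently presented AI position.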




dialectical reconciliation is a distinct dialogue type that resolves disagreement through mutual adjustment without requiring either party to fully yield