Can disagreement be resolved without either party fully yielding?
Explores whether dialogue can move past winner-take-all debate or forced consensus to genuine mutual adjustment. Matters for AI systems that need to work through real disagreement with users.
Dialogue theory distinguishes persuasion dialogue (one party convinces the other), deliberation (collaborative joint decision-making), and negotiation (interest-based compromise). DR-HAI proposes a category these frameworks miss: dialectical reconciliation.
In dialectical reconciliation, two parties hold incompatible positions. The goal is not for one to win (persuasion), nor for them to find a shared solution from the start (deliberation). Instead, both parties modify their positions through the exchange — each adjusts in response to the other's reasoning — until they reach positions that are compatible without being identical.
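The mutual-adjustment loop described above can be rendered as a toy numeric sketch. This is an illustration, not anything proposed by DR-HAI: `Party`, `limit`, `step`, and the one-dimensional positions are all illustrative assumptions. Each party concedes a fraction of the remaining gap per exchange, bounded by how far its own reasoning allows it to move, until the positions are compatible without being identical.

```python
from dataclasses import dataclass

@dataclass
class Party:
    position: float     # current stance on a one-dimensional issue
    limit: float        # furthest point the party's own reasoning allows it to concede to
    step: float = 0.25  # fraction of the remaining gap conceded per exchange

def reconcile(a: Party, b: Party, tolerance: float = 2.0, max_rounds: int = 20):
    """Both parties adjust toward each other each round, bounded by their own
    limits, until their positions are compatible (within `tolerance`) without
    being identical. Assumes a.position <= b.position."""
    for rounds in range(max_rounds):
        gap = b.position - a.position
        if gap <= tolerance:
            return a.position, b.position, rounds
        a.position = min(a.position + a.step * gap, a.limit)  # a concedes upward, never past its limit
        b.position = max(b.position - b.step * gap, b.limit)  # b concedes downward, never past its limit
    return a.position, b.position, max_rounds

# Two incompatible starting positions; each party can move only so far.
a = Party(position=0.0, limit=4.0)
b = Party(position=10.0, limit=6.0)
pa, pb, rounds = reconcile(a, b)
# pa and pb end up compatible (gap <= tolerance) but not identical
```

The point of the sketch is the stopping condition: the loop terminates on compatibility, not on one position absorbing the other, which is what distinguishes reconciliation from the persuasion outcome.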
The practical context is human-AI disagreement. A user holds a position; the AI holds a different one derived from evidence or inference. Neither position is simply wrong. A persuasion model requires one party to abandon its position entirely. Deliberation presupposes shared goals the parties may not have. Reconciliation lets each maintain its reasoning while adjusting to incorporate the other's perspective.
This matters for AI system design because the available dialogue models don't serve this case well. Debate-style multi-agent LLMs (ReConcile, MACI) are optimized for convergence on a winner — they produce confident outputs but lose the intermediate positions. Standard conversational AI is optimized for alignment — the AI agrees with or supports the user. Neither handles the case where genuine disagreement needs to be worked through without one party being simply wrong.
"Why do language models skip the calibration step?" is the grounding parallel: reconciliation requires dynamic grounding processes that LLMs currently avoid in favor of static accommodation. "Why do speakers need to actively calibrate shared reference?" describes the calibration requirement that reconciliation makes explicit: both parties must understand what the other means before positions can be adjusted.
The failure mode: systems that flatten reconciliation into persuasion — where the AI's position simply wins because it is presented more confidently — produce outcomes that look like agreement but are not.
Source: Argumentation
Related concepts in this collection
- "Why do language models skip the calibration step?": Current LLMs assume shared understanding rather than building it through dialogue. This explores why that design choice persists and what breaks when it fails. Relation: reconciliation requires dynamic grounding; LLMs default to static.
- "Why do speakers need to actively calibrate shared reference?": Explores whether using the same words guarantees speakers mean the same thing. Investigates how referential grounding differs across people and what collaborative work is needed to establish true understanding. Relation: mutual adjustment requires mutual understanding of positions first.
- "Why do multi-agent LLM systems converge without real debate?": When multiple AI agents reason together, do they genuinely deliberate or just accommodate each other's views? Research into clinical reasoning systems reveals how often agents reach agreement without substantive disagreement. Relation: silent agreement is reconciliation collapsed into false consensus.
- "Can AI systems detect when they've genuinely reached agreement?": When multiple AI agents debate, they often converge without actually deliberating. Can a dedicated agent reliably identify true agreement versus false consensus, and would that improve debate outcomes? Relation: agreement detection provides the architectural mechanism reconciliation needs, verifying that convergence reflects genuine mutual adjustment rather than one party yielding.
- "Why do standard dialogue systems fail at tracking negotiation agreement?": Standard dialogue state tracking monitors one user's goals, but negotiation requires tracking both parties' evolving positions simultaneously. Why is this bilateral requirement fundamentally different, and what makes existing models insufficient? Relation: reconciliation requires bilateral commitment tracking; both parties' evolving positions must be monitored simultaneously, not just one side's state, and standard DST's single-user assumption is structurally incompatible with mutual adjustment.
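The bilateral-tracking requirement in the last item above can be sketched as a toy commitment store. All names here (`BilateralState`, `commit`, `reconciled`) are hypothetical; real dialogue state trackers track a single user's slot values, which is exactly the limitation at issue.

```python
from dataclasses import dataclass, field

@dataclass
class BilateralState:
    """Toy bilateral commitment store: one commitment set per party, updated as
    utterances assert or retract stances on issues. Illustrative only; standard
    DST maintains a single user-side state, not two evolving positions."""
    commitments: dict = field(default_factory=lambda: {"user": {}, "agent": {}})

    def commit(self, party: str, issue: str, stance: str) -> None:
        self.commitments[party][issue] = stance

    def retract(self, party: str, issue: str) -> None:
        self.commitments[party].pop(issue, None)

    def reconciled(self, issue: str) -> bool:
        # compatible only when BOTH parties hold a stance and the stances match
        u = self.commitments["user"].get(issue)
        a = self.commitments["agent"].get(issue)
        return u is not None and u == a

state = BilateralState()
state.commit("user", "deadline", "friday")
state.commit("agent", "deadline", "monday")
before = state.reconciled("deadline")  # False: positions still incompatible
state.commit("agent", "deadline", "friday")
after = state.reconciled("deadline")   # True: the agent adjusted its commitment
```

The design point is that `commitments` is keyed by party before issue: both sides' positions are first-class state, so adjustment by either party is observable, which a single-user slot table cannot represent.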
Original note title: dialectical reconciliation is a distinct dialogue type that resolves disagreement through mutual adjustment without requiring either party to fully yield