What makes explanations work in real conversation?
Does explanation quality depend on how dialogue partners interact—testing understanding, adjusting to feedback, and coordinating their communicative moves—rather than on information content alone?
Explanation in conversation is not a one-way delivery of information from explainer to explainee; it is a co-construction in which both participants shape the quality of the understanding achieved. The Wachsmuth corpus formalizes this through three interacting dimensions annotated on each dialogue turn:
Topic relation — how each turn's content relates to the main topic:
- Main topic, subtopic, related topic, or no/other topic
Dialogue act — the communicative function (10-category scheme):
- Check/what-how/other questions; confirming/disconfirming/other answers; agreeing/disagreeing statements; informing statements; other
Explanation move — the pedagogical function (10-category scheme):
- Test understanding, test prior knowledge, provide explanation, request explanation, signal understanding/non-understanding, provide feedback/assessment/extra info, other
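A minimal sketch of this three-dimension annotation scheme as a data structure, assuming Python. The category names follow the lists above, but the type and field names (TopicRelation, DialogueAct, ExplanationMove, Turn) are illustrative, not the corpus's official schema:

```python
from dataclasses import dataclass
from enum import Enum

class TopicRelation(Enum):
    # 4 categories: how a turn's content relates to the main topic
    MAIN_TOPIC = "main topic"
    SUBTOPIC = "subtopic"
    RELATED_TOPIC = "related topic"
    NO_OR_OTHER_TOPIC = "no/other topic"

class DialogueAct(Enum):
    # 10 categories: the turn's communicative function
    CHECK_QUESTION = "check question"
    WHAT_HOW_QUESTION = "what/how question"
    OTHER_QUESTION = "other question"
    CONFIRMING_ANSWER = "confirming answer"
    DISCONFIRMING_ANSWER = "disconfirming answer"
    OTHER_ANSWER = "other answer"
    AGREEING_STATEMENT = "agreeing statement"
    DISAGREEING_STATEMENT = "disagreeing statement"
    INFORMING_STATEMENT = "informing statement"
    OTHER = "other"

class ExplanationMove(Enum):
    # 10 categories: the turn's pedagogical function
    TEST_UNDERSTANDING = "test understanding"
    TEST_PRIOR_KNOWLEDGE = "test prior knowledge"
    PROVIDE_EXPLANATION = "provide explanation"
    REQUEST_EXPLANATION = "request explanation"
    SIGNAL_UNDERSTANDING = "signal understanding"
    SIGNAL_NON_UNDERSTANDING = "signal non-understanding"
    PROVIDE_FEEDBACK = "provide feedback"
    PROVIDE_ASSESSMENT = "provide assessment"
    PROVIDE_EXTRA_INFO = "provide extra info"
    OTHER = "other"

@dataclass(frozen=True)
class Turn:
    """One dialogue turn, annotated on all three dimensions at once."""
    speaker: str  # "explainer" or "explainee"
    text: str
    topic: TopicRelation
    act: DialogueAct
    move: ExplanationMove
```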
The critical insight is that these three dimensions interact to determine explanation success. A turn that provides explanation (move) through an informing statement (act) on a subtopic (topic) has different predictive value than the same explanation move delivered via a question on a related topic. The combinatorial space is what matters — not any single dimension.
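Continuing the sketch above (reusing its types), two turns that share the same explanation move but differ in act and topic occupy different cells of the 4 × 10 × 10 label space. The example utterances here are invented for illustration:

```python
# Same pedagogical move, different communicative packaging.
explain_by_stating = Turn(
    speaker="explainer",
    text="Spin is a particle's intrinsic angular momentum.",
    topic=TopicRelation.SUBTOPIC,
    act=DialogueAct.INFORMING_STATEMENT,
    move=ExplanationMove.PROVIDE_EXPLANATION,
)
explain_by_asking = Turn(
    speaker="explainer",
    text="You know how a compass needle lines up with a magnetic field?",
    topic=TopicRelation.RELATED_TOPIC,
    act=DialogueAct.CHECK_QUESTION,
    move=ExplanationMove.PROVIDE_EXPLANATION,
)

def cell(turn: Turn) -> tuple:
    """The joint (topic, act, move) label is the unit of analysis."""
    return (turn.topic, turn.act, turn.move)

# Identical move, distinct cells: the combination carries the signal.
assert explain_by_asking.move == explain_by_stating.move
assert cell(explain_by_asking) != cell(explain_by_stating)
```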
This directly challenges how LLMs approach explanation: they typically generate monological explanations without checking understanding, testing prior knowledge, or adjusting based on feedback. Where "What three layers must discourse systems actually track?" argues that discourse comprehension requires tracking three layers at once, the explanation corpus adds that explanation itself has three irreducible components — and current models handle at most one (providing information) while ignoring the dialogical dimensions.
The methodology extends Rohlfing et al.'s (2021) claim that "explaining is an intrinsically dialogical process in which participants co-construct an explanation." This is not an abstract position — the corpus provides empirical evidence that interaction patterns, not just content quality, predict whether the explainee actually understands.
Source: Conversation Topics Dialog
Related concepts in this collection
- What three layers must discourse systems actually track?
  Grosz and Sidner's 1986 framework proposes that discourse requires simultaneously tracking linguistic segments, speaker purposes, and salient objects. Understanding why all three are necessary helps explain where current AI systems structurally fail.
  Relation: explanation adds its own three irreducible components; the parallel structure is not coincidental.
- How do readers track segments, purposes, and salience together?
  Can discourse processing actually happen in parallel rather than sequentially? This matters because understanding how readers coordinate multiple layers of meaning at once reveals where AI systems break down in comprehension.
  Relation: explanation coherence similarly requires simultaneously tracking topic, act, and move.
- Do LLM therapists respond to emotions like low-quality human therapists?
  Explores whether language models trained to be helpful default to problem-solving when users share emotions, and whether this behavioral pattern resembles ineffective rather than skillful therapy.
  Relation: therapists and explainers share the same failure: defaulting to information delivery instead of dialogical co-construction.
- Does user satisfaction actually measure cognitive understanding?
  Users may report satisfaction while remaining internally confused about their needs. This explores whether traditional satisfaction metrics capture genuine clarity or merely social politeness.
  Relation: monological explanations may achieve high satisfaction while failing at understanding transfer; dialogical co-construction (testing understanding, adjusting based on feedback) is what produces cognitive clarity, not expressed satisfaction.
- Which clarifying questions actually improve user satisfaction?
  Not all clarification helps equally. This explores whether asking users to rephrase their needs works as well as asking targeted questions about specific information gaps.
  Relation: converging principle: co-constructed interaction (facet-specific questions, understanding checks) outperforms monological delivery; both illuminate that interaction patterns predict outcomes more than content quality.
- Can models learn to ask genuinely useful clarifying questions?
  Explores whether question-asking quality is teachable through decomposing it into specific attributes like clarity and relevance, rather than treating it as a monolithic skill.
  Relation: parallel decomposition: ALFA decomposes question quality into attributes; explanation decomposes into three interacting dimensions; both reject unitary quality measures.
Original note title: dialogical explanation quality depends on three interacting dimensions — topic relation, dialogue act, and explanation move — that jointly predict success