Language Understanding and Pragmatics · Psychology and Social Cognition · Conversational AI Systems

Can AI systems detect and correct misunderstandings after responding?

How do conversational systems recognize when their previous response was based on a misunderstanding, and what mechanism allows them to correct it retroactively rather than restart?

Note · 2026-02-22 · sourced from Question Answer Search
Related: Why do AI agents fail to take initiative? · What kind of thing is an LLM really? · How should researchers navigate LLM reasoning research?

Conversation Analysis identifies a highly systematic mechanism for handling miscommunication called Third Position Repair (TPR). The sequence: a speaker says something (T1, the trouble source), the addressee misunderstands and responds based on that misunderstanding (T2), which reveals the misunderstanding to the original speaker, who then corrects it (T3).
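The three-turn structure can be made concrete with a minimal sketch. The `Turn` and `RepairSequence` types and the example dialogue below are illustrative inventions, not drawn from any dataset:

```python
from dataclasses import dataclass

@dataclass
class Turn:
    speaker: str
    text: str

@dataclass
class RepairSequence:
    """The three-turn structure of Third Position Repair (TPR)."""
    t1: Turn  # trouble source: the original utterance that gets misread
    t2: Turn  # erroneous response: acts on the misreading, making it visible
    t3: Turn  # repair: the original speaker corrects the misunderstanding

seq = RepairSequence(
    t1=Turn("user", "Can you book the flight?"),
    t2=Turn("agent", "I've reserved a hotel near the airport."),
    t3=Turn("user", "No, I meant the plane ticket, not a hotel."),
)
```

Note that the misunderstanding only becomes detectable at T2: nothing in T1 alone signals trouble, which is why this repair type is necessarily reactive.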

This is fundamentally different from other repair mechanisms. Insert-expansions (since When should AI agents ask users instead of just searching?) are pre-emptive — they detect potential misunderstanding before committing to a response. TPR is reactive — it detects misunderstanding only after the erroneous response makes it visible. Both are necessary. Pre-emptive repair catches uncertainty before it compounds. Reactive repair catches failures that pre-emptive mechanisms missed.
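The division of labor between the two repair modes can be sketched as a simple dispatch. The function name, the confidence signal, and the threshold are all assumptions for illustration; real systems would need a calibrated uncertainty estimate:

```python
def next_action(confidence: float, user_flagged_error: bool,
                threshold: float = 0.7) -> str:
    """Dispatch between the two repair modes.

    Pre-emptive (insert-expansion): low confidence before committing
    to a response, so ask a clarifying question first.
    Reactive (TPR): the user's follow-up reveals the previous response
    was wrong, so repair that response instead of starting over.
    """
    if user_flagged_error:
        return "third_position_repair"  # reactive: correct the prior turn
    if confidence < threshold:
        return "insert_expansion"       # pre-emptive: clarify before answering
    return "respond"
```

The two branches are not interchangeable: raising the clarification threshold catches more trouble pre-emptively but cannot reach failures the system was confidently wrong about, which only reactive repair can address.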

Current AI systems have neither. They don't proactively probe (the passivity problem) and they don't retroactively correct when their responses reveal misunderstanding. When a user says "No, that's not what I meant," the model typically starts from scratch rather than diagnosing what specific misunderstanding occurred and correcting it structurally.

The REPAIR-QA dataset is the first large dataset of TPRs in a conversational QA setting. The key challenge: to handle TPR, a system must (a) recognize that its previous response was based on a misunderstanding, (b) identify specifically what was misunderstood, and (c) generate a corrected response. This requires maintaining a model of what it assumed and being able to revise that assumption — a form of dynamic belief revision that current single-pass generation architectures don't support.
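Requirements (a)–(c) can be sketched as a pipeline that keeps its interpretive assumption explicit rather than implicit in a single generation pass. Everything here is a hypothetical illustration of the structure, not an implementation from REPAIR-QA; in particular, a real system would have to diagnose the faulty assumption from the correction, whereas this sketch lets the user's correction supply it directly:

```python
from dataclasses import dataclass

@dataclass
class Interpretation:
    """An explicit record of what the system assumed when it answered."""
    question: str
    assumption: str  # the reading the system committed to
    answer: str

def repair(prev: Interpretation, correction: str) -> Interpretation:
    """Handle a third position repair against a stored interpretation:
    (a) the caller has recognized the prior answer rested on a misreading,
    (b) the stored assumption identifies what was misunderstood,
    (c) a corrected interpretation is produced to re-answer from.
    """
    return Interpretation(
        question=prev.question,
        assumption=correction,  # (b) revise: swap out the bad reading
        answer=f"[re-answer under: {correction}]",  # (c) regenerate
    )

first = Interpretation(
    question="How long is the course?",
    assumption="duration of one lecture",
    answer="90 minutes",
)
# (a) the user's "No, I meant..." triggers the repair path
fixed = repair(first, "total length of the whole course")
```

The point of the sketch is step (b): without a stored `assumption`, there is nothing to revise, and the only option is the restart-from-scratch behavior described above.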

Since Do language models actually build shared understanding in conversation?, TPR is precisely the mechanism for correcting false common ground after it has been acted on. And since Why do language models sound fluent without grounding?, TPR is a specific form of communicative work that fluent models skip: the reactive repair that would make misunderstanding visible and correctable rather than letting it silently compound. And since Why do AI assistants get worse at longer conversations?, TPR addresses the "can't recover" half — it is the formal mechanism for recovery after a wrong turn has already been made.



Third Position Repair addresses misunderstanding correction after an erroneous response reveals it — a systematic repair mechanism absent from current AI systems.