Language Understanding and Pragmatics · Conversational AI Systems · Psychology and Social Cognition

Can dialogue systems track both speakers' beliefs across turns?

Explores whether pragmatic reasoning frameworks can extend beyond single utterances to model how both conversation partners' understanding evolves. This matters because current dialogue systems lack principled ways to represent shared meaning-making.

Note · 2026-04-18 · sourced from Philosophy Subjectivity
Why do AI conversations reliably break down after multiple turns? Why do LLMs fail at understanding what remains unsaid?

The Rational Speech Act (RSA) framework models pragmatic reasoning as recursive social inference between speakers and listeners. But RSA has a fundamental limitation for dialogue: it handles single utterances, not evolving multi-turn conversations. Collaborative RSA (CRSA) addresses this by integrating a multi-turn gain function grounded in interactive rate-distortion theory.
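
To make the recursion concrete, here is a minimal single-utterance RSA sketch (not code from the paper; the toy lexicon, uniform prior, and rationality value are invented for illustration): a literal listener, a soft-max pragmatic speaker, and a pragmatic listener that inverts the speaker.

```python
# Minimal single-utterance RSA sketch. Toy lexicon and prior are placeholders.
import numpy as np

lexicon = np.array([  # truth values: rows = utterances, cols = meanings
    [1, 1, 0],        # "some" is true of meanings 0 (some-not-all) and 1 (all)
    [0, 1, 0],        # "all"  is true of meaning 1 only
    [0, 0, 1],        # "none" is true of meaning 2 only
], dtype=float)
prior = np.array([1/3, 1/3, 1/3])   # uniform prior over meanings
alpha = 4.0                          # speaker rationality (illustrative value)

def normalize(x, axis):
    return x / x.sum(axis=axis, keepdims=True)

# Literal listener: L0(m | u) ∝ [[u]](m) · P(m)
L0 = normalize(lexicon * prior, axis=1)

# Pragmatic speaker: S1(u | m) ∝ exp(α · log L0(m | u)), zero utterance cost
with np.errstate(divide="ignore"):
    S1 = normalize(np.exp(alpha * np.log(L0)), axis=0)

# Pragmatic listener: L1(m | u) ∝ S1(u | m) · P(m)
L1 = normalize(S1 * prior, axis=1)

print(L1)  # hearing "some" now favors the some-not-all meaning (implicature)
```

The recursion is a single back-and-forth over one utterance; nothing in it carries state from turn to turn, which is exactly the gap CRSA targets.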

The key extension: both agents have private information. Each produces utterances conditioned on the full dialogue history. The gain function tracks the evolving beliefs of both interlocutors: not just one listener inferring one speaker's intent, but a bidirectional, progressive convergence toward shared understanding.
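
A rough sketch of what bidirectional tracking could look like operationally, under strong simplifying assumptions. This is not the CRSA gain function: it is just two agents with private states who take turns speaking and each maintain a Bayesian posterior over the other's state. The noisy-signal likelihood and the doctor/patient naming are placeholders.

```python
# Two agents, each with a private state; each turn the listener updates a
# Bayesian posterior over the speaker's private state. Illustrative only.
import numpy as np

STATES = 3                      # size of each agent's private state space
NOISE = 0.1                     # probability the utterance misreports the state
rng = np.random.default_rng(0)

class Agent:
    def __init__(self, private_state):
        self.private_state = private_state
        self.belief_about_partner = np.full(STATES, 1.0 / STATES)  # prior

    def speak(self, history):
        # Literal speaker: noisy signal about own private state. For
        # simplicity it ignores the history; CRSA conditions on it.
        probs = np.full(STATES, NOISE / (STATES - 1))
        probs[self.private_state] = 1.0 - NOISE
        return rng.choice(STATES, p=probs)

    def listen(self, utterance):
        # Bayesian update of the belief about the partner's private state,
        # using the same noisy-signal likelihood the speaker uses.
        likelihood = np.full(STATES, NOISE / (STATES - 1))
        likelihood[utterance] = 1.0 - NOISE
        posterior = likelihood * self.belief_about_partner
        self.belief_about_partner = posterior / posterior.sum()

doctor, patient = Agent(private_state=2), Agent(private_state=0)
history = []
for turn in range(4):
    speaker, listener = (doctor, patient) if turn % 2 == 0 else (patient, doctor)
    u = speaker.speak(history)
    listener.listen(u)
    history.append(u)
    print(turn, listener.belief_about_partner.round(3))  # posteriors sharpen
```

Even this stripped-down version shows the shape of the problem: both posteriors tighten over turns, and any principled objective (CRSA's gain function) has to score utterances by how much they move both of them.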

Demonstrated on: referential games and template-based doctor-patient dialogues (disease diagnosis from symptoms). CRSA captures the progression from partial to shared understanding across turns.

A critical acknowledged limitation: there is no systematic way to model the meaning spaces, which remain application-dependent. And shifting from utterance-level to token-level reasoning (needed to scale to real LLMs) may influence pragmatic capabilities; the reasoning-granularity problem is unresolved.

This provides the mathematical framework that current LLM dialogue systems lack. Given the fluency gap (LLM text is linguistically well-formed but communicatively empty because fluency substitutes for the grounding work that makes communication meaningful), CRSA offers a principled alternative: pragmatic reasoning grounded in information theory rather than next-token prediction. The open question is whether token-level LLM generation can implement utterance-level pragmatic optimization.

Relative to "Why do standard alignment methods ignore partner interventions?", CRSA's bidirectional belief tracking is the theoretical complement to the counterfactual invariance approach: one addresses partner modeling through reward engineering, the other through information-theoretic architecture.


Source: Philosophy Subjectivity · Paper: Collaborative Rational Speech Act


collaborative rational speech acts extend pragmatic reasoning to multi-turn dialogue by modeling evolving beliefs of both interlocutors through rate-distortion theory