Language Understanding and Pragmatics

Why do speakers need to actively calibrate shared reference?

Explores whether using the same words guarantees speakers mean the same thing. Investigates how referential grounding differs across people and what collaborative work is needed to establish true understanding.

Note · 2026-02-21 · sourced from Linguistics, NLP, NLU
Where exactly does language competence break down in LLMs? How should researchers navigate LLM reasoning research?

Two distinct uses of "grounding" in language research are often conflated. Referential grounding anchors linguistic expressions to things in the world. Communicative grounding is the collaborative process of establishing that what has been said has been understood — making an utterance part of interlocutors' common ground (Clark & Brennan 1991).

The crucial point: referential grounding differs across speakers. The same linguistic expression may be referentially grounded differently for different people due to differences in perception, knowledge, and conceptualisation. This means that calibrating reference in conversation requires communicative grounding — language users must actively collaborate to negotiate a common way of connecting language to the world.

Without communicative grounding, there is no guarantee that speakers mean the same thing even when using the same words. Two speakers can use "the neighborhood" and have entirely different referents. The shared surface form gives no assurance of shared meaning.
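The mismatch can be made concrete with a toy sketch. This is an illustration, not a model of any real system: two speakers with hypothetical private referent sets for the same expression, and a calibration step that compares groundings instead of assuming that a shared surface form implies a shared referent.

```python
# Toy illustration: two interlocutors ground the same expression
# differently. Names and referent sets are invented for the example.

def grounding_for(speaker: dict, expression: str) -> frozenset:
    """A speaker's private referential grounding: expression -> referent set."""
    return speaker["referents"][expression]

def calibrate(speaker_a: dict, speaker_b: dict, expression: str) -> bool:
    """Communicative grounding step: check whether the groundings actually
    coincide, rather than trusting the shared surface form."""
    return grounding_for(speaker_a, expression) == grounding_for(speaker_b, expression)

alice = {"referents": {"the neighborhood": frozenset({"elm st", "oak st"})}}
bob = {"referents": {"the neighborhood": frozenset({"oak st", "pine st"})}}

# Same words, different referents: surface agreement without shared meaning.
shared = calibrate(alice, bob, "the neighborhood")
```

Here `shared` comes out `False`: the expression matches, the reference does not, which is exactly what the surface form alone cannot reveal.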

AI collapses the three-party structure of public writing. Writers who address a public internalize a downstream audience distinct from any immediate interlocutor. They anticipate objections that will not come from the person in the room, frame arguments for readers who are not yet present, and take responsibility for communicating ideas to the eventual audience, not only to the editor, colleague, or prompter at hand. This is a three-party structure: writer → immediate interlocutor → downstream public, with the writer accountable across all three relations simultaneously. AI removes the third party. Its responses are addressed to the prompter. Even when the prompter intends to publish, the AI is not calibrating shared reference with the reading audience; it is calibrating with the person typing the prompt. The public is not in the loop. A writer who takes AI output as a draft has to reconstruct the third-party relation themselves, without the AI's help, because the AI's grounding work extends only as far as the first addressee.

This has direct implications for LLM interaction. LLMs excel at referential grounding in a narrow sense (matching queries to patterns in their training data) but lack the collaborative mechanism for communicative grounding: they don't check whether their referential interpretation matches the user's. Because language models skip the calibration step and act primarily as static grounders, the gap between surface-level linguistic agreement and actual shared understanding goes structurally unaddressed.
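What the missing calibration step would look like can be sketched as an interaction protocol. This is a hypothetical sketch, not a real LLM API: `interpret` stands in for static referential grounding, and the `confirm` callback stands in for the collaborative check that current systems skip.

```python
# Hypothetical sketch of a calibration step before answering:
# surface the interpretation and require confirmation from the user.

def interpret(query: str) -> str:
    """Stand-in for static referential grounding: map a query to one reading."""
    readings = {"ship it": "deploy the release to production"}  # invented example
    return readings.get(query, query)

def answer_with_calibration(query: str, confirm) -> str:
    """Communicative grounding loop: do not act on a reading until the
    interlocutor has confirmed it is the intended one."""
    reading = interpret(query)
    if not confirm(f"By '{query}', do you mean: {reading}?"):
        return "clarification needed"  # reopen grounding instead of guessing
    return f"proceeding with: {reading}"

# A user for whom "ship it" means something else rejects the reading,
# and the mismatch surfaces before any action is taken.
result = answer_with_calibration("ship it", lambda question: False)
```

A static grounder returns its reading directly; the loop above differs only in making the reading inspectable and contestable before it is acted on, which is the communicative-grounding move the note argues LLMs lack.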


