Why do speakers need to actively calibrate shared reference?
Explores whether using the same words guarantees speakers mean the same thing. Investigates how referential grounding differs across people and what collaborative work is needed to establish true understanding.
Two distinct uses of "grounding" in language research are often conflated. Referential grounding anchors linguistic expressions to things in the world. Communicative grounding is the collaborative process of establishing that what has been said has been understood — making an utterance part of interlocutors' common ground (Clark & Brennan 1991).
The crucial point: referential grounding differs across speakers. The same linguistic expression may be referentially grounded differently for different people due to differences in perception, knowledge, and conceptualisation. This means that calibrating reference in conversation requires communicative grounding — language users must actively collaborate to negotiate a common way of connecting language to the world.
Without communicative grounding, there is no guarantee that speakers mean the same thing even when using the same words. Two speakers can use "the neighborhood" and have entirely different referents. The shared surface form gives no assurance of shared meaning.
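A toy illustration of that point (the referent sets below are invented purely for this example): string equality between two speakers' expressions tells us nothing about referent equality.

```python
# Two speakers' referential groundings of the same expression.
# The referent sets are invented purely for illustration.
speaker_a = {"the neighborhood": {"Oak St", "Elm St", "the park"}}
speaker_b = {"the neighborhood": {"Main St", "the riverfront"}}

expression = "the neighborhood"
same_words = expression in speaker_a and expression in speaker_b  # True
same_referent = speaker_a[expression] == speaker_b[expression]    # False

print(same_words, same_referent)  # True False: shared form, unshared meaning
```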
AI collapses the three-party structure of public writing. Writers who address a public internalize a downstream audience distinct from any immediate interlocutor. They anticipate objections that will not come from the person in the room, frame arguments for readers who are not yet present, and take responsibility for communicating ideas to the eventual audience — not only to the editor, colleague, or prompter at hand. This is a three-party structure: writer → immediate interlocutor → downstream public, where the writer is accountable to both the immediate interlocutor and the downstream public simultaneously. AI removes the third party. Its responses are addressed to the prompter. Even when the prompter intends to publish, the AI is not calibrating shared reference with the reading audience; it is calibrating with the person typing the prompt. The public is not in the loop. A writer who takes AI output as a draft has to reconstruct the third-party relation themselves, without the AI's help, because the AI's grounding work extends only as far as the first addressee.
This has direct implications for LLM interaction. LLMs excel at referential grounding in a narrow sense (matching queries to patterns in their training data) but lack the collaborative mechanism for communicative grounding — they don't check whether their referential interpretation matches the user's. Since current models skip the calibration step (see "Why do language models skip the calibration step?" among the related concepts below) and act primarily as static grounders, the gap between linguistic surface agreement and actual shared understanding is structurally unaddressed.
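What the missing mechanism might look like can be sketched in a few lines. This is a hypothetical illustration, not an existing API: `detect_ambiguous_referents` is a toy stand-in for real ambiguity detection, and `generate` stands in for any LLM call.

```python
from dataclasses import dataclass

@dataclass
class Ambiguity:
    expression: str
    candidates: list[str]

def detect_ambiguous_referents(message: str, context: list[str]) -> list[Ambiguity]:
    """Toy stand-in: flag bare demonstratives that have no antecedent in context."""
    flagged = []
    for demonstrative in ("that", "it", "this"):
        if f" {demonstrative} " in f" {message.lower()} " and not context:
            flagged.append(Ambiguity(demonstrative, ["<no antecedent found>"]))
    return flagged

def respond(message: str, context: list[str], generate) -> str:
    """Ask a clarification question instead of silently picking a referent."""
    ambiguous = detect_ambiguous_referents(message, context)
    if ambiguous:
        options = "; ".join(f"'{a.expression}' could mean {a.candidates}" for a in ambiguous)
        return f"Before I answer: which referent did you mean? {options}"
    return generate(message, context)

# Usage: respond("summarise that", [], lambda m, c: "...") returns a
# clarification question rather than a guess.
```

The point of the sketch is the control flow, not the toy detector: a grounding-capable system spends a turn on calibration before committing to a referent.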
Source: Linguistics, NLP, NLU
Related concepts in this collection
- Why do language models skip the calibration step?
  Current LLMs assume shared understanding rather than building it through dialogue. This explores why that design choice persists and what breaks when it fails.
  Relation: the two modes of communicative grounding; LLMs are static.
- Do language models actually build shared understanding in conversation?
  When LLMs respond fluently to prompts, do they perform the communicative work humans do to establish mutual understanding? Research suggests they skip the grounding acts that make dialogue reliable.
  Relation: the grounding gap: LLMs skip the communicative work.
- Can language models learn meaning from text patterns alone?
  Explores whether training on form alone—predicting the next word from prior words—could ever give language models access to communicative intent and genuine semantic understanding.
  Relation: why the calibration capacity is absent.
- Can we teach LLMs to form linguistic conventions in context?
  Humans naturally shorten references as conversations progress, but LLMs don't adapt their language for efficiency even when they understand their partners do. Can training on coreference patterns teach this convention-forming behavior?
  Relation: a concrete mechanism for communicative grounding: training models to form ad-hoc conventions (shortening references through interaction) is a specific instance of calibrating shared reference.
- Why don't conversational AI systems mirror their users' word choices?
  Explores whether current dialogue models exhibit lexical entrainment—the human tendency to align vocabulary with conversation partners—and what's needed to bridge this gap in AI communication.
  Relation: lexical entrainment is the vocabulary-level mechanism of calibrating shared reference: adopting the interlocutor's terms rather than using equally valid alternatives ensures referential alignment.
- What breaks when humans and AI models misunderstand each other?
  Explores whether misalignment in mutual theory of mind between humans and AI creates only communication problems or produces material consequences in autonomous action and collaboration.
  Relation: MToM operationalizes communicative grounding in the human-AI context: the three-layer mutual modeling (human's model of AI, AI's model of human, bidirectional updating) is the calibration mechanism for shared reference when one party is an AI agent.
- Do vector embeddings actually measure task relevance?
  Vector embeddings rank semantic similarity, but RAG systems need topical relevance. When these diverge—as with king/queen versus king/ruler—does similarity-based retrieval fail in production?
  Relation: embedding retrieval is a technical instantiation of uncalibrated reference: the system maps query words to semantically associated document words without establishing that the association serves the query's communicative purpose; the king/queen/ruler failure is referential grounding failure at the retrieval layer (see the embedding sketch after this list).
- Why do time-based queries fail in conversational retrieval systems?
  Conversational memory systems struggle with questions that reference when something was discussed rather than what was said. Standard vector databases lack temporal indexing to retrieve by metadata like date, speaker, or session order.
  Relation: the disambiguation challenge (resolving "that" in conversational memory retrieval) is a concrete technical instantiation of uncalibrated shared reference: retrieval systems cannot resolve pronominal reference without the collaborative calibration process this note describes (see the timestamp-filtering sketch after this list).
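To make the embedding-retrieval entry concrete, here is a minimal sketch with invented four-dimensional vectors standing in for real embeddings. The numbers are assumptions chosen to reproduce the king/queen/ruler pattern, not measurements from any actual model.

```python
import math

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v)))

# Invented toy vectors: "queen" sits near "king" (shared royalty/gender
# dimensions), "ruler" less so. Real embeddings show the same pattern
# in far higher dimensions.
vectors = {
    "king":  [0.9, 0.8, 0.1, 0.0],
    "queen": [0.9, 0.7, 0.2, 0.0],
    "ruler": [0.6, 0.1, 0.8, 0.1],
}

query = vectors["king"]
ranked = sorted(["queen", "ruler"], key=lambda w: cosine(query, vectors[w]), reverse=True)
print(ranked)  # ['queen', 'ruler']: similarity ranks "queen" first even when
               # the query's communicative purpose (texts about rulers) would
               # make "ruler" the relevant match.
```

The ranking is correct by the similarity metric and wrong by the query's communicative purpose — exactly the uncalibrated-reference failure the entry describes.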
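For the time-based-query entry, a minimal sketch of timestamp filtering under the assumption of a simple in-memory store; every name here is hypothetical. The "when" constraint is answered by metadata, which a bare vector index discards.

```python
from datetime import date

# Hypothetical in-memory conversational memory: each turn stores text
# plus the metadata a pure vector index would discard.
memory = [
    {"text": "we chose the venue",  "speaker": "ana", "date": date(2024, 3, 1)},
    {"text": "budget was approved", "speaker": "ben", "date": date(2024, 3, 8)},
    {"text": "venue deposit paid",  "speaker": "ana", "date": date(2024, 3, 15)},
]

def retrieve(query_terms, after=None, speaker=None):
    """Filter by temporal/speaker metadata first, then match content."""
    hits = memory
    if after:
        hits = [m for m in hits if m["date"] > after]
    if speaker:
        hits = [m for m in hits if m["speaker"] == speaker]
    # Stand-in for similarity scoring: keyword overlap on the survivors.
    return [m for m in hits if any(t in m["text"] for t in query_terms)]

# "What did Ana say about the venue after March 1st?" -- the date and
# speaker constraints are metadata lookups a bare vector DB cannot express.
print(retrieve(["venue"], after=date(2024, 3, 1), speaker="ana"))
```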
Original note title: communicative grounding requires calibrating shared reference not just sharing words