Language Understanding and Pragmatics

Why do language models sound fluent without grounding?

Explores whether LLM fluency masks the absence of communicative work—the clarifying questions, acknowledgments, and understanding checks that humans perform. Why does skipping these acts make models sound more confident?

Note · 2026-02-21 · sourced from Linguistics, NLP, NLU
Where exactly does language competence break down in LLMs? How should researchers navigate LLM reasoning research?

Post angle: The most counterintuitive finding about LLM conversational competence is not that models fail; it's the specific way they fail. LLMs generate 77.5% fewer grounding acts than humans in equivalent contexts. They don't ask clarifying questions. They don't acknowledge understanding. They don't check interpretations. They proceed.
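
To make the 77.5% figure concrete, here is a minimal sketch of how a grounding-act gap could be computed over matched human and model transcripts. The cue phrases, the GROUNDING_CUES table, and both helper functions are illustrative assumptions; the underlying study presumably used a richer annotation scheme than surface pattern matching.

```python
import re

# Cue phrases per grounding-act type. The three categories follow the
# taxonomy in this note (clarifying questions, acknowledgments,
# understanding checks); the specific patterns are illustrative
# assumptions, not the study's annotation scheme.
GROUNDING_CUES = {
    "clarification": [r"\bdo you mean\b", r"\bcould you clarify\b"],
    "acknowledgment": [r"\bgot it\b", r"\bi see\b", r"\bmakes sense\b"],
    "understanding_check": [r"\bjust to confirm\b", r"\bif i understand correctly\b"],
}

def count_grounding_acts(turns: list[str]) -> int:
    """Count turns that contain at least one grounding cue."""
    return sum(
        1 for turn in turns
        if any(re.search(p, turn, re.IGNORECASE)
               for patterns in GROUNDING_CUES.values() for p in patterns)
    )

def grounding_gap(human_turns: list[str], model_turns: list[str]) -> float:
    """Relative reduction in model grounding acts; 0.775 would match 77.5%."""
    human = count_grounding_acts(human_turns)
    model = count_grounding_acts(model_turns)
    return 1 - model / human if human else 0.0
```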

The irony: this absence contributes to the impression of fluency. Clarifying questions interrupt flow. Acknowledgments add friction. Checking understanding is a kind of epistemic humility that confident answers don't perform. A model that never expresses uncertainty, never asks "do you mean X or Y?", and never says "just to confirm I understand correctly" sounds authoritative.

But what sounds like confidence is partly the absence of competence. Human conversational experts ask more questions, acknowledge more, repair more — not because they know less but because they know enough to know when mutual understanding needs to be verified.

The Grounding Gaps finding reveals that preference optimization (RLHF) actively erodes this behavior. Human raters prefer confident, fluent, complete answers over those with clarifying questions. So optimization removes the communicative work — and the model gets better ratings for doing less of what conversation actually requires.
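
The erosion mechanism can be made precise with a toy calculation, assuming the standard Bradley-Terry objective used to train reward models and an invented 70% rater win rate for direct answers over answers that open with a clarifying question.

```python
import math

def bradley_terry_gap(direct_win_rate: float) -> float:
    """Reward advantage implied by pairwise preferences under a
    Bradley-Terry model: r_direct - r_clarify = log(p / (1 - p))."""
    return math.log(direct_win_rate / (1 - direct_win_rate))

# Invented number: raters prefer the fluent, complete answer 70% of
# the time when paired against one that asks a clarifying question.
gap = bradley_terry_gap(0.70)
print(f"implied reward gap: {gap:.2f} nats")  # ~0.85

# A softmax policy optimized against that reward shifts mass away from
# clarifying responses: p(clarify) falls from 50% to roughly 30%.
p_clarify = 1.0 / (1.0 + math.exp(gap))
print(f"p(clarify) after optimization: {p_clarify:.2f}")
```

The point is qualitative: any preference signal that consistently favors the direct style becomes a reward gap, and the optimized policy converges on skipping the question.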

Write about: what we call "fluency" may be partly the absence of communicative accountability. The most fluent response is often the one that presumes you understood it.

The observer-systems dimension: The grounding gap has a deeper epistemological layer visible from the perspective of observer-systems theory (Bateson, Luhmann). As argued in "Can AI distinguish which differences actually matter?", AI is not merely skipping communicative work; it is not an observer in the first place. Experts ground their communication through observation: they perceive the state of knowledge, the needs of the audience, and the relevance of their own contribution. This observation is communicative work; it is how the expert decides what to say, what to omit, and what to verify. AI generates responses from prompts without observing any state: not the state of knowledge, of the user, of the audience, or of the context. The 77.5% grounding gap quantifies the absence of communicative acts; the observer-systems framing explains why those acts are absent: the generative process that produces AI output is fundamentally non-observational. Fabrication, in this light, is not just the absence of grounding; it is the consequence of generating without observing.


Source: Linguistics, NLP, NLU; enriched from inbox/Knowledge Custodians.md

Original note title

the grounding gap — what makes llms seem fluent is the absence of communicative work