Language Understanding and Pragmatics

Do language models miss presuppositions that arise from context?

Presuppositions come from two sources: fixed word meanings and conversational dynamics. Can LLMs that learn trigger patterns detect presuppositions that emerge from discourse accommodation rather than lexical items?

Note · 2026-02-21 · sourced from Natural Language Inference
Where exactly does language competence break down in LLMs? How should researchers navigate LLM reasoning research?

Formal semantics distinguishes two routes by which presuppositions enter discourse:

  1. Lexical specification: Certain lexical items conventionally carry presuppositions as part of their meaning. "John stopped smoking" presupposes John was smoking — this is encoded in the lexical semantics of stop. The presupposition is stable across contexts: it survives embedding, negation, and questioning in predictable ways.

  2. Conversational derivation: Some presuppositions are not encoded in any trigger but arise from conversational dynamics — specifically, from accommodation. When a speaker asserts "The present king of France is wise," the listener accommodates the presupposition that there is a present king of France to keep the conversation coherent. This presupposition was not triggered by any specific lexical item; it emerged from the structure of the discourse.
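The contrast between the two routes can be made concrete with a toy trigger-based detector, the kind of pattern-matching a model might implicitly learn. Everything here is a hypothetical sketch: the lexicon covers only three illustrative triggers, and the regexes are far too crude for real text.

```python
import re

# Hypothetical toy trigger lexicon: regex pattern -> presupposition template.
# Each entry encodes a lexically specified presupposition (route 1 above).
TRIGGERS = {
    r"(\w+) stopped (\w+ing)": r"\1 was previously \2",  # aspectual verb
    r"(\w+) knows that (.+)": r"\2",                     # factive verb
    r"(\w+) (\w+ed) again\b": r"\1 had \2 before",       # iterative adverb
}

def lexical_presuppositions(sentence: str) -> list[str]:
    """Return only those presuppositions detectable from trigger words."""
    found = []
    for pattern, template in TRIGGERS.items():
        match = re.search(pattern, sentence)
        if match:
            found.append(match.expand(template))
    return found

print(lexical_presuppositions("John stopped smoking"))
# -> ['John was previously smoking']
print(lexical_presuppositions("The present king of France is wise"))
# -> []  (no entry in this toy lexicon fires)
```

The second call returning nothing is the point: a detector built purely around trigger lexemes has no hook for a presupposition that enters the discourse through accommodation rather than through a listed word.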

LLMs learn statistical associations between trigger lexemes and the inference patterns they generate. This gives them systematic but incomplete coverage: they can handle lexically specified presuppositions (at least in simple embedding contexts), but they fail at conversationally derived presuppositions, because those require tracking what the discourse has established rather than matching lexical cues.

Accommodation is the key mechanism. It is not triggered by a word; it is triggered by a mismatch between what the discourse assumes and what the speaker's utterance requires. LLMs that have learned trigger patterns will miss these accommodations because they are looking for lexical hooks that don't exist.
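The mismatch-driven character of accommodation can be sketched as a minimal common-ground tracker. The class and its method names are hypothetical, and the model is deliberately simplistic: propositions are opaque strings, and each utterance declares what it takes for granted.

```python
from dataclasses import dataclass, field

@dataclass
class Discourse:
    """Toy common-ground model of accommodation as discourse repair.

    An utterance carries requirements (propositions it takes for granted).
    Any requirement absent from the common ground is accommodated: silently
    added so the conversation stays coherent. No lexical trigger is consulted;
    the only signal is the mismatch itself.
    """
    common_ground: set[str] = field(default_factory=set)

    def utter(self, assertion: str, requires: set[str]) -> list[str]:
        accommodated = sorted(requires - self.common_ground)
        self.common_ground |= requires     # hearer repairs the mismatch
        self.common_ground.add(assertion)  # assertion enters the record
        return accommodated

d = Discourse()
added = d.utter(
    "the present king of France is wise",
    requires={"there is a present king of France"},
)
print(added)  # -> ['there is a present king of France']
```

A second utterance with the same requirement accommodates nothing, because the proposition is now in the common ground, which is exactly the state a pattern-matcher over surface strings never maintains.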

This is an extension of "Does projection strength vary by context or by word type?": the Gradient Projection Principle shows that even lexically triggered presuppositions vary in projection strength based on discourse context. Conversationally derived presuppositions add a second layer of context-sensitivity that goes beyond even the gradient revision of trigger-based accounts.

The implication for "Why do embedding contexts confuse LLM entailment predictions?" is that the failures observed there (LLMs treating triggers and non-factives identically) represent only one dimension of the problem. Even if LLMs were fixed to correctly handle lexical triggers, they would still fail at conversationally derived presuppositions, a harder problem that requires genuine discourse tracking rather than pattern recognition.


Source: Natural Language Inference


Presuppositions have a dual origin, lexical specification and conversational derivation, and LLMs that learn trigger patterns miss conversationally derived presuppositions.