Language Understanding and Pragmatics

Does projection strength vary by context or by word type?

Standard accounts treat presupposition projection as categorical, but do English expressions actually project uniformly? This question explores whether context and discourse role determine how strongly content survives embedding.

Note · 2026-02-21 · sourced from Natural Language Inference

Standard accounts of presupposition treat projection as categorical: presuppositions project (survive embedding under negation, questions, and modals) and non-presuppositions do not. The Gradient Projection Principle (Tonhauser, Beaver & Degen 2018) challenges this with robust empirical evidence.

Across 19 American English expressions, projectivity varies continuously, not discretely. The strength of projection is determined by a single organizing principle: content projects to the extent it is not at-issue — to the extent it does not address the Question Under Discussion (QUD) in that context.
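
A minimal sketch of the principle as a prediction rule, in Python. The at-issueness values below are invented placeholders for illustration only; they are not Tonhauser et al.'s measured ratings, which estimate both quantities from participant judgments.

```python
# Gradient Projection Principle as a toy prediction rule: projection strength
# rises as at-issueness falls. All numbers here are hypothetical.

EXPRESSIONS = {
    # expression: illustrative mean at-issueness of its projective content (0-1)
    "be annoyed (complement)": 0.10,
    "know (complement)": 0.15,
    "stop (pre-state)": 0.25,
    "discover (complement)": 0.30,
    "believe (complement)": 0.85,
}

def predicted_projectivity(at_issueness: float) -> float:
    """Content projects to the degree it is not at-issue."""
    return 1.0 - at_issueness

for expr, ai in sorted(EXPRESSIONS.items(), key=lambda kv: kv[1]):
    print(f"{expr:26s} at-issueness={ai:.2f} -> projectivity~{predicted_projectivity(ai):.2f}")
```

The only claim the toy model encodes is the monotone relation; the study itself finds a continuous spread of projectivity across expressions rather than two discrete classes.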

This has a consequence for how we think about presupposition triggers. The standard typology of factive predicates (know) vs. semi-factives (discover) vs. non-factives (believe) partially tracks projection but misses the contextual sensitivity. The same predicate can generate stronger or weaker projection depending on whether its complement addresses the current QUD. Embedded under "knows," "Obama improved the economy" projects strongly when the QUD is about someone's mental state, but less strongly when the QUD is about the economy itself, because the complement is then exactly what is at issue.
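
The contrast can be made concrete as a pair of minimal items. The QUD wordings and the addresses-QUD judgments below are stipulated for illustration; in an experiment they would come from participant ratings.

```python
# Same embedded sentence, two QUDs: projection is predicted to track whether
# the complement addresses the QUD. Judgments are stipulated, not measured.

ITEMS = [
    {
        "qud": "How well informed is Jane?",  # mental-state QUD
        "sentence": "Jane doesn't know that Obama improved the economy.",
        "target": "Obama improved the economy",
        "complement_addresses_qud": False,  # complement is backgrounded
    },
    {
        "qud": "Did Obama improve the economy?",  # economy QUD
        "sentence": "Jane doesn't know that Obama improved the economy.",
        "target": "Obama improved the economy",
        "complement_addresses_qud": True,  # complement is what is at issue
    },
]

for item in ITEMS:
    strength = "weaker" if item["complement_addresses_qud"] else "strong"
    print(f"QUD: {item['qud']}")
    print(f"  predicted projection of '{item['target']}': {strength}\n")
```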

For understanding LLM presupposition handling, this matters because LLMs learn categorical trigger patterns from training data (factive predicate, therefore presupposition trigger, therefore projection), while actual projection strength is context-sensitive and gradient. A model that learned only the categorical pattern will systematically miscalibrate when context shifts at-issueness, which is precisely the kind of context-sensitivity that Why do embedding contexts confuse LLM entailment predictions? shows LLMs lack.
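
One way to probe this, sketched under assumptions: hold the embedded sentence fixed, vary the QUD context, and compare the model's confidence in the presupposed content. `projection_gap`, `score_entailment`, and the toy scorer are hypothetical names for illustration, not an existing benchmark or API.

```python
# Sketch of a QUD-manipulation probe. `score_entailment` is a placeholder for
# any NLI model or prompted LLM returning P(hypothesis | premise) in [0, 1].

from typing import Callable

def projection_gap(
    score_entailment: Callable[[str, str], float],
    qud_backgrounded: str,   # context where the complement is NOT at-issue
    qud_at_issue: str,       # context where the complement IS at-issue
    embedded_sentence: str,  # e.g. the complement embedded under negation
    presupposed_content: str,
) -> float:
    """The gradient account predicts a positive gap (stronger projection when
    the content is backgrounded); a purely categorical model predicts ~0."""
    p_background = score_entailment(f"{qud_backgrounded} {embedded_sentence}",
                                    presupposed_content)
    p_at_issue = score_entailment(f"{qud_at_issue} {embedded_sentence}",
                                  presupposed_content)
    return p_background - p_at_issue

def context_blind_scorer(premise: str, hypothesis: str) -> float:
    """Toy stand-in for a model that learned only the categorical pattern:
    'know that' triggers projection regardless of context."""
    return 0.9 if "know that" in premise else 0.5

gap = projection_gap(
    context_blind_scorer,
    qud_backgrounded="We are discussing how well informed Jane is.",
    qud_at_issue="We are discussing whether the economy improved.",
    embedded_sentence="Jane doesn't know that Obama improved the economy.",
    presupposed_content="Obama improved the economy.",
)
print(f"projection gap across QUD contexts: {gap:+.2f}")  # 0.00: context-blind
```

A zero gap on items like these is the signature of a model that learned the trigger taxonomy but not the at-issueness dependence.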

This also connects to the broader principle that language is gradient and functional, not categorical and defective. As with Why do speakers deliberately use ambiguous language?, gradient projection is a feature, not a failure: it lets presuppositions project more or less strongly depending on discourse context.


Source: Natural Language Inference
