Why do embedding contexts confuse LLM entailment predictions?
Can language models distinguish between contexts that preserve versus cancel entailments? The study explores whether LLMs systematically fail to apply the semantic rules governing presupposition triggers and non-factive verbs.
"Simple Linguistic Inferences of LLMs" targets inferences humans find trivial — grammatically-specified entailments ("You've eaten all my apples" entails "Someone ate something"), evidential adverbs of uncertainty ("allegedly" cancels the entailment of the clause), and monotonicity entailments (specific→general). LLMs show moderate-to-low performance on all three.
But the more revealing finding is what happens when the premise is embedded in a larger grammatical context. Two types of embedding contexts should have opposite effects (a code sketch follows the list):
- Presupposition triggers (factive verbs: "realized that", "regret that"; temporal clauses: "before X"): embedding under these should not change the original entailment relations — the premise's entailments are preserved because presuppositions project through these contexts.
- Non-factive verbs (believe, imagine, suspect, feel): embedding under these should cancel entailments — "I suspect a balloon hit a light post" no longer entails "something hit a light post."
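A minimal sketch of how such matched pairs could be probed, not the paper's own setup (the paper evaluated ChatGPT via prompting): it assumes an off-the-shelf MNLI checkpoint, here roberta-large-mnli via Hugging Face transformers, and compares the predicted label for the same hypothesis when the clause is embedded under a factive versus a non-factive verb.

```python
# Sketch: does an NLI model preserve entailments under factive embeddings
# and cancel them under non-factive embeddings?
# Assumptions: the roberta-large-mnli checkpoint and the transformers/torch APIs.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL = "roberta-large-mnli"  # any MNLI-style model with ENTAILMENT/NEUTRAL/CONTRADICTION labels
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForSequenceClassification.from_pretrained(MODEL)

hypothesis = "Something hit a light post."
items = [
    # (premise with embedding context, label the semantic rule predicts)
    ("I realized that a balloon hit a light post.", "ENTAILMENT"),  # factive: presupposition projects
    ("I suspect that a balloon hit a light post.",  "NEUTRAL"),     # non-factive: entailment cancelled
]

for premise, expected in items:
    enc = tokenizer(premise, hypothesis, return_tensors="pt")
    with torch.no_grad():
        logits = model(**enc).logits
    predicted = model.config.id2label[logits.argmax(dim=-1).item()]
    print(f"{premise!r}: predicted={predicted}, expected={expected}")
```

A model that tracks the semantics should flip its label between the two items; a model leaning on the surface cue of the embedding verb will not.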
LLMs cannot make this discrimination. ChatGPT in regular prompting mode treats both presupposition triggers and non-factives as hints toward entailment. In chain-of-thought mode, it treats both as hints against entailment. The embedding context overwhelms the semantics of the embedded content, acting as a "blind" that masks the relevant inferential relationships.
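The two prompting modes differ only in whether the model is asked to reason before answering. A rough illustration of that contrast, with hypothetical prompt wording (the paper's exact templates are not reproduced here):

```python
# Hypothetical prompt templates illustrating regular vs. chain-of-thought probing;
# the wording is an assumption, not the paper's template.
def entailment_prompt(premise: str, hypothesis: str, chain_of_thought: bool = False) -> str:
    prompt = (
        f'Premise: "{premise}"\n'
        f'Hypothesis: "{hypothesis}"\n'
        "Does the premise entail the hypothesis? Answer yes, no, or unknown."
    )
    if chain_of_thought:
        prompt += "\nLet's think step by step before giving the final answer."
    return prompt

premise = "I suspect a balloon hit a light post."
hypothesis = "Something hit a light post."
print(entailment_prompt(premise, hypothesis))                         # regular mode
print(entailment_prompt(premise, hypothesis, chain_of_thought=True))  # chain-of-thought mode
```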
This is a different kind of failure from general reasoning difficulty — these are structural failures where syntactic packaging overrides semantic content. The model responds to the embedding verb (factive vs. non-factive) as a surface cue rather than computing its effect on the entailment relation. This is precisely the pattern Can models pass tests while missing the actual grammar? predicts: surface cues substituting for structural analysis.
The persistence across multiple prompts and LLMs confirms this is systematic, not incidental — "a systematic issue" in the paper's words.
Source: Natural Language Inference
Related concepts in this collection
- Can models pass tests while missing the actual grammar? Do language models succeed on grammatical benchmarks by learning surface patterns rather than structural rules? This matters because correct outputs may hide reliance on shallow heuristics that fail on novel structures. (Same mechanism: surface context cues substituting for structural computation.)
- Does LLM grammatical performance decline with structural complexity? This explores whether LLMs fail uniformly at grammar or whether their failures follow a predictable pattern tied to input complexity. Understanding the relationship matters for deciding when LLM annotations are reliable. (Embedding contexts add structural complexity; this is another specific complexity type that causes systematic failure.)
- Why does ChatGPT fail at implicit discourse relations? ChatGPT excels when discourse connectives are present but drops to 24% accuracy without them. What does this gap reveal about how LLMs actually process meaning and logical relationships? (Parallel structure: surface markers, whether connectives or embedding verbs, override deeper semantic computation.)
Original note title: presupposition triggers and non-factive verbs are embedding blinds that systematically miscalibrate llm entailment predictions