Why are presuppositions more persuasive than direct assertions?
Explores why presenting information as shared background rather than as a claim makes it more persuasive to audiences. This matters because it reveals how language structure itself can bypass critical evaluation.
Presuppositions have special persuasive force. This has long been theorized in argumentation theory and pragmatics, but experimental evidence (Thoma et al. 2023) now confirms the effect causally across multiple trigger types.
The mechanism Sbisà (1999) identified: presuppositions "incidentally urge addressees to extend their (ideological) knowledge to make true the unstated assumptions writers have about what their addressee knows, which leads to greater agreement." In other words, by presenting content as shared background rather than as a direct claim, the speaker bypasses the evaluative stance that direct assertions trigger. Direct assertions invite assessment: "Is this true? Should I believe this?" Presuppositions slip past this gate by presenting their content as already accepted.
The experimental finding: this persuasive advantage is largest when presuppositions convey discourse-new information — information not already in the common ground — largely irrespective of the addressee's ideological involvement. The effect held across additive particles (auch, "too"), iterative particles (wieder, "again"), and factive verbs.
The distinction between persuasion (forming a belief) and accommodation (accepting for the conversation's purposes) matters here. Accommodation does not require full belief adoption — it only requires not objecting. But once accommodated, the presupposed content enters the common ground and can be built on by subsequent discourse. This makes false presuppositions particularly dangerous: a listener need only accommodate once, and the false belief is now available for further elaboration.
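The asymmetry between assertion and accommodation can be made concrete with a toy model of common-ground update. Everything here is illustrative, a sketch of the mechanism described above rather than any formal model from the cited literature: asserted content must pass an explicit evaluation gate, while presupposed content enters the common ground by default unless the hearer actively objects.

```python
# Toy model of the assertion/accommodation asymmetry.
# Names and structure are illustrative, not from the cited literature.

def process_utterance(common_ground: set, content: str,
                      packaging: str, evaluate) -> set:
    """Return the updated common ground after an utterance.

    Asserted content triggers the hearer's evaluative stance
    ("Is this true?"); presupposed content is accommodated by
    default, i.e. accepted merely because no one objected.
    """
    if packaging == "assertion":
        # The evaluation gate: content enters only if the hearer endorses it.
        if evaluate(content):
            return common_ground | {content}
        return common_ground
    if packaging == "presupposition":
        # Accommodation: the content is treated as already-shared background
        # and enters the common ground without being assessed.
        return common_ground | {content}
    raise ValueError(f"unknown packaging: {packaging}")


# A maximally skeptical hearer who rejects every direct claim.
skeptic = lambda claim: False

cg = set()
cg = process_utterance(cg, "taxes rose", "assertion", skeptic)
print(cg)   # the asserted claim is blocked at the evaluation gate

cg = process_utterance(cg, "taxes rose", "presupposition", skeptic)
print(cg)   # the identical content slips in via accommodation
```

Once the content is in `cg`, subsequent utterances can build on it, which is exactly why a single act of accommodation is enough to make a false belief available for further elaboration.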
Presupposition is the linguistic mechanism of false punditry. AI posts regularly phrase contested claims so that they appear obvious by using presupposition rather than assertion: instead of stating "X is true," the post treats X as already-agreed background and builds on it. This is how AI-generated commentary achieves its authoritative tone without performing the warranting work that a direct assertion would require: presupposed content does not invite the "is this true?" evaluation, so the reader accommodates rather than assesses. False punditry at the discourse level is precisely this pattern: claims presented as shared ground slip past the warranting gate that claims presented as assertions would have to pass.
Argument success is determined by audience presuppositions, not argument quality. The persuasive force of a claim depends heavily on whether it resonates with what the audience already presupposes — easily-accepted claims are those that slot into existing background assumptions, and even logically weaker arguments can outperform stronger ones when they are better aligned with audience presupposition. This has a specific AI implication: AI cannot know the presuppositions of a downstream audience because it is not addressing that audience. It is addressing the prompter. Aligning an argument to a reading audience's presuppositions requires knowing who that audience is and what they already believe — a social competence AI does not have. So AI-generated argument is structurally mis-targeted at the presupposition level even when its logical content is sound: it cannot deliberately activate the audience's presuppositions because it cannot model them, and it cannot avoid activating the wrong ones for the same reason.
This sharpens the stakes of the LLM grounding-failure research: per "Why do language models accept false assumptions they know are wrong?", LLMs are not merely failing to correct false presuppositions; they actively amplify presupposed content by accepting it into their responses, lending it the elevated persuasive force that backgrounded content carries.
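A minimal false-presupposition probe makes the failure mode testable. The probe question, the trigger choice, and the correction heuristic below are all illustrative assumptions, not a published evaluation protocol: the question embeds a false premise under a presupposition trigger, and a crude string heuristic checks whether a response pushes back on the premise rather than accommodating it.

```python
# Sketch of a false-presupposition probe in the spirit of the
# grounding-failure work discussed above. All specifics are assumptions.

# The probe embeds a false premise ("the Eiffel Tower moved to Berlin")
# as presupposed background rather than asserting it directly.
PROBE = "When did the Eiffel Tower move to Berlin?"

# Crude, illustrative markers that a response is challenging the premise.
CORRECTION_MARKERS = [
    "actually", "in fact", "never", "is not", "isn't", "did not", "didn't",
]

def corrects_presupposition(response: str) -> bool:
    """Heuristic: does the response reject the embedded premise
    rather than accommodating it? (String matching only; a real
    evaluation would need annotation or entailment checks.)"""
    r = response.lower()
    return any(marker in r for marker in CORRECTION_MARKERS)

# Two hypothetical model responses to PROBE:
accommodating = "The Eiffel Tower moved to Berlin in 1989."
correcting = "The Eiffel Tower never moved; it is in Paris, not Berlin."

print(corrects_presupposition(accommodating))  # False: premise accepted
print(corrects_presupposition(correcting))     # True: premise rejected
```

An accommodating answer is the amplification case: the model's fluent response repackages the false premise as settled background for the next turn of discourse.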
Source: Natural Language Inference
Related concepts in this collection
- Why do language models accept false assumptions they know are wrong? Explores why LLMs fail to reject false presuppositions embedded in questions even when they possess correct knowledge about the topic. This matters because it reveals a grounding failure distinct from knowledge deficits. Relation: the LLM failure case, in which models accommodate false presuppositions, effectively lending them persuasive force.
- Why do speakers deliberately use ambiguous language? Explores whether ambiguity is a linguistic defect or a strategic tool speakers use for efficiency, politeness, and deniability. This matters because it challenges how we train language systems. Relation: presuppositions' persuasive force is another designed property of language, not a defect.
- Why do language models skip the calibration step? Current LLMs assume shared understanding rather than building it through dialogue. This explores why that design choice persists and what breaks when it fails. Relation: presupposition accommodation is the mechanism by which static grounding propagates unchecked beliefs.
Original note title
presuppositions are more persuasive than assertions when they introduce discourse-new information