Why do speakers deliberately use ambiguous language?
Explores whether ambiguity is a linguistic defect or a strategic tool speakers use for efficiency, politeness, and deniability. Matters because it challenges how we train language systems.
Ambiguity is an intrinsic feature of natural language, not a failure of linguistic precision. Speakers actively exploit it.
Efficiency-clarity tradeoff (Zipf, 1949; Piantadosi et al., 2012): Language under pressure tends toward shorter, more ambiguous forms. The tradeoff is functional — context resolves most ambiguity, so the cost of ambiguity is low while the efficiency gain is high. Fully unambiguous language would be vastly more verbose. Natural language has the right amount of ambiguity for the conditions under which it operates.
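The tradeoff can be illustrated with a toy expected-cost model (my sketch, not from Zipf or Piantadosi et al.; all numbers are hypothetical): a shorter, ambiguous form beats a longer, unambiguous one whenever context resolves the ambiguity reliably enough.

```python
# Toy model of the efficiency-clarity tradeoff. Cost = utterance length
# plus an expected misunderstanding penalty when context fails to
# disambiguate. All parameter values are illustrative assumptions.

def expected_cost(word_len, p_context_resolves, misread_penalty):
    """Expected cost of using a form of a given length."""
    return word_len + (1 - p_context_resolves) * misread_penalty

# Unambiguous lexicon: longer forms, context never needed.
unambiguous = expected_cost(word_len=9, p_context_resolves=1.0, misread_penalty=12)

# Ambiguous lexicon: shorter forms, context disambiguates 95% of the time.
ambiguous = expected_cost(word_len=4, p_context_resolves=0.95, misread_penalty=12)

print(unambiguous)  # 9.0
print(ambiguous)    # 4.6 -- the ambiguous form is cheaper in expectation
```

The point of the sketch: because `p_context_resolves` is high in normal conversation, the penalty term stays small and the length saving dominates, which is why ambiguity is the efficient equilibrium rather than a defect.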
Politeness strategies: Indirect speech acts, polite requests, softened refusals — all rely on ambiguity between the literal and intended meaning. "Could you pass the salt?" is technically a question about capability. Its functional role as a request works through plausible ambiguity.
Covert messaging and deniability: Ambiguity allows speakers to send messages while maintaining plausible deniability. Political speech, social pressure, implicit threats — ambiguity is a tool for communicating what cannot be said directly. The AMBIENT benchmark documents this with examples of political claims that are "misleading due to ambiguity."
The implication for LLM design: systems trained to "resolve" or "eliminate" ambiguity are being trained against a functional property of human language. The goal is not disambiguation but ambiguity-sensitive processing — knowing when to ask for clarification, when to offer multiple interpretations, when to select contextually.
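One way to make "ambiguity-sensitive processing" concrete is a dispatch over candidate readings: ask for clarification when uncertainty is high, offer the live readings when it is moderate, and select when context has effectively decided. This is a minimal sketch of that idea; the entropy thresholds and function names are my assumptions, not an established algorithm.

```python
import math

def entropy(probs):
    """Shannon entropy (bits) of a probability distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def handle_ambiguity(interpretations, clarify_threshold=1.0, offer_threshold=0.3):
    """Decide how to respond given candidate readings.

    interpretations: list of (reading, probability) pairs.
    Thresholds are illustrative, not tuned values.
    """
    h = entropy([p for _, p in interpretations])
    if h >= clarify_threshold:
        return ("ask_clarification", None)              # too uncertain: ask
    if h >= offer_threshold:
        return ("offer_multiple", [r for r, _ in interpretations])
    best = max(interpretations, key=lambda rp: rp[1])
    return ("select", best[0])                          # context decided

# "I saw her duck": a near-even split should trigger a clarifying question.
print(handle_ambiguity([("waterfowl", 0.5), ("dodging motion", 0.5)]))
# -> ('ask_clarification', None)
```

The design choice to surface multiple readings at intermediate entropy, rather than forcing a single answer, is exactly what single-label training objectives penalize.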
Because standard benchmarks filter out ambiguous examples (see "Do standard NLP benchmarks hide LLM ambiguity failures?"), systems never learn this sensitivity. They are evaluated on unambiguous cases and produce single interpretations even where multiple are intended.
Source: Linguistics, NLP, NLU
Related concepts in this collection
- Do standard NLP benchmarks hide LLM ambiguity failures?
When benchmark creators filter out ambiguous examples before testing, do they accidentally make it impossible to measure whether language models can actually handle ambiguity the way humans do?
why LLMs can't handle this feature
- Can language models recognize when text is deliberately ambiguous?
Explores whether LLMs can identify and handle multiple valid interpretations in a single phrase—a core human language skill that appears largely absent in current models despite their fluency on standard tasks.
the specific failure metric
- Why do speakers need to actively calibrate shared reference?
Explores whether using the same words guarantees speakers mean the same thing. Investigates how referential grounding differs across people and what collaborative work is needed to establish true understanding.
calibrating reference is partly managing productive ambiguity
- Why do readers interpret the same sentence so differently?
How much of annotation disagreement in NLP reflects genuine interpretive multiplicity rather than error? This explores whether social position and moral framing systematically generate competing but equally valid readings.
interpretive multiplicity as the expected state
Original note title
ambiguity is a functional feature of language not a noise to eliminate