Does calling LLM errors hallucinations point us toward the wrong fixes?
Explores whether the metaphor of 'hallucination' for LLM errors misdirects our efforts. The terminology we choose shapes which interventions we prioritize and how we conceptualize the underlying problem.
Post angle: The word "hallucination" for LLM errors is not just imprecise — it's actively misleading in a way that shapes what we try to fix.
Hallucination is a perceptual phenomenon: you perceive something that isn't there. The fix is better perception — better access to ground truth, better verification against sensory experience. If LLMs "hallucinate," the solution is to ground them better: give them access to real-time data, retrieval-augmented generation, external verification.
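For concreteness, the remedy this framing suggests looks roughly like the sketch below: retrieve supporting passages at inference time and condition generation on them. The names `search_index` and `call_llm` are hypothetical stand-ins for a retriever and a model call, not any particular library's API.

```python
# Minimal sketch of the grounding-first remedy (retrieval-augmented generation).
# `search_index` and `call_llm` are hypothetical placeholders for a retriever
# and an LLM call; swap in whatever components your stack actually provides.
from typing import Callable, List


def grounded_answer(question: str,
                    search_index: Callable[[str, int], List[str]],
                    call_llm: Callable[[str], str],
                    k: int = 3) -> str:
    """Fetch k relevant passages, prepend them, and let the model answer."""
    passages = search_index(question, k)
    context = "\n\n".join(passages)
    prompt = (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )
    return call_llm(prompt)
```

Note that nothing in this pipeline checks whether the model actually used the retrieved context; it only changes what the model conditions on.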
But this is the wrong frame. LLMs don't perceive. They generate. The process that produces a true statement is identical to the process that produces a false one. Both are statistical pattern completions from training data. There is no internal mechanism that would allow a correctly-grounded output to be distinguished from a fabricated one, because neither is "grounded" in the sense that perception is.
"Confabulation" — the other common term — imports psychology. Confabulation is a memory compensation mechanism: producing plausible narratives to fill gaps in functioning memory, typically associated with neurological conditions. LLMs don't have functioning memory with gaps. They have trained weights that produce outputs.
"Fabrication" is more honest: generating text without grounding in shared context or world experience, where the generative process is the same regardless of output accuracy. This reframes the problem correctly: the issue is not detection of bad outputs from good ones, but the absence of grounding that would make any output verifiable.
The practical difference: "hallucination" points toward better grounding at inference time. "Fabrication" points toward verification systems, calibrated uncertainty, and use-case design that doesn't demand reliability where no verification infrastructure exists.
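A minimal sketch of what the fabrication framing points toward, assuming a hypothetical `generate` callable that returns one sampled completion: instead of trying to tell grounded outputs from fabricated ones, sample several completions, measure how much they agree, and route low-agreement answers to external verification. The `token_overlap` helper and the `threshold` value are crude placeholders.

```python
# Sketch: calibrated-uncertainty-style routing under the fabrication framing.
# Every output is treated as unverified; agreement across samples estimates
# how much external checking an answer needs. All names here are illustrative.
from itertools import combinations
from typing import Callable, List


def token_overlap(a: str, b: str) -> float:
    """Crude stand-in for semantic similarity (Jaccard over lowercased tokens).
    A real system would use sentence embeddings or an entailment model."""
    sa, sb = set(a.lower().split()), set(b.lower().split())
    return len(sa & sb) / len(sa | sb) if (sa | sb) else 1.0


def consistency_score(prompt: str, generate: Callable[[str], str], n: int = 5) -> float:
    """Sample n completions and return mean pairwise agreement in [0, 1]."""
    samples: List[str] = [generate(prompt) for _ in range(n)]
    pairs = list(combinations(samples, 2))
    return sum(token_overlap(a, b) for a, b in pairs) / len(pairs)


def needs_verification(prompt: str, generate: Callable[[str], str],
                       threshold: float = 0.6) -> bool:
    """Flag answers whose samples disagree; send them to a human or a retrieval-backed check."""
    return consistency_score(prompt, generate) < threshold
```

The design choice is the point: nothing here separates "grounded" from "fabricated" outputs. It only measures how stable the generative process is for a given prompt, which is exactly the kind of signal the fabrication framing makes central.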
Source: Linguistics, NLP, NLU
Related concepts in this collection
- Should we call LLM errors hallucinations or fabrications?
  Does the language we use to describe LLM failures shape the technical solutions we build? Examining whether perceptual and psychological frameworks misdiagnose what's actually happening.
  Relation: the underlying insight
- What makes linguistic agency impossible for language models?
  From an enactive perspective, does linguistic agency require embodied participation and real stakes that LLMs fundamentally lack? This matters because it challenges whether LLMs can truly engage in language or only generate text.
  Relation: why fabrication is structural
- Do language models actually use their encoded knowledge?
  Probes can detect that LMs encode facts internally, but do those encoded facts causally influence what the model generates? This explores the gap between knowing and doing.
  Relation: what IS happening
- Can we detect when language models confabulate?
  Current uncertainty metrics fail to catch inconsistent outputs that look confident. Could measuring semantic divergence across samples reveal confabulation signals that token-level metrics miss?
  Relation: operationalizes detection of one class of fabrication by measuring meaning-level inconsistency across sampled outputs. The method is terminologically aligned with the fabrication framing: it treats all generation as the same process and flags semantic inconsistency rather than deviation from "truth".
Original note title
llms are fabricators not hallucinators — why terminology shapes how we fix ai