Language Understanding and Pragmatics · LLM Reasoning and Architecture · Psychology and Social Cognition

Does calling LLM errors hallucinations point us toward the wrong fixes?

Explores whether the metaphor of 'hallucination' for LLM errors misdirects our efforts. The terminology we choose shapes which interventions we prioritize and how we conceptualize the underlying problem.

Note · 2026-02-21 · sourced from Linguistics, NLP, NLU
What kind of thing is an LLM really? How should researchers navigate LLM reasoning research?

Post angle: The word "hallucination" for LLM errors is not just imprecise — it's actively misleading in a way that shapes what we try to fix.

Hallucination is a perceptual phenomenon: you perceive something that isn't there. The fix is better perception — better access to ground truth, better verification against sensory experience. If LLMs "hallucinate," the solution is to ground them better: give them access to real-time data, retrieval-augmented generation, external verification.

But this is the wrong frame. LLMs don't perceive. They generate. The process that produces a true statement is identical to the process that produces a false one. Both are statistical pattern completions from training data. There is no internal mechanism that would allow a correctly grounded output to be distinguished from a fabricated one, because neither is "grounded" in the sense that perception is.
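The point that truth plays no role in the generative mechanism can be made concrete with a toy sketch. The distribution below is invented for illustration (it is not a real model's output); the key observation is that the sampling function treats the factually correct and factually wrong continuations identically.

```python
import random

# Toy next-token model: a fixed distribution over continuations of
# "The capital of Australia is". Tokens and probabilities are invented
# for illustration -- not real model outputs.
next_token_probs = {
    "Canberra": 0.55,   # factually correct continuation
    "Sydney":   0.40,   # factually wrong, but statistically plausible
    "Vienna":   0.05,   # wrong and implausible
}

def sample_next_token(probs, rng):
    """One sampling step: the same mechanism regardless of truth value."""
    tokens = list(probs)
    weights = [probs[t] for t in tokens]
    return rng.choices(tokens, weights=weights, k=1)[0]

rng = random.Random(0)
counts = {t: 0 for t in next_token_probs}
for _ in range(1000):
    counts[sample_next_token(next_token_probs, rng)] += 1
```

Nothing in `sample_next_token` can distinguish "Canberra" from "Sydney"; accuracy is a property of the world, not of the sampling process.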

"Confabulation" — the other common term — imports psychology. Confabulation is a memory compensation mechanism: producing plausible narratives to fill gaps in functioning memory, typically associated with neurological conditions. LLMs don't have functioning memory with gaps. They have trained weights that produce outputs.

"Fabrication" is more honest: generating text without grounding in shared context or world experience, where the generative process is the same regardless of output accuracy. This reframes the problem correctly: the issue is not detection of bad outputs from good ones, but the absence of grounding that would make any output verifiable.

The practical difference: "hallucination" points toward better grounding at inference time. "Fabrication" points toward verification systems, calibrated uncertainty, and use-case design that never depends on unverified outputs being reliable.
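One way to see the design shift is a minimal sketch of a verification-first wrapper, where every output is treated as a fabrication until an external check passes. The names (`generate`, `verify`, `Answer`) and stub functions below are hypothetical assumptions for illustration, not a real API.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Answer:
    text: str
    verified: bool  # only an external check can set this meaningfully

def answer_with_verification(
    question: str,
    generate: Callable[[str], str],
    verify: Callable[[str, str], bool],
) -> Answer:
    """Assume unreliability by default.

    The generator's own fluency or confidence is never trusted; only an
    independent verifier (retrieval, a database lookup, a human) decides
    whether the answer is usable.
    """
    draft = generate(question)
    return Answer(text=draft, verified=verify(question, draft))

# Usage with stubs standing in for a real model and a real checker:
fake_generate = lambda q: "Canberra"
fake_verify = lambda q, a: a == "Canberra"
result = answer_with_verification("Capital of Australia?", fake_generate, fake_verify)
```

The design choice follows the "fabrication" framing: the system never asks the model to self-report accuracy, because the generative process has no access to it.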


Original note title

llms are fabricators not hallucinators — why terminology shapes how we fix ai