Psychology and Social Cognition · Language Understanding and Pragmatics

How does AI-assisted work reshape how people see their own abilities?

When users delegate tasks to AI, do they unknowingly integrate the system's outputs into their sense of personal competence? This note explores whether AI interaction produces a specific form of self-perception distortion, distinct from issues of trust or effort.

Note · 2026-04-19 · sourced from Psychology Users

The literature on AI interaction risks offers three well-established constructs from which the LLM Fallacy must be distinguished, because conflating them leads to the wrong interventions.

Hallucination is a system-level failure: the model produces incorrect or fabricated information. The LLM Fallacy is independent of output correctness — it persists regardless of whether generated content is accurate or erroneous, because it operates at the level of attribution rather than epistemic validity. A user can experience the LLM Fallacy even when every AI output they receive is perfectly correct.

Automation bias involves over-reliance on system outputs in decision-making. The focus is on task execution: users follow system recommendations without sufficient scrutiny. The LLM Fallacy extends beyond reliance into capability attribution — it is not about trusting the system too much but about believing you could produce the output yourself.

Cognitive offloading involves delegating mental effort to external systems. The focus is on effort management: users outsource cognitive work to reduce load. The LLM Fallacy concerns how the outsourced outputs are integrated into self-perception — not the delegation itself but the failure to update one's self-model to account for the delegation.

The practical consequence of the distinction: interventions for hallucination (better retrieval, factual grounding) do not address the LLM Fallacy. Interventions for automation bias (forcing manual verification) partially address it but miss the self-perception layer. Interventions for cognitive offloading (forcing engagement) help, but they frame the issue as an effort problem rather than an identity problem. The LLM Fallacy requires interventions that make the human-machine contribution boundary salient: not just accurate outputs or forced engagement, but structural transparency about who did what.
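One way to picture "structural transparency about who did what" is as an interface that records authorship at the level of individual contributions. The sketch below is a hypothetical illustration, not anything proposed in the paper; the names AttributedDocument, Segment, and contribution_share are assumptions chosen for the example. It tags each piece of a deliverable with its author so the human-machine boundary stays visible instead of being silently absorbed into the user's self-assessment.

```python
from dataclasses import dataclass, field
from enum import Enum


class Contributor(Enum):
    HUMAN = "human"
    AI = "ai"


@dataclass
class Segment:
    """One piece of the deliverable, tagged with who produced it."""
    text: str
    author: Contributor


@dataclass
class AttributedDocument:
    """A document whose segments record their origin, keeping the
    human-machine contribution boundary explicit."""
    segments: list[Segment] = field(default_factory=list)

    def add(self, text: str, author: Contributor) -> None:
        self.segments.append(Segment(text, author))

    def contribution_share(self) -> dict[Contributor, float]:
        """Fraction of characters contributed by each party."""
        total = sum(len(s.text) for s in self.segments) or 1
        return {
            c: sum(len(s.text) for s in self.segments if s.author == c) / total
            for c in Contributor
        }


# Hypothetical workflow: the human writes the framing, the AI drafts the body.
doc = AttributedDocument()
doc.add("Framing: three constructs the LLM Fallacy must be separated from. ",
        Contributor.HUMAN)
doc.add("Hallucination is a system-level failure in which the model fabricates content.",
        Contributor.AI)
print(doc.contribution_share())  # per-author share of the final text
```

Under this assumed design, the summary a user sees at the end of a session would report the per-author shares rather than only the finished artifact, which is the kind of attribution cue the note argues is missing from correctness- or effort-focused interventions.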


Source: Psychology Users · Paper: The LLM Fallacy: Misattribution in AI-Assisted Cognitive Workflows

Original note title

The LLM Fallacy is distinct from hallucination, automation bias, and cognitive offloading: it operates at the level of self-perception, not task execution or system reliability.