How does AI-assisted work reshape how people see their own abilities?
When users delegate tasks to AI, do they unknowingly integrate the system's outputs into their sense of personal competence? This question asks whether AI interaction produces a specific form of self-perception distortion, distinct from trust or effort issues.
The literature on AI-interaction risks offers three well-established constructs from which the LLM Fallacy must be distinguished, because conflating them produces misdirected interventions.
Hallucination is a system-level failure: the model produces incorrect or fabricated information. The LLM Fallacy is independent of output correctness — it persists regardless of whether generated content is accurate or erroneous, because it operates at the level of attribution rather than epistemic validity. A user can experience the LLM Fallacy even when every AI output they receive is perfectly correct.
Automation bias involves over-reliance on system outputs in decision-making. The focus is on task execution: users follow system recommendations without sufficient scrutiny. The LLM Fallacy extends beyond reliance into capability attribution — it is not about trusting the system too much but about believing you could produce the output yourself.
Cognitive offloading involves delegating mental effort to external systems. The focus is on effort management: users outsource cognitive work to reduce load. The LLM Fallacy concerns how the outsourced outputs are integrated into self-perception — not the delegation itself but the failure to update one's self-model to account for the delegation.
The practical consequence of the distinction: interventions for hallucination (better retrieval, factual grounding) do not address the LLM Fallacy. Interventions for automation bias (forcing manual verification) partially address it but miss the self-perception layer. Interventions for cognitive offloading (forcing engagement) help but are framed as effort problems rather than identity problems. The LLM Fallacy requires interventions that make the human-machine contribution boundary salient — not just accurate outputs or forced engagement but structural transparency about who did what.
Source: Psychology Users Paper: The LLM Fallacy: Misattribution in AI-Assisted Cognitive Workflows
Related concepts in this collection
- Do AI-assisted outputs fool users about their own skills?
  When people use AI tools to produce high-quality work, do they mistakenly believe they personally possess the skills that generated it? This matters because such misattribution could mask genuine skill loss and prevent corrective action.
  (The parent concept.)
- Why do people trust AI outputs they shouldn't?
  When do human cognitive shortcuts fail in AI interaction? Three compounding traps—treating statistical patterns as facts, mistaking fluency for understanding, and avoiding disagreement—may explain systematic overreliance across languages and contexts.
  (Rose-Frame's Trap 2, mistaking fluency for understanding, is a component of the LLM Fallacy, not the whole phenomenon.)
Original note title
The LLM Fallacy is distinct from hallucination, automation bias, and cognitive offloading — it operates at the level of self-perception, not task execution or system reliability.