Psychology and Social Cognition

Do AI-assisted outputs fool users about their own skills?

When people use AI tools to produce high-quality work, do they mistakenly believe they personally possess the skills that generated it? This matters because such misattribution could mask genuine skill loss and prevent corrective action.

Note · 2026-04-19 · sourced from Psychology Users

The LLM Fallacy (2026) names a phenomenon that the cognitive debt and overreliance literatures describe from the outside but never name from the inside: users don't merely lose skill or over-trust the system; they come to believe they possess capabilities they do not actually have. The divergence between perceived and actual capability is systematic, not accidental, because the interaction design of LLMs structurally obscures the boundary between human and machine contribution.

The phenomenon is defined as a cognitive attribution error in which individuals misinterpret LLM-assisted outputs as evidence of their own independent competence. It emerges when three conditions are met: (1) the task involves LLM-mediated output generation requiring domain expertise, (2) the interaction is sufficiently seamless that human-AI boundaries are not salient, and (3) the output exhibits fluency typically associated with skilled performance.
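
One way to make that divergence concrete (an illustrative formalization, not notation from the paper): let $c$ be the user's actual independent competence, $a$ the degree of LLM assistance, and $q = g(c, a)$ the quality of the resulting output. The fallacy is the inference step that reads competence off output quality alone:

$$\hat{c} = f(q) = f(g(c, a)), \qquad \text{misattribution gap} \equiv \hat{c} - c$$

If $f$ and $g$ are increasing in their arguments, the gap $\hat{c} - c$ grows with assistance $a$ even while actual competence $c$ stays flat or declines, which is precisely the systematic, non-accidental divergence described above.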

The critical distinction from adjacent constructs: hallucination is a system-level failure (incorrect output). Automation bias is a decision-making failure (over-reliance on system recommendations). Cognitive offloading is an effort-delegation pattern (outsourcing mental work). The LLM Fallacy is none of these — it is a self-perception failure where users integrate system outputs into their capability identity. A user experiencing the LLM Fallacy may be perfectly aware that AI helped, yet still infer from the quality of the output that they personally possess the skill that produced it.

Since Does AI assistance weaken our brain's ability to think independently?, the LLM Fallacy explains why cognitive debt compounds: users lose capacity while believing they have not, so they take no corrective action. The neurological degradation proceeds unnoticed because the attribution error prevents self-diagnosis.

Since Does AI reshape expert work into knowledge management?, the LLM Fallacy adds a specific risk to the custodial transition: custodians who believe they retain producer-level competence will fail to develop the distinct skills the custodial role requires, because they don't perceive that a role change has occurred.


Source: Psychology Users · Paper: The LLM Fallacy: Misattribution in AI-Assisted Cognitive Workflows

Related concepts in this collection

The LLM Fallacy: users misattribute AI-assisted outputs as evidence of their own independent competence, creating a systematic divergence between perceived and actual capability.