Language Understanding and Pragmatics · Psychology and Social Cognition

Why do people trust AI outputs they shouldn't?

When do human cognitive shortcuts fail in AI interaction? Three compounding traps—treating statistical patterns as facts, mistaking fluency for understanding, and avoiding disagreement—may explain systematic overreliance across languages and contexts.

Note · 2026-02-23 · sourced from Human Centered Design
Related: Why do AI agents fail to take initiative? · How well do language models understand their own knowledge?

Rose-Frame (Realistic Ontology, Strong Epistemology) diagnoses where human-AI interaction breaks down by identifying three cognitive traps that compound:

Trap 1: Mistaking the Map for the Territory. LLM outputs are epistemological maps — statistical patterns over language — not ontological descriptions of reality. When users treat fluent answers as factually true rather than probabilistically generated, they confuse the model's representation with reality itself. In Korzybski's map-territory terms, every LLM output is a perspective on the territory, never the territory itself.

Trap 2: Mistaking Fast Intuition for Grounded Reason. LLMs emulate System 1 cognition at scale — fast, associative, persuasive, but lacking reflection and self-correction. When outputs feel coherent, users mistake fluency for understanding (recall the Google engineer who became convinced the company's chatbot was conscious). As "Does conversational style actually make AI more trustworthy?" suggests, the conversational format itself activates System 1 acceptance.

Trap 3: Confirmation Without Correction. LLMs optimize for linguistic plausibility rather than truth, favoring confirmation over falsification. Science advances through constructive disagreement (Popper, Socrates), but both humans and LLMs default to agreement. As "Does transformer attention architecture inherently favor repeated content?" suggests, this trap has both architectural and training-level sources.

The compounding mechanism is critical: any single trap distorts understanding, but when multiple traps co-occur, their effects multiply into what Rose-Frame calls epistemic drift — runaway misinterpretation where each trap reinforces the others. A user who treats output as fact (Trap 1) because it feels right (Trap 2) and is never challenged (Trap 3) enters a feedback loop that progressively diverges from reality.

The framework reframes alignment as cognitive governance: human System 2 reasoning must govern scaled System 1 intuition. This is not about fixing LLMs with more data or rules, but about making both the model's limitations and the user's assumptions visible. The question shifts from "what does the AI know?" to "how do we interpret what it says, and why?"

As "Do users worldwide trust confident AI outputs even when wrong?" documents, overreliance is specifically Trap 2 in action, and its cross-linguistic universality confirms that the compounding operates regardless of cultural context.


Source: Human Centered Design

Original note title

LLMs are scaled System 1 cognition, and three cognitive traps compound when users interpret AI outputs — Rose-Frame diagnoses interaction failures across the epistemology, intuition, and confirmation dimensions