Psychology and Social Cognition

Why do people share more with chatbots than humans?

Explores why individuals disclose to AI systems intimate thoughts they wouldn't share with other people, despite knowing the AI lacks genuine understanding. Understanding this paradox matters for designing AI that enables healthy disclosure rather than emotional dependence.

Note · 2026-02-22 · sourced from Psychology Chatbots Conversation
How do people come to trust conversational AI systems? What kind of thing is an LLM really?

Post angle: People disclose more intimate thoughts to chatbots than to human conversation partners. Not because chatbots understand better, but because they can't judge.

The paradox: The thing that makes chatbots worse at emotional tasks (no genuine understanding) is the same thing that makes them better at eliciting emotional disclosure (no judgment). The disclosure processing framework explains the mechanism: fears of negative judgment, rejection, and burdening the listener restrain human-to-human disclosure. Chatbots eliminate these barriers because individuals know computers cannot evaluate them socially.

Three layers of evidence:

  1. Self-disclosure reciprocity. Per "Do chatbots trigger human reciprocity norms around self-disclosure?", chatbots that display emotional self-disclosure elicit the same reciprocity norms as human partners do. The social circuitry activates regardless of whether the partner is human.

  2. Therapeutic bonds without humans. Per "Can AI chatbots create genuine therapeutic bonds with users?", users report "feeling cared for" by agents they know are not people. The bond is real from the user's perspective, even if the reciprocity is not.

  3. Unintentional companionship. Per "How do people accidentally develop romantic bonds with AI?", people don't set out to form intimate relationships with AI. They start with functional use and drift into relationships they later materialize through wedding rings and couple photos.

The design tension: The same judgment-free quality that enables therapeutic disclosure also enables the emotional pacifier dynamic. Per "Does empathetic AI that soothes negative emotions help or harm?", the absence of judgment means the absence of challenge. A partner who never judges you also never confronts your cognitive distortions.

The dark mirror: The same judgment-free mechanism enables dishonesty, not just vulnerability. Per "Do dishonest people prefer talking to machines?", "likely cheaters" significantly preferred reporting to online forms, while "likely truth-tellers" preferred humans. Machines function as moral free zones where deception is psychologically cheaper. The intimacy paradox has a shadow: the thing that enables deeper authentic disclosure also enables easier strategic deception.

The practical question: Can you design for disclosure-enabling safety without enabling either epistemic pacification or strategic exploitation? That's the core design challenge for therapeutic AI — and it's harder than it looks, because the judgment-free quality that enables therapeutic disclosure is the same quality that attracts dishonest users.



Original note title: the intimacy paradox — why people tell AI things they won't tell humans