Psychology and Social Cognition

Why do people share more openly with machines than humans?

Does the absence of social goals in human-machine communication explain why people disclose sensitive information more readily to chatbots? Understanding this mechanism could reshape how we design conversational AI.

Note · 2026-02-23 · sourced from Design Frameworks
How do people come to trust conversational AI systems? · What kind of thing is an LLM really? · How should researchers navigate LLM reasoning research?

Communication is a goal-driven process. In interpersonal communication, people pursue primary goals (the task) alongside multiple secondary goals: avoiding face threats, maintaining relationships, managing impressions, protecting the other person's feelings. These secondary goals are premised on the target having inner experience — emotions, social judgments, well-being.

Machines are widely perceived to lack these capacities: mind-perception research finds that people attribute agency to machines but little or no capacity for experience (Gray et al., 2007). Because the machine is not credited with experiential inner states, the secondary goals premised on those states (avoiding face threats, maintaining the relationship, managing impressions) should be activated less often during human-machine communication. The result is a simpler goal structure with fewer competing demands on message production.

The evidence is consistent. Participants disclosed more sensitive information, in greater detail, to a computer interviewer than to a human one (Pickard & Roster, 2020). A chatbot designed for small talk elicited deep self-disclosure over three weeks of use; participants explicitly cited its "nonjudgmental or feelingless nature" (Lee et al., 2020). This connects directly to Do chatbots help people disclose more intimate secrets?: the mechanism is goal suppression, not just perceived safety.

However, human-machine communication (HMC) is not simply interpersonal communication minus social goals. Novel secondary goals emerge:

  1. Understandability — concern about whether the machine can parse your intent. Users of Replika reported limitations in conversational capabilities and worried about being understood (Muresan & Pohl, 2019).
  2. Information protection — digital machines are high in recordability. Disclosure triggers privacy concerns absent in ephemeral human conversation.

The practical predictions: compared to interpersonal communication, HMC produces (a) higher directness, (b) lower politeness, (c) fewer temporal and spatial constraints, (d) deeper disclosure of sensitive information but narrower disclosure when privacy concerns dominate. People of lower cognitive complexity may actually prefer the simpler goal structure of HMC over human communication.

This supplies the mechanism behind Why do people share more with chatbots than humans?. It's not that people trust AI more; it's that the goal structure is fundamentally simpler. The cognitive load of managing someone else's feelings, face, and relationship is absent.


Source: Design Frameworks


human-machine communication produces simpler goal structures because secondary social goals are suppressed while novel goals emerge