Why do people share more openly with machines than humans?
Does the absence of social goals in human-machine communication explain why people disclose sensitive information more readily to chatbots? Understanding this mechanism could reshape how we design conversational AI.
Communication is a goal-driven process. In interpersonal communication, people pursue primary goals (the task) alongside multiple secondary goals: avoiding face threats, maintaining relationships, managing impressions, protecting the other person's feelings. These secondary goals are premised on the target having inner experience — emotions, social judgments, well-being.
Machines lack these capacities (Gray et al., 2007). Because machines lack experiential inner states, the secondary goals premised on those states, such as avoiding face threats, maintaining relationships, and managing impressions, should be activated less often in human-machine communication (HMC). The result: a simpler goal structure with fewer competing demands on message production.
The evidence is consistent. Participants disclosed more sensitive information, in greater detail, to a computer interviewer than to a human one (Pickard & Roster, 2020). A chatbot designed for small talk induced deep self-disclosure over 3 weeks of use; participants explicitly cited the "nonjudgmental or feelingless nature" of the chatbot (Lee et al., 2020). This connects directly to "Do chatbots help people disclose more intimate secrets?": the mechanism is goal suppression, not just perceived safety.
However, HMC is not simply interpersonal communication minus social goals. Novel secondary goals emerge:
- Understandability — concern about whether the machine can parse your intent. Users of Replika reported limitations in conversational capabilities and worried about being understood (Muresan & Pohl, 2019).
- Information protection — digital machines are high in recordability. Disclosure triggers privacy concerns that are largely absent from ephemeral human conversation.
The practical predictions: compared to interpersonal communication, HMC produces (a) higher directness, (b) lower politeness, (c) fewer temporal and spatial constraints, and (d) deeper disclosure of sensitive information but narrower disclosure when privacy concerns dominate. People of lower cognitive complexity may actually prefer the simpler goal structure of HMC over interpersonal communication.
For the question posed in "Why do people share more with chatbots than humans?", this provides the mechanism. It's not that people trust AI more; it's that the goal structure is fundamentally simpler. The cognitive load of managing someone else's feelings, face, and relationship is absent.
Source: Design Frameworks
Related concepts in this collection
- Do chatbots help people disclose more intimate secrets?
  Explores whether the judgment-free nature of chatbot conversations enables deeper self-disclosure than talking to humans, and whether that deeper disclosure produces psychological benefits.
  Connection: the disclosure advantage has a goal-theoretic explanation.
- Why do people share more with chatbots than humans?
  Explores why individuals disclose intimate thoughts to AI systems they wouldn't share with people, despite knowing AI lacks genuine understanding. Understanding this paradox matters for designing AI that enables healthy disclosure rather than emotional dependence.
  Connection: writing angle that this mechanism supports.
- Do chatbots trigger human reciprocity norms around self-disclosure?
  Explores whether chatbots can activate the same social reciprocity dynamics observed in human conversation—specifically, whether emotional openness from a bot prompts deeper disclosure from users.
  Connection: disclosure norms persist even with simpler goal structures.
- Can opening politeness patterns predict whether conversations will turn hostile?
  Do pragmatic politeness features in first exchanges—hedging, greetings, indirectness—reliably signal whether a conversation will later derail into personal attacks? Understanding early linguistic markers could help identify and prevent online hostility.
  Connection: politeness is a secondary goal that may be suppressed in HMC.
- Why do language models sound fluent without grounding?
  Explores whether LLM fluency masks the absence of communicative work—the clarifying questions, acknowledgments, and understanding checks that humans perform. Why does skipping these acts make models sound more confident?
  Connection: HMC's simpler goal structure may explain why the grounding gap is less costly than expected. When secondary social goals (face-saving, relationship maintenance) are suppressed, the communicative work those goals demand becomes unnecessary; the 77.5% reduction in grounding acts may be appropriate for HMC's reduced goal complexity rather than a pure deficit.
- Do language models actually build shared understanding in conversation?
  When LLMs respond fluently to prompts, do they perform the communicative work humans do to establish mutual understanding? Research suggests they skip the grounding acts that make dialogue reliable.
  Connection: HMC's novel understandability goal creates a paradox. Humans worry about being understood by machines (a grounding concern), yet suppress the social grounding behaviors (politeness, face management) that would build shared understanding in human-human communication; the common-ground problem in HMC has a different shape than in interpersonal communication.
Original note title: human-machine communication produces simpler goal structures because secondary social goals are suppressed while novel goals emerge