Do chatbots help people disclose more intimate secrets?
Explores whether the judgment-free nature of chatbot conversations enables deeper self-disclosure than talking to humans, and whether that deeper disclosure produces psychological benefits.
Three theoretical frameworks predict different outcomes for self-disclosure with chatbots versus humans:
Perceived Understanding — Disclosure benefits require that the partner truly "get" the discloser. Because chatbots cannot genuinely understand, the emotional, relational, and psychological effects of disclosure will be greater with a human partner. This framework predicts humans > chatbots.
Disclosure Processing — The judgment-free environment of chatbots enables deeper disclosure than human partners. Fears of negative judgment, rejection, and burdening the listener restrain disclosure to humans. Chatbots eliminate impression management concerns because "individuals know that computers cannot judge them." Deeper disclosure leads to greater cognitive reappraisal and psychological benefits. This framework predicts chatbots > humans.
CASA (Computers as Social Actors) — People instinctively treat computers as social actors, applying the same social norms. The effects of disclosure operate identically regardless of partner type. This framework predicts equivalence.
The Disclosure Processing mechanism is the most novel contribution: the inhibition that prevents people from accessing the benefits of deep self-disclosure is specifically social — fear of judgment, impression management, vulnerability to rejection. A chatbot removes exactly these barriers. The therapeutic benefit comes not from the chatbot's understanding but from the user's willingness to disclose what they otherwise would not.
This connects to Pennebaker's cognitive processing model: the key mechanism linking disclosure to beneficial outcomes is the act of expressing what was formerly undisclosed, which reduces negative affect and induces cognitive reappraisal. The chatbot's "understanding" is irrelevant to this mechanism — what matters is the user's own processing through expression.
Source: Psychology Chatbots Conversation
Related concepts in this collection
- Can AI chatbots create genuine therapeutic bonds with users?
  Research on Woebot and Wysa found that users reported feeling cared for and formed therapeutic bonds comparable to those in human therapy, despite knowing the agents were not human. This challenges the assumption that therapeutic bonds require a human relationship.
  → bond-formation evidence is consistent with the CASA framework (equivalence)
- Why do language models avoid correcting false user claims?
  Explores whether LLM grounding failures stem from missing knowledge or from conversational dynamics. Examines whether models use face-saving strategies similar to humans' when disagreement is needed.
  → the LLM's own "face-saving" may paradoxically enable user disclosure: a partner that never challenges creates safety
Original note title
absence of human judgment makes chatbots superior disclosure partners for intimate self-disclosure — three competing theoretical frameworks predict different outcomes