Psychological, Relational, and Emotional Effects of Self-Disclosure After Conversations With a Chatbot
The identity of a conversation partner, whether human or computer, matters. Previous work has found that the mere perceived identity of the partner as computer or human has profound effects, even when actual identity does not (Fox et al., 2015; Lucas, Gratch, King, & Morency, 2014). Perceived identity is critical to understand, especially from a theoretical perspective, because it gives rise to new processes, expectations of the partner, and effects that do not arise when the partner is always assumed to be human, as in previous work. This could alter disclosure processes and outcomes in fundamental ways. For example, people often avoid disclosing to others out of fear of negative evaluation. Because chatbots do not think or form judgments on their own, people may feel more comfortable disclosing to a chatbot than to a person, changing the nature of disclosure and its outcomes (Lucas et al., 2014). On the other hand, people assume that chatbots are worse at emotional tasks than humans are (Madhavan, Wiegmann, & Lacson, 2006), which may undermine emotional disclosure with chatbots.
As the conversational abilities of chatbots quickly improve (Zhang et al., 2018) and public interest grows (Markoff & Mozur, 2015; Romeo, 2016), it is critical to understand the emotional, relational, and psychological outcomes of disclosing to a chatbot. Extant research provides three theoretical frameworks that suggest different potential outcomes. First, a theoretical emphasis on perceived understanding suggests that disclosure will only have a beneficial impact when the partner is believed to have sufficient emotional capacity to truly understand the discloser, a capacity chatbots inherently lack. We refer to this as the perceived understanding framework. Second, research on conversational agents and disclosure intimacy, in contrast, suggests that disclosure will be even more beneficial with a chatbot than with a human partner, because chatbots encourage more intimate disclosure. We refer to this as the disclosure processing framework. Third, a media equivalency approach suggests that the effects of disclosure operate in the same way for human and chatbot partners. We refer to this as the computers as social actors (CASA) framework.
Perceived understanding framework
According to the theoretical model of perceived understanding (Reis, Lemay, & Finkenauer, 2017), feeling truly understood, or that the partner “‘gets’ [disclosers] in some fundamental way,” brings emotional, relational, and psychological benefits.
Perceived Understanding Hypothesis: Because of increased perceived understanding, emotional, relational, and psychological effects will be greater when disclosing to a person than to a chatbot.
Disclosure processing framework
A perspective we call the disclosure processing framework emphasizes the advantages that non-human partners may provide over human partners. This framework suggests that people will disclose more to chatbots and subsequently experience more positive outcomes. Fears of negative judgment commonly prevent individuals from disclosing deeply to other people. Worries about being rejected, judged, or burdening the listener restrain disclosure to other people, obviating potential benefits (Afifi & Guerrero, 2000). Disclosure intimacy, however, may increase when the partner is a computerized agent rather than another person, because individuals know that computers cannot judge them (Lucas et al., 2014). Computerized agents thus reduce impression management and evaluation concerns, encouraging more intimate disclosure.
The more intimately individuals disclose to a chatbot, the greater the psychological benefits they may accrue, compared to disclosing less intimately to another person. According to Pennebaker’s (1993) cognitive processing model, a key component of the link between cognitive changes and beneficial outcomes is the process by which putting the formerly undisclosed into words reduces negative affect, resolves unfinished cognitive processing, and induces reappraisal.
Disclosure Processing Hypothesis: Due to greater disclosure intimacy and cognitive reappraisal, emotional, relational, and psychological effects will be greater when disclosing to a chatbot than to a person.
CASA framework
The Computers as Social Actors (CASA) framework predicts a third possibility. According to this framework, people instinctively perceive, react to, and interact with computers as they do with other people, without consciously intending to do so (Reeves & Nass, 1996). This tendency is so pervasive that it is a foundational component of theoretical thinking about interactions between humans and computerized agents: it is considered “unlikely that one will be able to establish rules for human-agent/robot-interaction which radically depart from what humans know from and use in their everyday interactions.”
Individuals, for instance, are more cooperative toward a computer on the same “team” than toward a computer on a different team.
Equivalence Hypothesis: Perceived understanding, disclosure intimacy, and cognitive reappraisal processes arising from disclosure will lead to equivalent emotional, relational, and psychological effects whether the partner is a chatbot or a person.