Psychological, Relational, and Emotional Effects of Self-Disclosure After Conversations With a Chatbot

Paper · arXiv 2402.17937 · Published February 27, 2024
Tags: Psychology · Chatbots · Conversation · Emotions · Empathy · Design Frameworks

The identity of a conversation partner, as a human or computer, matters. Previous work has found that the mere perceived identity of the partner as computer or human has profound effects, even when actual identity does not (Fox et al., 2015; Lucas, Gratch, King, & Morency, 2014). Perceived identity is critical to understand, especially from a theoretical perspective, because it gives rise to new processes, expectations of the partner, and effects that do not arise when the partner is always assumed to be human, as in previous work. This could alter disclosure processes and outcomes in fundamental ways. For example, people often avoid disclosing to others out of a fear of negative evaluation. Because chatbots do not think or form judgments on their own, people may feel more comfortable disclosing to a chatbot than to a person, changing the nature of disclosure and its outcomes (Lucas et al., 2014). On the other hand, people assume that chatbots are worse at emotional tasks than humans (Madhavan, Wiegmann, & Lacson, 2006), which may negatively impact emotional disclosure with chatbots.

As the conversational abilities of chatbots quickly improve (Zhang et al., 2018) and public interest grows (Markoff & Mozur, 2015; Romeo, 2016), it is critical to understand the emotional, relational, and psychological outcomes of disclosing to a chatbot. Extant research provides three theoretical frameworks that suggest different potential outcomes. First, a theoretical emphasis on perceived understanding suggests that disclosure will only have a beneficial impact when the partner is believed to have sufficient emotional capacity to truly understand the discloser, which chatbots inherently cannot. We refer to this as the perceived understanding framework. Second, research on conversational agents and disclosure intimacy, in contrast, suggests that disclosure will be even more beneficial with a chatbot than a human partner, because chatbots encourage more intimate disclosure. We refer to this as the disclosure processing framework. Third, a media equivalency approach suggests that the effects of disclosure operate in the same way for human and chatbot partners. We refer to this as the computers as social actors (CASA) framework.

Perceived understanding framework

According to the theoretical model of perceived understanding (Reis, Lemay, & Finkenauer, 2017), feeling truly understood, or that the partner “‘gets’ [disclosers] in some fundamental way,” brings emotional, relational, and psychological benefits.

Perceived Understanding Hypothesis: Because of increased perceived understanding, emotional, relational, and psychological effects will be greater when disclosing to a person than to a chatbot.

Disclosure processing framework

A perspective we call the disclosure processing framework emphasizes the advantages that non-human partners may provide compared to human partners. This framework suggests that people will disclose more to chatbots and subsequently experience more positive outcomes. Fears of negative judgment commonly prevent individuals from disclosing deeply to other people. Worries about being rejected, judged, or burdening the listener restrain disclosure to other people, obviating potential benefits (Afifi & Guerrero, 2000). Disclosure intimacy, however, may increase when the partner is a computerized agent rather than another person, because individuals know that computers cannot judge them (Lucas et al., 2014). Computerized agents reduce impression management concerns, allowing for more open and intimate disclosure.

The more intimately individuals disclose to a chatbot, the greater the psychological benefits they may accrue, compared to disclosing less intimately to another person. According to Pennebaker’s (1993) cognitive processing model, a key component of the link between cognitive changes and beneficial outcomes is the process by which disclosing what was formerly undisclosed reduces negative affect and induces cognitive reappraisal.

Disclosure Processing Hypothesis: Due to greater disclosure intimacy and cognitive reappraisal, emotional, relational, and psychological effects will be greater when disclosing to a chatbot than to a person.

CASA framework

The Computers as Social Actors (CASA) framework predicts a third possibility. According to this framework, people instinctively perceive, react to, and interact with computers as they do with other people, without consciously intending to do so (Reeves & Nass, 1996). This tendency is so pervasive that it is a foundational component of theoretical thinking about interactions between humans and computerized agents:

“unlikely that one will be able to establish rules for human-agent/robot-interaction which radically depart from what humans know from and use in their everyday interactions”

Individuals, for instance, are more cooperative toward a computer on the same “team” than toward a computer on a different team.

Equivalence Hypothesis: Perceived understanding, disclosure intimacy, and cognitive reappraisal processes from disclosing to a partner will lead to equivalent emotional, relational, and psychological effects between chatbot and person partners.

Can an LLM-Powered Socially Assistive Robot Effectively and Safely Deliver Cognitive Behavioral Therapy? A Study With University Students

https://arxiv.org/abs/2402.17937

Cognitive behavioral therapy (CBT) is a widely used therapeutic method for guiding individuals toward restructuring their thinking patterns as a means of addressing anxiety, depression, and other challenges. We developed a large language model (LLM)-powered prompt-engineered socially assistive robot (SAR) that guides participants through interactive at-home CBT exercises. We evaluated the performance of the SAR through a 15-day study with 38 university students randomly assigned to interact daily with the robot or a chatbot (using the same LLM), or to complete traditional CBT worksheets throughout the duration of the study. We measured weekly therapeutic outcomes, changes in pre-/post-session anxiety measures, and adherence to completing CBT exercises. We found that self-reported measures of general psychological distress significantly decreased over the study period in the robot and worksheet conditions but not the chatbot condition. Furthermore, the SAR enabled significant single-session improvements for more sessions than the other two conditions combined. Our findings suggest that SAR-guided LLM-powered CBT may be as effective as traditional worksheet methods in supporting therapeutic progress from the beginning to the end of the study, and superior in decreasing user anxiety immediately after completing the CBT exercise.

Socially assistive robots (SARs) use social interaction to provide companionship and supportive service [9, 25, 50]. SARs have been effective at building rapport with users and encouraging behavior change and adherence to therapeutic practices [21]. Research has also shown that SARs with delegated authority are likely to elicit non-trivial adherence; furthermore, adherence is unaffected by the human-likeness of the robot [18, 21, 31].

Study participants were randomly assigned to complete at-home CBT exercises with a robot, with a chatbot, or through traditional worksheets.
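The three-arm assignment described above can be sketched in a few lines. This is a hypothetical illustration, not the paper's actual procedure: the condition labels are taken from the study description, but the balanced shuffle-and-cycle scheme and the seeding are assumptions for reproducibility.

```python
import random

# Condition labels from the study description; the assignment scheme
# below (shuffle, then cycle through conditions) is an assumption.
CONDITIONS = ["robot", "chatbot", "worksheet"]

def assign_conditions(participant_ids, seed=42):
    """Randomly assign participants to conditions while keeping
    group sizes as balanced as possible (sizes differ by at most 1)."""
    rng = random.Random(seed)  # fixed seed for a reproducible assignment
    ids = list(participant_ids)
    rng.shuffle(ids)
    return {pid: CONDITIONS[i % len(CONDITIONS)] for i, pid in enumerate(ids)}

# 38 participants, as in the study
assignment = assign_conditions(range(38))
```

With 38 participants, this yields groups of 13, 13, and 12; a real study would typically also stratify on relevant covariates rather than shuffling alone.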

Psychological distress significantly decreased in the robot and worksheet conditions.

Completion of CBT homework enhances therapy outcomes. However, adherence to homework is low.