From speaking like a person to being personal: The effects of personalized, regular interactions with conversational agents

Paper · Source
Psychology Chatbots ConversationDesign Frameworks

human-AI interactions (Sundar, 2020). Interactions with these agents may have a shorter-term, transactional nature, for example checking the status of an order with a customer service chatbot, or a longer-term outlook, such as daily discussions with an agent about one’s health.

Earlier research has shown that one-shot interactions with these agents may induce effects similar to conversations with other humans, for example in emotional self-disclosure (Ho et al., 2018). Research also shows that the way in which these agents are designed, such as the visual, identity, or conversational cues used, can influence user perceptions (Go & Sundar, 2019) and behavior (Bellur & Sundar, 2017; Sundar & Kim, 2019) in one-shot interactions. Knowledge, however, is still scarce about the effects of longer-term, regular interactions with these agents. This is a critical gap in our understanding, as we simply do not know enough about whether effects found in one-shot studies hold across multiple interactions, even though chatbots are designed not only for short-term purposes but often also for medium- and longer-term interactions (Nißen et al., 2022). Evidence from the few recent longitudinal studies points to the need for further investigation: with the chatbot Mitsuku, for example, social processes related to relationship formation decreased throughout interactions, likely due to a novelty effect wearing off (Croes & Antheunis, 2020).

adapting to individual responses and response patterns, which can be referred to as personalization and is often associated with chatbots designed for medium- and longer-term interactions (Nißen et al., 2022). Each additional interaction between a user and an agent means that the agent learns more about the user (and can adapt itself in future interactions) and that the user may expect more from the agent (given information provided earlier). Given these dynamics, we investigate the effects of personalization, in line with the vast body of research demonstrating how it can influence a variety of cognitive, affective, and behavioral outcomes.
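To make this dynamic concrete, here is a minimal, purely illustrative Python sketch (not the agent or study design described in the paper; all names are hypothetical) contrasting an agent that accumulates what a user disclosed across sessions and folds it into later turns with one that treats every session as a one-shot exchange:

```python
# Hypothetical sketch: a personalized agent keeps memory across sessions,
# a non-personalized agent does not. Not the authors' implementation.

class PersonalizedAgent:
    def __init__(self):
        self.memory = {}  # accumulates user disclosures across sessions

    def interact(self, session: int, user_input: str) -> str:
        # Learn something new from this session.
        self.memory[f"session_{session}"] = user_input
        if session == 1:
            return "Nice to meet you. Tell me about your day."
        # Adapt the next message based on what was disclosed earlier.
        earlier = self.memory[f"session_{session - 1}"]
        return f"Last time you mentioned '{earlier}'. How did that go?"


class NonPersonalizedAgent:
    def interact(self, session: int, user_input: str) -> str:
        # No memory: every interaction is treated as a one-shot exchange.
        return "Tell me about your day."


if __name__ == "__main__":
    personalized = PersonalizedAgent()
    generic = NonPersonalizedAgent()
    for day, disclosure in enumerate(["went for a run", "slept badly"], start=1):
        print("personalized:", personalized.interact(day, disclosure))
        print("generic:     ", generic.interact(day, disclosure))
```

The contrast in the second session (the personalized agent refers back to the earlier disclosure, the generic one repeats itself) is the kind of between-session adaptation the paragraph above describes.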

A rich body of research based on the Computers Are Social Actors (CASA) paradigm has established that users tend to consider computers as autonomous sources of information (Nass & Steuer, 1993; Sundar & Nass, 2000) and treat them as independent social actors (Reeves & Nass, 1996). This tendency is seen in recent conversational agent research, which finds support for the CASA framework (e.g., Ho et al., 2018), and in earlier studies showing that users can hold longer-term relationships with technology (Bickmore & Picard, 2005).

Importantly, the capabilities of the agents and the overall experience of users with technology have evolved since CASA was first proposed, making it pressing to update its findings with research in the current environment (Gambino et al., 2020). On the one hand, conversational agents are now easily accessible, on social media or messaging platforms and as apps on the smartphones where users spend a substantial share of their days. This ease of access helps users make these agents part of their daily routines, facilitating repeated interactions. On the other hand, these agents can collect, analyze, and make use of growing amounts of data, given the adoption of AI in communication (Sundar, 2020), and can offer experiences that are far more personalized than earlier iterations of these agents could.

Given changes in the overall capabilities of the technology and in the user experience with these agents, we focus on three sets of outcomes that have been frequently researched in the context of conversational agents, though primarily in one-shot interactions. First, we examine the effects of repeated interactions and personalization on how individuals perceive the agent itself. Second, as research often highlights interactivity in user engagement with technology (Bellur & Sundar, 2017; Go & Sundar, 2019), we focus on perceptions about the interaction. Third, as these agents are integrated into our routines, the information they provide and, ultimately, their influence on behavior must be understood. Therefore, we assess how repeated interactions and personalization influence perceptions about the information provided by the agent and user behavior(al intentions). In the sections that follow, we briefly outline these outcomes, and then our expectations for the effects of repeated interactions and personalization on them.

assumption that if these agents are to “assume roles hitherto fulfilled by humans, it is necessary to make their interactions as similar as possible to those of human beings” (Go & Sundar, 2019, p. 304). In this sense, perceived anthropomorphism or humanness are important because they influence “how people expect those agents to behave in the future, and on people’s interpretations of these agents’ behavior in the present” (Epley et al., 2007, p. 864).

Another critical aspect in agent evaluations is the extent to which the user feels she can trust the agent.

For conversational agents, trust has been further specified as a “sense that technology is able to perform requested tasks, works reliably and predictably, and therefore is dependable for the long term”

In the context of this study, we consider perceptions about the interaction as the way in which the user evaluates the conversation that she had with the agent. To do so, we first focus on dialogue quality, which assesses the success of message interactivity, defined as the “extent to which the desired objective of interactivity was realized” (under the term perceived dialogue: Sundar et al., 2016, p. 603). Often studied (e.g., Go & Sundar, 2019; Sundar et al., 2016), it encompasses both how much the user felt she was involved in a mutual conversation and the efficiency of the interaction.

Besides having a relational role (as in the case of Mitsuku), conversational agents are often used for functional purposes, for example providing information or recommendations (Brandtzaeg & Følstad, 2017). As such, in addition to investigating how users perceive the interaction with an agent, we assess how they perceive the information provided by the agent as a result of the interaction (i.e., about the recommendation made). First, we focus on personal relevance, i.e., “the extent to which people view some communicative stimuli being related or applicable to a person and/or situation” (Jensen et al., 2012, p. 854). Earlier research suggests relevance is one of the key drivers of personalization (tailoring) effects (e.g., Jensen et al., 2012; Lustria et al., 2016) and has also examined it in the context of conversational agents (e.g., as a moderator: Van den Broeck et al., 2019).

Second, even though the information provided may be considered relevant, it remains an open question whether a user will consider it credible when it comes from an agent. As such, we investigate information credibility perceptions, a multidimensional construct encompassing the trustworthiness, believability, accuracy, completeness, and (lack of) bias of the information provided, in line with early online research

First, we focus on self-disclosure:

Second, given the potential for these agents to play a functional role in information provision, we focus on recommendation adherence intention, as outlined below.

a user may perceive that the information provided by an agent in a personalized interaction is more relevant and credible.

As such, we propose: H3. A personalized agent will lead to higher levels of perceived (a) personal relevance and (b) information credibility, compared to a non-personalized agent.

As such, we expect personalization to have a positive effect on agent perceptions. For anthropomorphism, this may be explained by the sociability afforded in the interactions by personalization. In addition, we expect that trust will be higher for an agent that personalizes the interaction as, we argue, personalization may be considered a sign of social intelligence (i.e., the ability to learn from earlier conversations with another entity) and of performance, both of which have been found to enhance trust in conversational agents (Rheu et al., 2021).

We therefore propose: H1. A personalized agent will lead to higher levels of (a) perceived anthropomorphism and (b) trust, compared to a non-personalized agent.

H2. A personalized agent will lead to higher levels of (a) dialogue quality and (b) perceived privacy risks compared to a non-personalized agent.

H4. A personalized conversational agent will lead to higher levels of (a) intended and actual self-disclosure and (b) recommendation adherence intention, compared to a non-personalized agent.