Chatbot vs. Human: The Impact of Responsive Conversational Features on Users’ Responses to Chat Advisors
Responsiveness, in the form of backchanneling cues, is a promising conversational feature. It has been positively linked to organizational and relational outcomes in prior research on human–human (Davis & Perkowitz, 1979; De Ruyter & Wetzels, 2000) and human–robot interactions (Birnbaum et al., 2016), but it has hardly been studied in chatbots, as a recent systematic review by Van Pinxteren et al. (2020) shows. We therefore consider it important to examine the effects of chat agents’ responsiveness on user responses. In three online vignette experiments, we address the research question: “To what extent do agent type (chatbot vs. human) and responsiveness influence users’ responses to chat advisors?” Additionally, we examine the underlying mechanisms at three levels, each relating to a different aspect of the interaction (i.e., the interaction in general, its dialogic nature, and the content of the conversation).
In line with the Technology Acceptance Model, a prominent model for explaining users’ acceptance and usage of emerging technologies (Venkatesh, 2000), attitude, perceived enjoyment, and satisfaction are considered antecedents of user acceptance, operationalized here as the intention to use this way of communicating with the organization.
We use the Media Are Social Actors (MASA) paradigm as an overarching theoretical framework for examining the impact of social cues on user responses to media (Lombard & Xu, 2021). The MASA paradigm extends the Computers Are Social Actors (CASA) paradigm, which states that people apply social rules to their interactions with computers (Nass & Moon, 2000) by considering a medium’s social cues and the social signals these elicit to users as crucial for activating user responses.
Lombard and Xu adopt Fiore et al.’s definition of social cues as “features salient to observers because of their potential as channels of useful information” (2013, p. 2). In contrast, social signals refer to the receiver’s interpretation of the sender medium’s social cues (Fiore et al., 2013). Examples of a medium’s social cues include gestures, motion, and language use. These can send out social signals of social identity, interactivity, and responsiveness to users (Lombard & Xu, 2021). This research focuses on cues signaling identity (agent type: human vs. chatbot) and responsiveness (i.e., the use or nonuse of verbal backchanneling cues).
Thus, in this context, people might prefer communicating with a human because they consider humans more flexible, adaptable, and sympathetic than a machine—in our case, a chatbot. Previous studies found that machine-like cues hinder positive agent evaluations, while human-like cues promote positive assessments; for example, users rated human agents (vs. chatbots) higher on expected likability and social presence (Spence et al., 2014) and on social attraction (Lew & Walther, 2023). Given this theory and prior research, we expect participants to have a more positive attitude toward a human advisor.
Contrary to the hypotheses, neither human identity cues nor responsive behavior positively affected user responses. The null effects of responsiveness are striking, given that the responsiveness manipulation check showed significant effects. Moderation analyses did not yield significant results. The sample size fell slightly below the target due to our exclusion criterion. Because we had based our power considerations on a small interaction effect that Go and Sundar (2019) found in a study in which participants interacted directly with chat agents, the effect sizes in our vignette design could be even smaller.