Computer says “No”: The Case Against Empathetic Conversational AI

Paper · arXiv 2212.10983 · Published December 21, 2022
Psychology · Empathy

Emotions are an integral part of human cognition and they guide not only our understanding of the world but also our actions within it. As such, whether we soothe or flame an emotion is not inconsequential. Recent work in conversational AI has focused on responding empathetically to users, validating and soothing their emotions without a real basis. This AI-aided emotional regulation can have negative consequences for users and society, tending towards a one-noted happiness defined as only the absence of “negative” emotions. We argue that we must carefully consider whether and how to respond to users’ emotions.

Empathy, like all emotions, is likely a uniquely human trait, and systems that feign it are in effect feigning humanity. The ethical issues surrounding anthropomorphism have been discussed at length and are beyond the scope of this paper (Salles et al., 2020; Bryson, 2010).

Empathy requires an ability to both understand and share another’s emotions. As such, responding empathetically assumes that the system is able to correctly identify the emotion and that it is able to feel the emotion itself.

Third, even if conversational AI were to correctly identify the user’s emotions and perform empathy, we should ethically question the motives and outcomes behind such an enterprise. Svikhnushina et al. (2022) put forward a taxonomy of empathetic questions in social dialogues, paying special attention to the role questions play in regulating the interlocutor’s emotions. They argue that effective question asking plays a crucial role in successful chatbots because questions are often used by the speaker to express “empathy” and attentiveness. Here we highlight the ethical concerns that arise from questions characterised by emotion-regulation functions targeted at the user’s emotional state.

What happens if it gets it wrong? It depends on the type of mistake: a) the chatbot fails to put into effect a question’s intent, which would be ethically inconsequential; or b) it amplifies or minimises an inappropriate emotion. The second case is the problem we will focus on, arguing that emotional regulation has no place in conversational AI and that, as such, empathetic responses are deeply morally problematic.

Regardless of the emotion model one picks, emotions play important roles, both epistemic and conative (Curry, 2022). They perform at least three epistemic roles: (1) they signal to the individual experiencing the emotion what she herself values and how she sees the world (e.g., if you envy your colleague’s publications, this tells you that you value publications and deem yourself similar enough to your colleague to compare yourself to her (Protasi, 2021)); (2) they signal to others how we see the world; and (3) emotional interactions are invaluable sources of information for third-party observers, since they tell us what the members of the interaction value. For example, (1) when you grieve, you signal to yourself and anyone observing that you deem yourself to have lost something of value. It is conceivable that you were unaware up to that point that you valued what you lost; this is captured by the saying “you don’t know what you have till it’s gone.” Furthermore, (2) your friends and family may learn something about you by observing your grief. They too may not have known how much something meant to you. Finally, (3) an observer may also learn about the dynamics of grief (for example, whether it is appropriate to express it) by observing whether or not your family validates your grief.

Empathy facilitates engagement through the development of social relationships, affection, and familiarity. Furthermore, for Svikhnushina et al. (2022), empathy is required to enable chatbots to ask questions with emotion-regulation intents. For example, questions may be used to amplify the user’s pride or to de-escalate the user’s anger or frustration.

While defining empathy as the “reactions of one individual to the observed experiences of another” (De Carolis et al., 2017) tells us very little about the process by which a human being, let alone conversational AI, may do this, what we take issue with is what chatbots hope to do with that empathy. In other words, if, for the sake of argument, we presume that conversational AI is able to accurately identify our emotions, the issue of how we deploy empathy is of huge ethical relevance.

If we buy Bloom’s argument, then conversational AI should consider not imitating human beings but becoming agents of rational compassion.

However, our problem is not necessarily with empathy per se, but rather with the explicit functions conversational AI hopes to achieve with it, namely to enhance engagement, to inflate emotions deemed positive, and to soothe emotions deemed negative (e.g., Svikhnushina et al., 2022). Our claim is that we ought to think carefully about the consequences of soothing negative emotions only because we have a bias against them. Not only is this approach based on a naive understanding of emotions, it also fails to recognise the importance of human beings being allowed to experience and express the full spectrum of emotions. One ought not to experience negative emotions because there is nothing to be upset about, not because we have devised an emotional pacifier.

When you talk to a friend, they will decide whether to soothe or amplify your emotions based not just on the situation but also on who they deem you to be. If they think you are someone who has a hard time standing up for yourself, they will amplify your anger to encourage you to fight for yourself; but if they think you are someone who leans too much on arrogance, they will de-escalate your sense of pride, even if, all things being equal, your pride on that occasion was warranted. Hence, not only would a conversational AI require prior knowledge of the interlocutor’s character, but it would also have to decide which character traits are desirable.

For example, should the chatbot amplify a user’s pride that their nephew did very well in maths when in fact we know the nephew cheated?