Empathetic Persuasion: Reinforcing Empathy and Persuasiveness in Dialogue Systems
We develop an empathetic persuasive dialogue system by fine-tuning a Maximum Likelihood Estimation (MLE)-based language model in a Reinforcement Learning (RL) framework. To provide feedback to our RL agent, we design an effective and efficient reward function comprising consistency, repetitiveness, emotion, and persuasion rewards, which together encourage consistency, non-repetitiveness, empathy, and persuasiveness in the generated responses.
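As a minimal illustration of how such a composite reward might be assembled, the sketch below combines the four per-response signals into a single scalar for the RL agent. The weighted-sum form, the weights, and the function names are assumptions for exposition; the paper does not specify how the individual rewards are aggregated.

```python
from dataclasses import dataclass


@dataclass
class RewardWeights:
    """Hypothetical mixing weights for the four reward components."""
    consistency: float = 0.25
    non_repetition: float = 0.25
    emotion: float = 0.25
    persuasion: float = 0.25


def combined_reward(consistency: float,
                    non_repetition: float,
                    emotion: float,
                    persuasion: float,
                    w: RewardWeights = RewardWeights()) -> float:
    """Weighted sum of per-response reward signals, each assumed in [0, 1].

    The scalar returned here would be fed back to the RL fine-tuning loop
    as the reward for one generated response.
    """
    return (w.consistency * consistency
            + w.non_repetition * non_repetition
            + w.emotion * emotion
            + w.persuasion * persuasion)


# Example: a response scoring 0.8 on consistency, 0.6 on non-repetition,
# 0.9 on emotion, and 0.7 on persuasion.
score = combined_reward(0.8, 0.6, 0.9, 0.7)
```

With equal weights this is simply the mean of the four signals; in practice the weights would be tuned so that no single reward (e.g. persuasion) dominates the others.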
While conversing with persuasive dialogue agents, beyond fluent and meaningful response generation, a high-quality conversation often depends on understanding and acknowledging the feelings implied by the conversing partner. People are more likely to engage in the conversation when they are motivated by empathetic responses. These persuasive responses can be associated with different emotions, in consonance with the way people perceive and think about the world. For instance, in Figure 1, while the strike-through response is persuasive, the green-box response may be more engaging, as it connects with the end user and acknowledges the underlying emotion of caring.
Mishra et al. (2022a) designed different rewards to reinforce politeness in a dialogue agent's responses. However, there is a subtle interdependency among personalization techniques such as empathy, sentiment, and persuasion, which can be exploited to generate more human-like responses.
Our current work focuses on incorporating emotions both to engage end users empathetically and to persuade them to donate.