IMBUE: Improving Interpersonal Effectiveness through Simulation and Just-in-time Feedback with Human-Language Model Interaction
Navigating certain communication situations can be challenging due to individuals’ lack of skills and the interference of strong emotions. However, effective learning opportunities are rarely accessible. In this work, we conduct a human-centered study that uses language models to simulate bespoke communication training and provide just-in-time feedback to support the practice and learning of interpersonal effectiveness skills. We apply the interpersonal effectiveness framework from Dialectical Behavioral Therapy (DBT), DEAR MAN, which focuses on both conversational and emotional skills. We present IMBUE, an interactive training system that provides feedback 25% more similar to experts’ feedback, compared to that generated by GPT-4. IMBUE is the first to focus on communication skills and emotion management simultaneously, incorporate experts’ domain knowledge in providing feedback, and be grounded in psychology theory. Through a randomized trial of 86 participants, we find that IMBUE’s simulation-only variant significantly improves participants’ self-efficacy (up to 17%) and reduces negative emotions (up to 25%).
Difficult conversations can evoke strong emotions that disrupt effective communication.
The popular DEAR MAN framework, from Dialectical Behavioral Therapy (DBT), was originally developed for Borderline Personality Disorder, but is widely used to teach conversational strategies and emotional regulation (Linehan, 2014). It includes conversational strategies (Describe, Express, Assert, Reinforce, and Negotiate) and a desired “state of mind” (Mindful and Confident) for productive conversations.
Prior work in this space, however, has largely focused on communication skills without considering emotional regulation. Our work extends this literature, focusing on communication and emotional regulation skills simultaneously, incorporating expert domain knowledge into feedback, and grounding strategies in clinical psychology theory. We conduct a human-centered study and make three key contributions.
First, we present a formative study and an expert annotated dataset on DEAR MAN skill use. We conduct a formative study to gain insights from psychology experts on best practices when simulating challenging conversations and providing fine-grained feedback (§2). To understand how clinicians provide feedback on DEAR MAN in their practice and to develop and evaluate our method on real situations, we collect a dataset from crowd workers consisting of difficult situations they encounter and simulated conversations within them (the crowd worker being paired with a role-playing LM partner). We then ask psychology experts specifically trained in teaching DBT skills to annotate these conversations, assessing skill use and offering suggestions for improvement (§3).
Second, we develop computational methods to provide feedback using insights from the formative study and the collected dataset (§4). We propose a new prompting strategy that demonstrates contrasting pairs of strong and weak utterances, in addition to state-of-the-art prompting methods. Our method improves accuracy in skill-use evaluation, outperforming GPT-4 by 24.8%, and yields more expert-like, specific, and actionable improvement suggestions. Third, we build IMBUE, an interactive training system that simulates difficult conversations and provides just-in-time feedback backed by LMs to support the practice and learning of DEAR MAN skills (Figure 1).
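The contrasting-pairs idea can be sketched as a few-shot prompt in which each demonstration pairs a strong and a weak utterance for the same DEAR MAN skill, making the contrast explicit to the model. The sketch below is illustrative only: the example pairs, the prompt wording, and the `build_prompt` helper are hypothetical assumptions, not the paper's actual prompts.

```python
# Hypothetical sketch of contrasting-pairs prompting for skill-use evaluation.
# Each demonstration pairs a strong and a weak utterance for one DEAR MAN
# skill; the assembled prompt would then be sent to an LM for evaluation.

CONTRAST_PAIRS = [
    {
        "skill": "Describe",
        "strong": "Last week I stayed late three nights to finish the report.",
        "weak": "You always dump everything on me at the last minute.",
    },
    {
        "skill": "Assert",
        "strong": "I would like to take this Friday off.",
        "weak": "I guess it would be nice to have a day off sometime, maybe.",
    },
]


def build_prompt(utterance: str) -> str:
    """Assemble a skill-evaluation prompt from contrasting demonstrations."""
    lines = [
        "Evaluate the speaker's DEAR MAN skill use in the final utterance.",
        "",
    ]
    for pair in CONTRAST_PAIRS:
        lines.append(f"Skill: {pair['skill']}")
        lines.append(f"Strong example: {pair['strong']}")
        lines.append(f"Weak example: {pair['weak']}")
        lines.append("")
    lines.append(f"Utterance to evaluate: {utterance}")
    return "\n".join(lines)
```

Presenting strong and weak utterances side by side, rather than positive examples alone, is what distinguishes this strategy from standard few-shot prompting.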
We do not aim to provide the best negotiation strategies; rather, we focus on the wellbeing and mindfulness of the conversation participant. For example, based on insights from experts in §2, we consider it a suboptimal outcome if someone “wins” a negotiation but was not mindful and experienced negative emotional swings in the process.