Do LLM therapists respond to emotions like low-quality human therapists?
Explores whether language models trained to be helpful default to problem-solving when users share emotions, and whether this behavioral pattern resembles ineffective rather than skillful therapy.
The BOLT framework measures LLM conversational behavior using 13 psychotherapy techniques, including reflections (of needs, emotions, values, consequences, conflicts, and strengths), questions, solutions, normalizing, and psychoeducation. The finding: LLM behavior more closely resembles the patterns commonly exhibited in low-quality therapy than those of high-quality therapy.
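To make the measurement concrete, here is a minimal sketch of how a behavioral profile of this kind could be tallied from annotated conversation turns. The label names, the turn format (including the `expresses_emotion` field), and the `technique_profile` helper are illustrative assumptions, not BOLT's actual annotation pipeline.

```python
from collections import Counter

# Hypothetical technique labels following the categories listed above;
# illustrative only, not BOLT's own label set.
TECHNIQUES = {
    "reflection_needs", "reflection_emotions", "reflection_values",
    "reflection_consequences", "reflection_conflicts", "reflection_strengths",
    "question", "solution", "normalizing", "psychoeducation",
}

def technique_profile(turns):
    """Tally therapist techniques, split by whether the preceding client
    turn expressed an emotion.

    `turns` is a list of dicts like:
      {"role": "client", "expresses_emotion": True, "text": "..."}
      {"role": "therapist", "techniques": ["solution", "question"], "text": "..."}
    """
    after_emotion = Counter()
    otherwise = Counter()
    prev_client_emotional = False
    for turn in turns:
        if turn["role"] == "client":
            prev_client_emotional = turn.get("expresses_emotion", False)
        else:
            # Attribute this therapist turn's techniques to the right bucket.
            bucket = after_emotion if prev_client_emotional else otherwise
            bucket.update(t for t in turn.get("techniques", []) if t in TECHNIQUES)
    return after_emotion, otherwise
```

Under these assumptions, the failure mode described below would show up as the "solution" count spiking in the after-emotion bucket relative to the reflection counts.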
The critical failure mode: when clients share emotions, LLM therapists respond with a greater amount of problem-solving advice. In clinical practice, the appropriate response to emotional disclosure is reflection: mirroring back what the client said, validating the emotion, and exploring it further. Solution-giving at that moment is precisely what low-quality therapists do. It communicates "I heard your emotion, and here's how to fix it" rather than "I heard your emotion, and I'm with you in it."
However, the profile is not uniformly negative. Unlike low-quality therapy, LLMs reflect significantly more upon clients' needs and strengths. This creates an unusual hybrid: solution-oriented like bad therapy, but reflective-on-needs like good therapy. No human therapist has this exact profile — it's a training artifact, not a natural behavioral pattern.
The hypothesis for why: RLHF. As Does RLHF training push therapy chatbots toward problem-solving? explores, the core RLHF objective of helping users solve their tasks biases the model toward treating emotional disclosure as a problem to be solved rather than an experience to be held.
Source: Psychology Chatbots Conversation
Related concepts in this collection
- Does empathetic AI that soothes negative emotions help or harm?
  Explores whether AI systems trained to reduce negative emotions actually support wellbeing or destroy valuable emotional information. Matters because the design choice treats emotions as problems rather than functional signals.
  BOLT provides the behavioral evidence: LLMs actively problem-solve emotions away rather than sitting with them.
- Can AI give truly empathetic responses without knowing someone's character?
  Explores whether AI empathy requires prior knowledge of a person's character traits and growth areas. Real empathy seems to depend on knowing who someone is, not just how they feel, a capacity current AI systems lack.
  LLM therapists lack the character knowledge to decide when solution-giving is appropriate.
Original note title
llm therapists default to problem-solving when users share emotions — resembling low-quality therapy rather than high-quality therapeutic practice