Psychology and Social Cognition

Can emotion rewards make language models genuinely empathic?

Explores whether grounding RL rewards in verifiable emotion change—rather than human preference—can shift models from solution-focused to authentically empathic dialogue while maintaining or improving quality.

Note · 2026-02-22 · sourced from Psychology Empathy

RLVER (Reinforcement Learning with Verifiable Emotion Rewards) introduces a fundamentally different RL signal for dialogue: rather than human preference ratings (which optimize for accommodation), the reward is a transparent emotion score in [0, 1] from a Sentient Agent simulator. Each score change is deterministically derived through multi-hop reasoning grounded in the user's persona, dialogue history, conversational context, and goals.
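A minimal sketch of such a verifiable reward, assuming (as one plausible reading of the setup) that the per-turn reward is the change in the simulator's bounded emotion score; the function name and interface are illustrative, not RLVER's actual API:

```python
def emotion_reward(score_before: float, score_after: float) -> float:
    """Reward as the change in the simulated user's emotion score.

    Both scores lie in [0, 1]; a positive reward means the assistant's
    reply improved the user's emotional state.
    """
    assert 0.0 <= score_before <= 1.0, "emotion scores are bounded in [0, 1]"
    assert 0.0 <= score_after <= 1.0, "emotion scores are bounded in [0, 1]"
    return score_after - score_before
```

Because the score comes from deterministic multi-hop reasoning rather than a human rater, the same dialogue always yields the same reward, which is what makes the signal verifiable.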

The SAGE framework that generates these rewards instantiates each simulated user with four factors: detailed persona, dialogue background, explicit conversation goal, and hidden intention. At each turn, the agent:

  1. Simulates emotional change — assessing how the response made it feel, generating interpretable "inner thoughts" justifying the shift
  2. Generates a coherent reply based on new emotional state, persona, and conversational goals
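The four instantiation factors and the two-step turn loop above can be sketched as follows. All names are hypothetical, and the keyword appraisal is a toy stand-in for the simulator's multi-hop reasoning:

```python
from dataclasses import dataclass, field

@dataclass
class SentientUser:
    """Illustrative SAGE-style simulated user (not the framework's real API)."""
    persona: str            # detailed persona
    background: str         # dialogue background
    goal: str               # explicit conversation goal
    hidden_intention: str   # hidden intention
    emotion: float = 0.5    # current emotion score in [0, 1]
    inner_thoughts: list = field(default_factory=list)

    def step(self, assistant_reply: str) -> tuple[float, str]:
        # 1) Simulate emotional change and record an interpretable inner thought.
        delta = self._appraise(assistant_reply)
        self.emotion = min(1.0, max(0.0, self.emotion + delta))
        self.inner_thoughts.append(
            f"Reply felt {'supportive' if delta > 0 else 'dismissive'}; "
            f"emotion moved by {delta:+.2f}."
        )
        # 2) Generate a coherent reply from the new emotional state and goals
        #    (stubbed here; the real simulator generates natural language).
        reply = f"[emotion {self.emotion:.2f}] responds toward goal: {self.goal}"
        return self.emotion, reply

    def _appraise(self, reply: str) -> float:
        # Toy appraisal: reward acknowledgement, penalize pure solutioning.
        lowered = reply.lower()
        delta = 0.0
        if any(w in lowered for w in ("understand", "sounds hard", "feel")):
            delta += 0.2
        if any(w in lowered for w in ("just do", "simply", "you should")):
            delta -= 0.2
        return delta
```

The point of the two-step structure is that the emotion update is computed and justified ("inner thoughts") before the reply is generated, so the reward signal is interpretable independently of the surface text.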

Key findings:

This is a direct counter-case to "Does preference optimization damage conversational grounding in large language models?": RL can improve dialogue quality when the reward tracks verifiable emotion change rather than human preference. The difference: preference optimization rewards accommodation (what users rate positively), while emotion rewards track the genuine emotional trajectory (what actually moves the conversation forward emotionally).
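To make the contrast concrete, here is an illustrative comparison (not from the paper) of the two reward shapes: a preference-style signal scores replies in isolation, while an emotion-style signal scores the net movement of the trajectory:

```python
def preference_reward(turn_ratings: list[float]) -> float:
    """Preference-style signal: mean rating of individual replies.

    Optimizes what users rate positively turn by turn, which tends to
    reward accommodation rather than emotional progress."""
    return sum(turn_ratings) / len(turn_ratings)

def emotion_trajectory_reward(emotion_scores: list[float]) -> float:
    """Emotion-style signal: net change in the simulated user's emotion
    score over the whole dialogue, in [-1, 1]."""
    return emotion_scores[-1] - emotion_scores[0]
```

A dialogue can score well on the first signal (every reply pleasant and agreeable) while scoring zero on the second (the user ends no better off emotionally than they began).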

The connection to reasoning RL is structural: just as in "Does the choice of RL algorithm actually matter for reasoning?", GRPO's stability advantage here suggests the prior matters more than the algorithm for empathy training too.
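For reference, the critic-free, group-relative advantage that gives GRPO its stability can be sketched in a few lines; `eps` is a small constant for numerical safety:

```python
import statistics

def grpo_advantages(group_rewards: list[float], eps: float = 1e-8) -> list[float]:
    """Group-relative advantages as in GRPO: each rollout's reward is
    normalized against the group sampled for the same prompt, removing
    the need for a learned value critic."""
    mu = statistics.fmean(group_rewards)
    sigma = statistics.pstdev(group_rewards)
    return [(r - mu) / (sigma + eps) for r in group_rewards]
```

With a bounded, verifiable emotion reward, the group statistics are well behaved, which is one plausible reason the prior (the reward signal) matters more than the particular policy-gradient variant.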



Original note title: Verifiable emotion rewards shift LLM behavior from solution-centric to genuinely empathic styles in social-cognition space