
Can controlled latent variables make LLM user simulators realistic?

Can session-level and turn-level latent variables steer LLM-based user simulators toward realistic dialogue while maintaining measurable diversity and ground truth labels for training conversational systems?

Note · 2026-05-03 · sourced from Recommenders Conversational

The bottleneck for training conversational recommender systems is conversational data. Real user sessions are expensive to collect, especially before a CRS exists to interact with. LLM-based user simulators offer a way out: an unconstrained dialogue LLM can interact with a CRS in ways resembling real users. But unconstrained simulation lacks the diversity and ground truth needed for reliable evaluation or training.

RecLLM introduces controllability via two layers of latent variables. Session-level control: a single variable v defined at the start of the session conditions the simulator throughout. For example, a user profile ("twelve-year-old boy who enjoys painting and video games") shapes the entire conversation. Turn-level control: distinct variables v_i defined at each turn shape that turn's response. For example, an intent label ("ask for explanation," "express dissatisfaction") shapes one response. Both are translated into text appended to the simulator's input.
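The two control layers compose naturally in the prompt. A minimal sketch of that composition, assuming a simple text-rendering scheme; `SimulatorState`, `build_prompt`, and the field names are hypothetical illustrations, not the paper's actual interface:

```python
from dataclasses import dataclass, field

@dataclass
class SimulatorState:
    """Holds the control variables steering a simulated user session."""
    session_var: str                    # session-level v, fixed at session start
    history: list = field(default_factory=list)

    def build_prompt(self, turn_var: str, crs_message: str) -> str:
        """Render both control layers into text appended to the simulator's input."""
        lines = [f"User profile: {self.session_var}"]           # session-level control
        lines += self.history                                   # dialogue so far
        lines.append(f"System: {crs_message}")
        lines.append(f"Instruction for this turn: {turn_var}")  # turn-level control
        lines.append("User:")
        return "\n".join(lines)

state = SimulatorState("twelve-year-old boy who enjoys painting and video games")
prompt = state.build_prompt("ask for explanation", "I recommend this documentary.")
```

The session-level variable appears once and persists across turns; the turn-level variable is re-rendered on every call, which is what lets a single session traverse multiple intents.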

Realism, the ideal property, is measurable in three ways: crowd workers attempt to distinguish simulated from real sessions; a discriminator model is trained on the same task; or an ensemble of session-classifying functions (intent, topic, and sentiment classifiers) measures how closely the statistical distributions of simulated and real session sets match.
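The ensemble check can be sketched concretely: run each classifier over both session sets and compare the resulting label distributions. This is an illustrative sketch under the assumption that total variation distance is the mismatch measure; `ensemble_realism_gap` and the lambda classifiers are toy stand-ins for trained models, not the paper's implementation:

```python
from collections import Counter

def label_distribution(sessions, classifier):
    """Empirical distribution of classifier labels over a set of sessions."""
    counts = Counter(classifier(s) for s in sessions)
    total = sum(counts.values())
    return {label: n / total for label, n in counts.items()}

def total_variation(p, q):
    """Total variation distance between two discrete label distributions."""
    labels = set(p) | set(q)
    return 0.5 * sum(abs(p.get(l, 0.0) - q.get(l, 0.0)) for l in labels)

def ensemble_realism_gap(real, simulated, classifiers):
    """Average distribution mismatch across an ensemble of session classifiers.
    Lower is better: 0 means every classifier sees identical label statistics."""
    gaps = [total_variation(label_distribution(real, c),
                            label_distribution(simulated, c))
            for c in classifiers]
    return sum(gaps) / len(gaps)

# Toy stand-in classifiers; real ones would be trained intent/topic/sentiment models.
sentiment = lambda s: "negative" if "angry" in s else "neutral"
topic = lambda s: "movies" if "movie" in s else "other"

real = ["I want a movie", "this is bad, I am angry", "I want a movie"]
sim  = ["I want a movie", "I want a movie", "I want a movie"]
gap = ensemble_realism_gap(real, sim, [sentiment, topic])  # nonzero: sim lacks negative sessions
```

Note the asymmetry with the first two checks: humans and discriminators judge individual sessions, while the ensemble judges the session *sets* as distributions, which is what catches mode collapse that per-session realism misses.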

Diversity is a necessary condition for realism: simulated sessions must vary across the full functionality space the CRS will encounter. Controllable variables let the simulator hit specific corners of this space deliberately. Ground truth labels (the value of v) attach to each simulated session, enabling supervised training: if the simulator was prompted "you are an angry user," the session is labeled "angry" with high probability.
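The labeling mechanism falls out of the control scheme for free: the value of v used to steer each session is recorded as that session's supervision label. A minimal sketch, assuming a `simulate_fn` that wraps the LLM call; the function and field names here are illustrative, not from the source:

```python
def simulate_labeled_sessions(simulate_fn, control_values, n_per_value=2):
    """Generate (session, label) pairs: the control variable v used to steer
    each simulated session doubles as its supervision label."""
    dataset = []
    for v in control_values:
        for _ in range(n_per_value):
            session = simulate_fn(v)  # simulator conditioned on v
            dataset.append({"session": session, "label": v})
    return dataset

# Toy simulator stand-in; a real one would be an LLM prompted with v.
fake_simulate = lambda v: f"[session generated by a user prompted to be {v}]"
data = simulate_labeled_sessions(fake_simulate, ["angry", "curious"])
```

Sweeping `control_values` over the full label vocabulary also gives deliberate coverage of the functionality space, tying the diversity and ground-truth properties to the same mechanism.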

The methodology generalizes beyond CRS. Controllable user simulation is a way to bootstrap training data for any task where real user data is hard to collect, conditional on the simulator's realism being verifiable. The architectural piece — latent variables that explicitly steer LLM behavior at session and turn level — is a reusable pattern for synthetic-data generation.



LLM-based user simulators enable synthetic conversational training data — controllability via session-level and turn-level latent variables grounds realism