Language Understanding and Pragmatics · Conversational AI Systems · Psychology and Social Cognition

Why don't conversational AI systems mirror their users' word choices?

Explores whether current dialogue models exhibit lexical entrainment—the human tendency to align vocabulary with conversation partners—and what's needed to bridge this gap in AI communication.

Note · 2026-02-22 · sourced from Conversation Topics Dialog
Where exactly does language competence break down in LLMs? · Why do AI conversations reliably break down after multiple turns? · How should researchers navigate LLM reasoning research?

Lexical entrainment (LE) is the phenomenon where speakers in conversation naturally and subconsciously align their lexical choices with those of their interlocutors — using the same terms when referring to the same objects, negotiating common descriptions for unfamiliar items. LE is not a stylistic nicety; it is a mechanism for establishing shared terminology, reducing ambiguity, and building rapport.

LE is associated with a broad range of positive social outcomes: more successful conversations, greater engagement, stronger rapport. It is central to the naturalness of interaction. Yet current response-generation models do not adequately address the phenomenon: they generate contextually appropriate responses but do not adapt their vocabulary toward their interlocutor's lexical choices.

The formalization is precise: LE occurs when a speaker refers to something using terms their partner previously used, even when equally valid alternatives exist. The MULTIWOZ-ENTR dataset provides detailed annotations for studying this. The proposed methodology integrates LE into conversational systems through two sub-modules: LE extraction (identifying when entrainment should occur) and LE generation (producing entrained responses).
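
A minimal sketch of the extraction intuition, assuming a simple bag-of-words view of vocabulary reuse; the function names, stopword list, and scoring rule here are illustrative assumptions, not the MULTIWOZ-ENTR annotation scheme:

```python
# Toy sketch of the "LE extraction" idea: collect terms the user has already
# committed to, then score how much a candidate response reuses them.
from typing import Dict, List, Set

STOPWORDS = {"the", "a", "an", "i", "you", "is", "are", "to", "for", "of", "and", "in"}

def _content_tokens(text: str) -> List[str]:
    toks = [t.strip(".,!?").lower() for t in text.split()]
    return [t for t in toks if t and t not in STOPWORDS]

def user_terms(user_turns: List[str]) -> Set[str]:
    """Vocabulary the user has already used in this conversation."""
    return {tok for turn in user_turns for tok in _content_tokens(turn)}

def entrainment_score(response: str, terms: Set[str]) -> float:
    """Fraction of the response's content tokens that reuse the user's terms.
    Between two otherwise equivalent candidates, the higher-scoring one is
    the more entrained choice."""
    toks = _content_tokens(response)
    return sum(t in terms for t in toks) / len(toks) if toks else 0.0

# e.g. after "I need a cheap guesthouse near the centre", the response
# "a guesthouse near the centre" scores higher than "a budget B&B downtown".
```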

A training-time solution has now been demonstrated. Since Can we teach LLMs to form linguistic conventions in context?, the convention-formation gap is addressable through targeted post-training: heuristically extracting coreference chains from TV scripts, constructing DPO preference pairs (re-mention shortening plus first-mention preservation), and adding a [remention] planning token to separate the treatment of initial versus later mentions. The result is general in-context convention formation: the model spontaneously shortens references as the interaction progresses.
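
A hedged sketch of what such preference pairs could look like; the field names, the template mechanism, and the exact placement of the [remention] token are assumptions for illustration, and the actual pipeline derives mentions from coreference chains in TV scripts rather than from hand-supplied strings:

```python
# Illustrative construction of DPO preference pairs for reference shortening.
from dataclasses import dataclass
from typing import List

@dataclass
class DPOPair:
    prompt: str     # dialogue context up to the current turn
    chosen: str     # preferred response
    rejected: str   # dispreferred response

def remention_pair(context: str, first_mention: str, short_form: str,
                   template: str) -> DPOPair:
    """Later mention: prefer the shortened reference.
    `template` has a "{ref}" slot, e.g. "Sure, {ref} is still available."."""
    return DPOPair(prompt=context,
                   chosen="[remention] " + template.format(ref=short_form),
                   rejected="[remention] " + template.format(ref=first_mention))

def first_mention_pair(context: str, first_mention: str, short_form: str,
                       template: str) -> DPOPair:
    """Initial mention: prefer the full, informative reference."""
    return DPOPair(prompt=context,
                   chosen=template.format(ref=first_mention),
                   rejected=template.format(ref=short_form))

def build_pairs(context: str, first_mention: str, short_form: str,
                template: str) -> List[DPOPair]:
    """One pair per mention type, so the model learns to preserve first
    mentions and shorten only re-mentions."""
    return [first_mention_pair(context, first_mention, short_form, template),
            remention_pair(context, first_mention, short_form, template)]
```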

Entrainment is not just cooperative — it can be weaponized. Deception research on Linguistic Style Matching (LSM) reveals that interlocutors' linguistic styles correlate more strongly during deceptive communication, especially when the liar is motivated. Since Do liars and listeners coordinate their language during deception?, deceivers may deliberately increase style matching to appear credible, and the unaware listener's own style shifts become a deception signal. For AI systems, the absence of entrainment means the LSM deception signal cannot emerge in human-AI conversations — the diagnostic pattern requires two adaptive communicators. This is both a limitation (the system cannot detect user deception through entrainment monitoring) and a safety property (the model cannot be manipulated through strategic LSM).
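
For context, LSM is typically computed from how closely two speakers' function-word usage rates match. A minimal sketch, assuming tiny placeholder word lists in place of the full LIWC categories used in the published work:

```python
# Linguistic Style Matching: per-category function-word rates for two speakers,
# combined as 1 - |a - b| / (a + b + eps) and averaged across categories.
from typing import Dict

FUNCTION_WORDS: Dict[str, set] = {
    "pronouns":     {"i", "you", "we", "he", "she", "they", "it"},
    "articles":     {"a", "an", "the"},
    "prepositions": {"in", "on", "at", "of", "for", "with", "to"},
    "conjunctions": {"and", "but", "or", "because"},
    "negations":    {"not", "no", "never"},
}

def category_rates(text: str) -> Dict[str, float]:
    """Proportion of tokens falling in each function-word category."""
    toks = [t.strip(".,!?").lower() for t in text.split()]
    toks = [t for t in toks if t]
    total = len(toks) or 1
    return {cat: sum(t in words for t in toks) / total
            for cat, words in FUNCTION_WORDS.items()}

def lsm(text_a: str, text_b: str, eps: float = 1e-4) -> float:
    """LSM score in [0, 1]; higher means more closely matched styles."""
    ra, rb = category_rates(text_a), category_rates(text_b)
    scores = [1 - abs(ra[c] - rb[c]) / (ra[c] + rb[c] + eps)
              for c in FUNCTION_WORDS]
    return sum(scores) / len(scores)
```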

Generation is not communication — and the two meet at the linguistic interface. The absence of entrainment is a symptom of a deeper asymmetry. AI generates language; humans communicate through it. These are different operations that happen to share the same surface. Generation produces well-formed text in response to a prompt; communication establishes and maintains shared understanding between parties. At the linguistic interface between user and AI, the user is communicating — making sense of output, updating their model of the other, adapting their vocabulary — while the AI is generating, emitting context-conditioned tokens. The match of surfaces conceals the mismatch of operations. This is why features like entrainment, repair, and common-ground building are systematically absent: they are communicative, not generative.

AI is monological where human language is dialogical. The entrainment gap, the common-ground presumption, the repair absence, and the decision-orientation gap are not independent failures — they are sub-patterns of a single organizing asymmetry. Human language is dialogical at every level: turns are designed with respect to prior turns, vocabulary converges across exchanges, misunderstandings trigger repair, stance emerges through position-taking vis-à-vis interlocutors. AI output is monological — each generation is a function of context treated as static input, not a turn designed with respect to the other's evolving state. The dialogical/monological split is the organizing claim; specific dialogue failures are its instances.

This connects to two established findings. Since Do language models actually build shared understanding in conversation?, lexical entrainment is one of the specific mechanisms by which humans build that common ground — adopting shared vocabulary is a form of active grounding. And since Why don't LLMs shorten messages like humans do?, the LE gap is part of a broader failure to adapt language during interaction. Convention formation and lexical entrainment are two manifestations of the same underlying capacity: adjusting your language based on the emerging context of this conversation, not just the statistical regularities of all conversations.


Source: Conversation Topics Dialog, Conversation Architecture Structure

Original note title: lexical entrainment is absent from current conversational AI despite being fundamental to successful human dialogue