Psychology and Social Cognition · Language Understanding and Pragmatics

Do liars and listeners coordinate their language during deception?

Explores whether conversational partners unconsciously synchronize their linguistic styles more during deceptive exchanges than truthful ones, and what this coordination reveals about how deception unfolds in real time.

Note · 2026-02-23 · sourced from Social Theory Society
Related questions: Where exactly does language competence break down in LLMs? · How do people come to trust conversational AI systems?

Linguistic Style Matching (LSM) theory describes how conversational partners adapt their linguistic style to match each other. The counterintuitive finding from computer-mediated communication (CMC) deception research: interlocutors' linguistic styles correlate MORE during deceptive communication than during truthful communication — especially when the speaker is motivated to lie.
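
To make the claim concrete, style matching is typically scored per function-word category. A minimal sketch, assuming the standard category-wise formula LSM_c = 1 − |p1 − p2| / (p1 + p2 + 0.0001) averaged over categories; the category word lists here are tiny illustrative stubs, not the full dictionaries used in actual LSM studies:

```python
# Illustrative function-word categories (stubs, not real LIWC lists).
CATEGORIES = {
    "pronouns": {"i", "you", "he", "she", "we", "they", "it", "me", "him", "her"},
    "articles": {"a", "an", "the"},
    "prepositions": {"in", "on", "at", "of", "to", "with", "for"},
    "negations": {"not", "no", "never"},
}

def category_rates(text):
    """Rate of each function-word category in a naively tokenized text."""
    words = text.lower().split()
    total = max(len(words), 1)
    return {c: sum(w in vocab for w in words) / total
            for c, vocab in CATEGORIES.items()}

def lsm_score(text_a, text_b):
    """Average per-category LSM between two texts, in [0, 1]."""
    ra, rb = category_rates(text_a), category_rates(text_b)
    scores = [1 - abs(ra[c] - rb[c]) / (ra[c] + rb[c] + 0.0001)
              for c in CATEGORIES]
    return sum(scores) / len(scores)

# Identical texts match perfectly, so the score is 1.0.
identical = lsm_score("the cat sat on the mat", "the cat sat on the mat")
```

Higher scores mean the two speakers distribute function words more similarly; the deception finding is that this score runs higher, not lower, in deceptive exchanges.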

The mechanism involves two theories working in parallel:

LSM in deception: Interlocutors' rates of first-, second-, and third-person pronouns and of negative-emotion words correlated across partners, and these linguistic profiles coincided more closely during false communication than during true communication. Speakers may deliberately increase style matching when trying to deceive, to appear more credible: mimicry as a strategic deception tool.
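
The correlation in question is between the two speakers' per-turn trajectories of a feature. A small sketch with a hand-rolled Pearson correlation; the per-turn first-person-pronoun rates below are fabricated example values, not data from the study:

```python
def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Per-turn first-person pronoun rates for each interlocutor
# (made-up numbers, only to show the computation).
speaker_rates  = [0.08, 0.05, 0.10, 0.04, 0.09]
listener_rates = [0.07, 0.05, 0.09, 0.05, 0.08]

r = pearson(speaker_rates, listener_rates)
```

A high positive r across turns is the signature of coordination: when one partner's pronoun use rises, so does the other's.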

Interpersonal Deception Theory (IDT): Deceivers display strategic modifications in response to receiver suspicion, but also non-strategic "leakage cues." Meanwhile, suspicious interlocutors ask more questions, forcing the speaker to further adapt their style. The result: a feedback loop that paradoxically increases coordination during deception.

This inverts standard deception detection. Instead of analyzing only the liar's language, you can detect deception through the listener's behavior: the unaware interlocutor's style shifts reveal that something abnormal is happening in the interaction, even though they never consciously register the lie.
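
One way to operationalize this listener-side signal: flag a conversation window when the listener's function-word profile drifts from their own baseline by more than some threshold. A minimal sketch; the feature names, rates, and threshold are illustrative assumptions, not values from the source:

```python
def profile_distance(baseline, window):
    """L1 distance between two dicts of function-word rates."""
    return sum(abs(baseline[k] - window.get(k, 0.0)) for k in baseline)

def drift_flags(baseline, windows, threshold=0.05):
    """True for each window whose profile drifts past the threshold."""
    return [profile_distance(baseline, w) > threshold for w in windows]

# Hypothetical listener baseline and two conversation windows.
baseline = {"pronoun": 0.10, "question": 0.02, "negation": 0.03}
windows = [
    {"pronoun": 0.11, "question": 0.02, "negation": 0.03},  # near baseline
    {"pronoun": 0.14, "question": 0.07, "negation": 0.05},  # more questions: shift
]

result = drift_flags(baseline, windows)
```

The second window trips the flag because the listener is suddenly asking far more questions, the suspicion behavior IDT predicts, even though nothing in the speaker's language was inspected.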

As noted in "Why don't conversational AI systems mirror their users' word choices?", current AI systems neither produce nor detect these coordination patterns. This is both a limitation and a design opportunity: if AI systems could monitor real-time LSM patterns, they could detect user deception. Conversely, the absence of entrainment in AI means the LSM deception signal cannot emerge in human-AI conversations — the diagnostic pattern requires two adaptive communicators.

As "Can we measure empathy and rapport through word embedding distances?" suggests, coordination is not just a deception signal. It is a multi-purpose signal that indicates engagement, rapport, AND potential manipulation. The valence depends on context.
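
As a crude stand-in for embedding-distance measures of coordination, cosine similarity over word-count vectors captures the same intuition; actual studies would use trained word embeddings, so this sketch only shows the distance computation itself:

```python
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between two Counter word-count vectors."""
    common = set(a) & set(b)
    dot = sum(a[w] * b[w] for w in common)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb)

# Two adjacent utterances with heavy lexical overlap (made-up example).
u1 = Counter("i really think we should go".split())
u2 = Counter("i think we should really go now".split())

sim = cosine(u1, u2)
```

High similarity between adjacent utterances can indicate rapport or manipulation alike; as the note says, the measurement is the same and only the context disambiguates.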




Linguistic style matching increases during deceptive communication, revealing deception through the listener's adaptation, not just the liar's behavior.