Do liars and listeners coordinate their language during deception?
Explores whether conversational partners unconsciously synchronize their linguistic styles more during deceptive exchanges than truthful ones, and what this coordination reveals about how deception unfolds in real time.
Linguistic Style Matching (LSM) theory describes how conversational partners adapt their linguistic styles to match each other. The counterintuitive finding from computer-mediated communication (CMC) deception research: interlocutors' linguistic styles correlate MORE during deceptive communication than during truthful communication, especially when the speaker is motivated to lie.
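To make "style matching" concrete: the standard LSM metric compares two speakers' usage rates of function-word categories, scoring each category as 1 - |pA - pB| / (pA + pB + 0.0001) and averaging. The sketch below is a minimal illustration, not any study's published pipeline; the short category word lists are toy stand-ins for the LIWC lexica typically used, and `lsm_score` is a hypothetical helper name.

```python
import re

# Toy stand-ins for LIWC-style categories; real LSM studies use the
# full LIWC lexica, so these short lists are illustrative only.
CATEGORIES = {
    "first_person":     {"i", "me", "my", "mine", "we", "us", "our"},
    "second_person":    {"you", "your", "yours"},
    "third_person":     {"he", "she", "they", "him", "her", "them"},
    "negative_emotion": {"hate", "worried", "afraid", "angry", "sad"},
}

def category_rates(text):
    """Fraction of tokens that fall in each category."""
    tokens = re.findall(r"[a-z']+", text.lower())
    n = max(len(tokens), 1)
    return {cat: sum(tok in words for tok in tokens) / n
            for cat, words in CATEGORIES.items()}

def lsm_score(text_a, text_b):
    """Mean per-category similarity: 1 - |pA - pB| / (pA + pB + 0.0001)."""
    ra, rb = category_rates(text_a), category_rates(text_b)
    per_cat = [1 - abs(ra[c] - rb[c]) / (ra[c] + rb[c] + 0.0001)
               for c in CATEGORIES]
    return sum(per_cat) / len(per_cat)

# Higher scores mean more tightly matched styles; the deception finding
# is that this score runs higher in deceptive exchanges than truthful ones.
print(round(lsm_score("I never said that, you know me",
                      "you said it, I heard you say it"), 3))
```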
The mechanism involves two theories working in parallel:
LSM in deception: Interlocutors' rates of first-, second-, and third-person pronouns and negative-emotion words were found to correlate, and their linguistic profiles coincided to a greater extent during false communication than during true communication. Speakers may deliberately increase style matching when trying to deceive in order to appear more credible: mimicry as a strategic deception tool.
Interpersonal Deception Theory (IDT): Deceivers make strategic modifications in response to receiver suspicion, but also emit non-strategic "leakage cues." Meanwhile, suspicious interlocutors ask more questions, forcing the speaker to adapt their style further. The result is a feedback loop that paradoxically increases coordination during deception.
This inverts standard deception detection. Instead of analyzing only the liar's language, you can detect deception through the listener's behavior — the unaware interlocutor's style shifts reveal that something abnormal is happening in the interaction, even though they don't consciously detect it.
As explored in "Why don't conversational AI systems mirror their users' word choices?", current AI systems neither produce nor detect these coordination patterns. This is both a limitation and a design opportunity: if AI systems could monitor real-time LSM patterns, they could detect user deception. Conversely, the absence of entrainment in AI means the LSM deception signal cannot emerge in human-AI conversations, because the diagnostic pattern requires two adaptive communicators.
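As a design sketch of that opportunity: a system could track LSM between adjacent turns and flag sustained spikes in coordination. The function below builds on the hypothetical `lsm_score` helper above; the window size and threshold are illustrative assumptions, not empirically calibrated values.

```python
from collections import deque

def monitor_coordination(turns, window=6, threshold=0.85):
    """Yield (turn_index, windowed_mean, flagged) over a conversation.

    `turns` is a list of utterance strings in speaking order. `window`
    and `threshold` are illustrative placeholders; a real detector
    would calibrate them on labeled truthful/deceptive dialogues.
    """
    recent = deque(maxlen=window)
    for i in range(1, len(turns)):
        # Coordination between each reply and the turn it responds to.
        recent.append(lsm_score(turns[i - 1], turns[i]))
        mean = sum(recent) / len(recent)
        # Flag only once the window is full, to avoid noisy early turns.
        yield i, mean, len(recent) == window and mean > threshold
```

A flag here would mark a stretch of unusually tight style matching, which on the account above warrants scrutiny rather than a verdict, since coordination also rises with rapport and engagement.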
As "Can we measure empathy and rapport through word embedding distances?" argues, coordination is not just a deception signal. It is a multi-purpose signal that indicates engagement, rapport, AND potential manipulation. The valence depends on context.
Source: Social Theory Society
Related concepts in this collection
- Why don't conversational AI systems mirror their users' word choices?
  Explores whether current dialogue models exhibit lexical entrainment—the human tendency to align vocabulary with conversation partners—and what's needed to bridge this gap in AI communication.
  Connection: AI lacks the entrainment capability needed to both produce and detect LSM-based deception signals.
- Can we measure empathy and rapport through word embedding distances?
  Explores whether linguistic coordination—how closely conversational partners match vocabulary and framing—can serve as a measurable proxy for therapeutic empathy and relationship quality without direct emotion detection.
  Connection: coordination as a multi-valence signal (rapport AND potential manipulation).
- Do dishonest people prefer talking to machines?
  Explores whether people prone to cheating systematically choose machine interfaces over human ones, and why the judgment-free nature of AI interaction might enable strategic deception.
  Connection: cheaters avoid humans, but human-human deception has detectable coordination signatures.
Original note title
linguistic style matching increases during deceptive communication — revealing deception through the listener's adaptation, not just the liar's behavior