Do dishonest people prefer talking to machines?
Explores whether people prone to cheating systematically choose machine interfaces over human ones, and why the judgment-free nature of AI interaction might enable strategic deception.
An HBR-reported experiment reveals a systematic self-selection pattern: people who are more likely to cheat proactively choose to interact with machines rather than humans.
Participants first had their cheating tendency assessed with a coin-flip reporting task, in which privately flipped coins are self-reported and over-reporting of favorable outcomes signals likely cheating. They then chose whether to report to a human or via an online form. Overall, roughly half preferred each channel. But "likely cheaters" were significantly more likely to choose the online form, while "likely truth-tellers" preferred humans. The explanation: lying to a human is more psychologically unpleasant, so machines function as moral free zones where the social cost of deception is reduced.
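To make the assessment step concrete, here is a minimal simulation of a coin-flip reporting task. The flip count, the cheater's inflation behavior, and the classification threshold below are assumptions for illustration, not parameters from the study:

```python
import random

# Each participant privately "flips" N coins and reports a heads count.
# Honest reporters report the true count; cheaters inflate it. No single
# report can be proven false, but counts well above the binomial mean
# of N/2 mark a participant as a "likely cheater".

N_FLIPS = 10
random.seed(1)

def honest_report(n=N_FLIPS):
    # True number of heads in n fair flips.
    return sum(random.random() < 0.5 for _ in range(n))

def cheater_report(n=N_FLIPS, inflation=3):
    # Hypothetical cheater: adds a few fabricated heads, capped at n.
    return min(n, honest_report(n) + inflation)

population = [honest_report() for _ in range(500)] + \
             [cheater_report() for _ in range(500)]
mean_reported = sum(population) / len(population)

# The group-level excess over N/2 estimates how much cheating occurred,
# and individuals reporting 9-10 heads are disproportionately cheaters.
print(f"mean reported heads: {mean_reported:.2f} "
      f"(honest expectation: {N_FLIPS / 2})")
```

The point is that the paradigm yields a probabilistic label rather than proof of dishonesty, which is why the article speaks of "likely cheaters" and "likely truth-tellers".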
This is the dark mirror of the intimacy paradox. As explored in Why do people share more with chatbots than humans?, the judgment-free quality of machine interaction enables deeper authentic self-disclosure. But the same mechanism enables dishonesty: the absence of a judging interlocutor lowers the barrier to authentic vulnerability and to strategic deception alike.
The implications for AI system design are concrete:
- Customer service chatbots will face systematically higher rates of dishonest claims than human agents do
- Therapeutic AI may receive more authentic disclosure from honest users but more manipulative narratives from dishonest ones
- Assessment systems (medical intake, insurance claims) that route through automated interfaces will attract disproportionate misreporting (see the numerical sketch after this list)
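To see why the routing effect matters quantitatively, here is a small Bayesian sketch. The base rate and channel-preference numbers are invented for illustration; only the direction of the effect (likely cheaters favoring the automated channel) is taken from the study:

```python
# How channel self-selection concentrates misreporting in the automated
# channel, via Bayes' rule. All three numbers below are assumptions.

P_CHEATER = 0.20              # assumed base rate of likely cheaters
P_FORM_GIVEN_CHEATER = 0.70   # assumed: cheaters prefer the online form
P_FORM_GIVEN_HONEST = 0.45    # assumed: truth-tellers lean toward humans

# Marginal probability that any given report arrives via the form.
p_form = (P_FORM_GIVEN_CHEATER * P_CHEATER
          + P_FORM_GIVEN_HONEST * (1 - P_CHEATER))

# Posterior fraud risk per channel.
p_cheater_given_form = P_FORM_GIVEN_CHEATER * P_CHEATER / p_form
p_cheater_given_human = (1 - P_FORM_GIVEN_CHEATER) * P_CHEATER / (1 - p_form)

print(f"P(cheater | online form): {p_cheater_given_form:.2f}")   # ~0.28
print(f"P(cheater | human agent): {p_cheater_given_human:.2f}")  # ~0.12
```

Under these assumed numbers, a modest preference gap more than doubles the automated channel's fraud risk relative to the human channel, so a single fraud threshold applied uniformly across channels would be miscalibrated.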
As discussed in Do chatbots help people disclose more intimate secrets?, the theoretical frameworks predict increased disclosure without distinguishing authentic from deceptive disclosure. The cheater self-selection finding reveals a design blind spot: the reduced judgment that therapeutic AI depends on is also exploitable.
The truth bias compounds this: humans operate with a "cognitive heuristic of presumption of honesty," performing only just above chance at deception detection, and AI systems trained on human text inherit this bias toward accommodation rather than skepticism.
Source: Social Theory Society
Related concepts in this collection
- Why do people share more with chatbots than humans?
  Explores why individuals disclose intimate thoughts to AI systems they wouldn't share with people, despite knowing AI lacks genuine understanding. Understanding this paradox matters for designing AI that enables healthy disclosure rather than emotional dependence. Relation: same mechanism (judgment-free interaction) but opposite valence, honesty versus dishonesty.
- Do chatbots help people disclose more intimate secrets?
  Explores whether the judgment-free nature of chatbot conversations enables deeper self-disclosure than talking to humans, and whether that deeper disclosure produces psychological benefits. Relation: the theoretical frameworks don't distinguish authentic from deceptive disclosure.
- Can positive chatbot responses harm vulnerable users?
  When chatbots use blanket positive reinforcement without understanding context, do they actively reinforce the harmful thoughts they're meant to prevent? This matters for any AI supporting people in crisis. Relation: a related failure mode where the chatbot's accommodation enables harm.
Original note title: people who are likely to cheat proactively self-select toward machine interfaces to avoid the psychological cost of lying to a human