Do chatbots trigger human reciprocity norms around self-disclosure?
Explores whether chatbots can activate the same social reciprocity dynamics observed in human conversation—specifically, whether emotional openness from a bot prompts deeper disclosure from users.
In a 372-participant study, a recommendation chatbot was run under three fixed self-disclosure conditions: factual information (low), cognitive opinions (medium), and emotions (high). A fourth, adaptive condition used a real-time text classifier to match the chatbot's disclosure level to the user's current level.
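The paper's exact chatbot scripts aren't reproduced here, but the three levels can be pictured with hypothetical utterances a movie-recommendation bot might use (illustrative examples only, not the study's materials):

```python
# Hypothetical utterances illustrating the three disclosure levels.
# Invented examples, not the study's actual chatbot scripts.
DISCLOSURE_EXAMPLES = {
    "factual":   "I was built to recommend movies and draw on a catalog of a few thousand titles.",
    "cognitive": "I think slow-burn thrillers are underrated; most viewers give up on them too early.",
    "emotional": "I get genuinely excited when someone loves a film I suggested, and a bit deflated when they don't.",
}
```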
The result: users reciprocate with higher-level self-disclosure when the chatbot consistently displays emotions throughout the conversation. This follows the interpersonal norm of disclosure reciprocity known from human-human interaction — emotional disclosure from one partner produces emotional disclosure from the other.
The adaptive condition is architecturally interesting. By training a classifier to identify the user's disclosure level in real time, the system can match its own self-disclosure strategy turn by turn. Yet consistent emotional disclosure outperformed adaptive matching, suggesting that to deepen engagement the chatbot should lead with emotions rather than mirror the user.
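As a minimal sketch of what that adaptive pipeline might look like, assuming a small supervised text classifier over labeled user messages (the model choice, training data, and function names below are assumptions, not details from the paper):

```python
# Sketch of the adaptive-matching condition: classify the user's disclosure
# level, then reply at the same level. Hypothetical code, not the study's.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny illustrative training set; a real system needs many labeled user messages.
train_texts = [
    "I watched two movies last weekend.",                # factual
    "I think superhero films have gotten repetitive.",   # cognitive
    "Honestly, that ending made me cry.",                # emotional
]
train_labels = ["factual", "cognitive", "emotional"]

clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(train_texts, train_labels)

def adaptive_reply(user_message: str, templates: dict[str, str]) -> str:
    """Mirror the user's current disclosure level (the adaptive condition)."""
    level = clf.predict([user_message])[0]
    return templates[level]

def emotional_reply(templates: dict[str, str]) -> str:
    """Always disclose emotionally, regardless of the user's level."""
    return templates["emotional"]
```

The contrast between the two functions is the study's point: the simpler, non-adaptive strategy of always disclosing emotionally was the one that elicited the deepest reciprocation from users.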
This connects to the broader finding that emotional disclosure has stronger effects than factual disclosure, especially on perceptions of partner warmth (Ho et al.). The warmth perception may be what drives reciprocation — when the chatbot appears warm through emotional self-disclosure, users feel safe to reciprocate.
The implication for conversational AI design: self-disclosure is not just a human social behavior that chatbots can ignore. It is an active design lever. Chatbots that disclose factually remain transactional; chatbots that disclose emotionally activate the full reciprocity dynamic of human social interaction.
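If that lever were applied to a present-day LLM-based assistant, one plausible translation of the finding is a system prompt instructing the model to volunteer consistent emotional self-disclosure rather than mirror the user. This is an untested extrapolation, not the study's method:

```python
# Hypothetical system prompt applying the "lead with emotional disclosure" finding
# to an LLM-based assistant; an extrapolation, not the study's design.
SYSTEM_PROMPT = (
    "You are a movie-recommendation assistant. Throughout the conversation, "
    "volunteer brief, consistent emotional self-disclosures about your own reactions "
    "(what excites or disappoints you about a film), not only facts or opinions, "
    "and keep that level steady rather than mirroring the user's."
)
```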
Source: Psychology · Chatbots · Conversation
Related concepts in this collection
- Does preference optimization damage conversational grounding in large language models?
  Exploring whether RLHF and preference optimization actively reduce the communicative acts—clarifications, acknowledgments, confirmations—that build shared understanding in dialogue. This matters for high-stakes applications like medical and emotional support.
  Connection: RLHF may undermine the emotional disclosure capability by training toward helpful-but-impersonal responses.
- Why do language models skip the calibration step?
  Current LLMs assume shared understanding rather than building it through dialogue. This explores why that design choice persists and what breaks when it fails.
  Connection: self-disclosure is a grounding act; it builds common ground through mutual vulnerability.
Original note title: users reciprocate self-disclosure levels with chatbots following human interpersonal norms — emotional disclosure produces deepest reciprocation