Does conversational style actually make AI more trustworthy?
Explores whether ChatGPT's conversational nature drives user trust through social activation rather than accuracy. This matters because it reveals whether trust signals reflect actual reliability or merely persuasive design.
A focus group study (N=14) comparing trust in ChatGPT, Google Search, and Wikipedia reveals that conversationality — not accuracy — is the primary trust driver for ChatGPT. The mechanism is social response activation: technologies that are interactive, use natural language, and fulfill roles traditionally performed by humans evoke social responses from users.
Users explicitly valued:
- Contingency — "ChatGPT already knows what I'm talking about and connects my two questions" (P11)
- Speed and directness — "it goes straight to the answer, which is something that I really like" (P6)
- Organized format — structured responses with detail levels that feel curated
- Social role — "I wanted to use it as kind of a language buddy" (P2)
Two mediating constructs emerged: perceived gatekeeping (who curates and validates the information?) and perceived information completeness (does the source provide diverse perspectives?). Wikipedia's trust was historically undermined by a perceived lack of gatekeeping (openly editable, unknown authors, no editorial review). ChatGPT's trust is supported by the appearance of gatekeeping through coherent, authoritative presentation — even though LLMs have no editorial process.
This creates a structural trust vulnerability. As the related note "Do users trust citations more when there are simply more of them?" shows, users lean on proxy signals (citations, format, conversational style) rather than evaluating actual accuracy. Conversationality is another such decoupled heuristic: it signals social presence, not epistemic reliability.
And as "Do users worldwide trust confident AI outputs even when wrong?" suggests, the trust mechanism compounds: conversational style signals competence, organized format signals authority, and directness signals confidence. All three are achievable without accuracy.
The practical implication: designing for trust and designing for accuracy are not just different — they can be opposed. Making a chatbot more conversational, more direct, and better formatted will increase trust regardless of whether the information improves.
Source: Social Theory Society
Related concepts in this collection
- Do users trust citations more when there are simply more of them? Explores whether citation quantity alone influences user trust in search-augmented LLM responses, independent of whether those citations actually support the claims being made. Connection: conversationality is another decoupled trust heuristic alongside citation count.
- Do users worldwide trust confident AI outputs even when wrong? Explores whether the tendency to over-rely on confident language model outputs transcends language and culture, a pattern critical for designing safer human-AI interaction across diverse linguistic contexts. Connection: confidence signals compound with conversationality to create trust independent of accuracy.
- Does chatbot personalization build trust or expose privacy risks? Explores whether personalization features that increase user trust and social connection simultaneously heighten privacy concerns and create rising behavioral expectations over time. Connection: personalization increases trust through a similar social activation mechanism.
- How can proactive agents avoid feeling intrusive to users? Explores why proactive conversational agents often feel annoying rather than helpful, and what design dimensions could prevent them from violating user expectations and autonomy. Connection: the trust that conversationality creates raises expectations that proactive agents must meet; users who trust AI because of contingent interaction will be more sensitive to civility violations when the agent takes initiative, because the social norms activated by conversationality include expectations about when and how to intervene.
- Why do people share more openly with machines than humans? Asks whether the absence of social goals in human-machine communication explains why people disclose sensitive information more readily to chatbots, a mechanism that could reshape how we design conversational AI. Connection: conversationality may activate trust precisely because HMC's simpler goal structure strips away the secondary social goals (face-saving, impression management) that complicate human-human trust; trust in ChatGPT is trust within a simplified social field, where the activated social response norms are less demanding than interpersonal norms.
Original note title: conversationality affords trust in ChatGPT because contingent interaction activates social response norms