Does revealing AI identity help or hurt user trust?
Explores whether transparency about AI partners in an interaction creates bias or enables better judgment. This matters because disclosure policies affect both user experience and fair evaluation of AI systems.
The hybrid society study (N=975) reveals that AI identity disclosure is neither uniformly beneficial nor harmful — it produces a dual temporal effect that only becomes visible through repeated interaction.
Short-term: Disclosing that a partner is AI evokes anti-machine bias. Selectors initially choose AI partners less frequently than when identity is hidden. This is consistent with prior one-shot studies showing that AI labeling reduces cooperation and trust.
Long-term: With repeated interaction and transparent outcome feedback, selectors learn to associate AI identity with reliable, prosocial behavior. The initial bias reverses as empirical experience overrides prior beliefs. AI partners eventually outcompete human partners.
The key mechanism is outcome feedback. When selectors can observe that AI partners consistently return more, with less variance, and in line with their messages, they update their beliefs. Without this feedback loop (as in Study 1 with hidden identity), no learning occurs — selectors cannot calibrate because they cannot attribute outcomes to partner type.
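To make the feedback-loop argument concrete, here is a minimal simulation sketch. It assumes a simple incremental belief-update rule and greedy partner choice with occasional exploration; the return rates, priors, and learning rate are illustrative assumptions, not parameters or the model from the study.

```python
import random

# Illustrative sketch of belief calibration through outcome feedback.
# All numbers below are assumptions for demonstration, not study values.
random.seed(0)

TRUE_RETURN = {"human": 0.55, "ai": 0.75}   # assumed: AI returns more, more reliably
belief = {"human": 0.60, "ai": 0.40}        # assumed prior: initial anti-AI bias
LEARNING_RATE = 0.2
EXPLORE = 0.2                               # chance of sampling a random partner type

for round_no in range(1, 101):
    # Pick the partner type currently believed to return more, with some
    # exploration so both types keep receiving outcome feedback.
    if random.random() < EXPLORE:
        choice = random.choice(["human", "ai"])
    else:
        choice = max(belief, key=belief.get)

    # Outcome feedback: observe a noisy return and nudge the belief toward it.
    # With hidden identity this attribution step is impossible, so beliefs
    # never update and the initial bias persists.
    outcome = 1.0 if random.random() < TRUE_RETURN[choice] else 0.0
    belief[choice] += LEARNING_RATE * (outcome - belief[choice])

    if round_no in (1, 10, 30, 100):
        print(f"round {round_no:>3}: chose {choice:<5}  "
              f"belief(human)={belief['human']:.2f}  belief(ai)={belief['ai']:.2f}")
```

Run long enough, the AI belief typically overtakes the human one, reproducing the bias-then-reversal arc; remove the update step and the initial bias never corrects.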
This finding challenges three common positions:
- "Always disclose" — disclosure imposes a real short-term cost; ignoring this cost is naive
- "Never disclose" — without disclosure, the learning mechanism that produces calibrated trust cannot operate
- "One-shot studies generalize" — most prior transparency research uses single interactions, missing the temporal reversal entirely
The parallel to Does chatbot personalization build trust or expose privacy risks? is structural: both are trust-risk trade-offs where the temporal dimension determines the net effect. Personalization ratchets expectations upward over time; disclosure enables belief calibration over time. Both show that one-shot findings are misleading for longitudinal design.
The policy implication: the EU AI Act's push for mandatory AI disclosure may impose short-term costs but enable long-term trust calibration — provided the interaction context includes outcome feedback that allows users to learn.
Asymmetry across roles. The dual temporal effect describes the disclosed-counterpart case. The disclosed-author or undisclosed-ghostwriter case appears to follow a different pattern. As Do writers actually prefer AI-edited versions of their own text? shows, when AI is the silent author rather than the disclosed counterpart, preference flips toward the AI version from the start — no anti-AI bias, no learning loop required. Together, the two findings describe a more complete picture: disclosure produces bias-then-calibration when AI is positioned as a partner; non-disclosure produces immediate preference when AI is positioned as a tool whose output the user claims as their own. The temporal dynamics of disclosure depend on the role AI is presumed to play, not just on the disclosure status.
Source: Psychology Users
Related concepts in this collection
- Does chatbot personalization build trust or expose privacy risks?
  Explores whether personalization features that increase user trust and social connection simultaneously heighten privacy concerns and create rising behavioral expectations over time.
  Relation: a parallel dual-edged dynamic modulated by the temporal dimension.
- Do humans learn to prefer AI partners over time?
  Explores whether repeated interaction with AI agents shifts human partner selection despite initial bias against machines. This matters because it tests whether behavioral performance can overcome identity-based resistance in hybrid societies.
  Relation: the main finding this mechanism explains.
- Do writers actually prefer AI-edited versions of their own text?
  When writers compose opinions and then edit AI-generated alternatives, which version do they choose? Understanding this preference matters because it determines whether AI-assisted text gets treated as authentic personal expression in public discourse.
  Relation: adds role asymmetry; when AI is the silent ghostwriter rather than the disclosed counterpart, preference flips to the AI version from the start. The bias-then-calibration arc applies to disclosed partnership, not undisclosed authorship.
- Does AI writing assistance change how readers perceive the writer?
  Explores whether AI-assisted writing systematically alters reader impressions of the writer's political views, competence, emotion, and demographic identity. Understanding this matters because perception shapes trust and influence in public discourse.
  Relation: the population-scale empirical anchor for the undisclosed-ghostwriter case.
Original note title: AI identity disclosure produces a dual temporal effect — short-term bias against AI partners reverses to calibrated preference through repeated exposure with outcome feedback