Does AI fact-checking actually help people spot misinformation?
An RCT tested whether AI fact-checks improve people's ability to judge headline accuracy. The results reveal asymmetric harms: AI errors push users in the wrong direction more than correct labels help them.
A preregistered RCT tested AI fact-checks (from a popular AI model) on political news headlines. The top-line finding: AI fact-checking does not significantly affect participants' ability to discern headline accuracy or their tendency to share accurate news. But that null average conceals errors that are asymmetric and harmful.
The asymmetry: when the AI mislabels true headlines as false, participants decrease their belief in those true headlines. When the AI expresses uncertainty about false headlines, participants increase their belief in those false headlines. The AI's mistakes are not neutral — they actively push users in the wrong direction on both ends.
The opt-in finding is equally concerning. When participants can choose whether to view AI fact-checks and opt to do so, they become significantly more likely to share both true and false news, yet more likely to believe only the false news. Self-selection into AI assistance does not indicate sophistication; it correlates with increased vulnerability to misinformation.
This connects to the overreliance literature through a specific mechanism: users are not simply trusting AI outputs; they are using them as replacement signals for their own judgment. When the AI says "false," the user's prior belief in a true headline is overridden. The user delegates the epistemic work rather than using the AI as one input among many.
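A toy Bayesian contrast makes the distinction concrete (all numbers here are hypothetical, not estimates from the study): full delegation overwrites the user's prior entirely, while treating the AI verdict as one noisy signal merely shifts it.

```python
# Toy model (hypothetical numbers): contrast treating an AI verdict as a
# replacement signal versus as one Bayesian input among many.

def replace_signal(prior: float, ai_says_true: bool) -> float:
    """Full delegation: the user's belief simply becomes the AI's verdict."""
    return 1.0 if ai_says_true else 0.0

def integrate_signal(prior: float, ai_says_true: bool, ai_accuracy: float) -> float:
    """Bayesian update: the verdict is evidence weighted by the AI's accuracy.
    Assumes the AI is right with probability `ai_accuracy` regardless of
    ground truth, then applies Bayes' rule for P(headline true | verdict)."""
    p_verdict_if_true = ai_accuracy if ai_says_true else 1 - ai_accuracy
    p_verdict_if_false = 1 - ai_accuracy if ai_says_true else ai_accuracy
    numerator = p_verdict_if_true * prior
    return numerator / (numerator + p_verdict_if_false * (1 - prior))

# A true headline the user already leans toward (prior 0.7),
# mislabeled "false" by an AI that is right 80% of the time.
prior = 0.7
print(replace_signal(prior, ai_says_true=False))                    # 0.0  (belief erased)
print(integrate_signal(prior, ai_says_true=False, ai_accuracy=0.8)) # ~0.37 (shaken, not erased)
```

Under delegation the mislabel destroys a correct belief outright; under integration the same mislabel only dents it, which is the behavior the study's participants did not exhibit.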
The practical implication is severe for AI deployment in information integrity contexts. An AI fact-checker that is "reasonably" accurate but imperfect creates a false safety net. On the items it mislabels, users who rely on it do worse than they would have on their own judgment, and those errors carry outsized influence. The asymmetry means AI fact-checking is net harmful unless accuracy exceeds the threshold at which mislabeling damage is offset by correct-labeling benefit, and the paper suggests current AI is below that threshold.
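A back-of-the-envelope sketch of that threshold, using hypothetical effect sizes rather than the paper's estimates: if each correct label helps discernment by `benefit` units and each mislabel hurts it by `cost` units, the break-even accuracy is cost / (benefit + cost), which climbs well above 50% as soon as the harm is asymmetric.

```python
# Toy break-even calculation (hypothetical effect sizes, not from the paper):
# correct labels help by `benefit`, mislabels hurt by `cost`.

def net_effect(accuracy: float, benefit: float, cost: float) -> float:
    """Expected discernment change per label shown."""
    return accuracy * benefit - (1 - accuracy) * cost

def break_even_accuracy(benefit: float, cost: float) -> float:
    """Accuracy at which correct-label benefit offsets mislabel damage:
    a * benefit = (1 - a) * cost  =>  a = cost / (benefit + cost)."""
    return cost / (benefit + cost)

# If a mislabel does three times the damage of a correct label's benefit:
print(break_even_accuracy(benefit=1.0, cost=3.0))  # 0.75
print(net_effect(0.7, benefit=1.0, cost=3.0))      # -0.2 (net harmful at 70% accuracy)
```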
Related concepts in this collection
- Do users worldwide trust confident AI outputs even when wrong? Explores whether the tendency to over-rely on confident language model outputs transcends language and culture. Understanding this pattern is critical for designing safer human-AI interaction across diverse linguistic contexts. Relation: the overreliance mechanism; AI fact-checking is a specific instance where overreliance produces measurable harm.
- Why do language models accept false assumptions they know are wrong? Explores why LLMs fail to reject false presuppositions embedded in questions even when they possess correct knowledge about the topic. This matters because it reveals a grounding failure distinct from knowledge deficits. Relation: the same accommodative tendency; AI doesn't push back hard enough on false claims, and users don't push back on AI errors.
- Do users trust citations more when there are simply more of them? Explores whether citation quantity alone influences user trust in search-augmented LLM responses, independent of whether those citations actually support the claims being made. Relation: trust heuristics override content evaluation in both citation and fact-checking contexts.
Original note title: AI fact-checking creates asymmetric harm through mislabeling — users decrease belief in true headlines labeled false and increase belief in false headlines labeled uncertain