How do we learn to read AI-generated text critically?
Publics have developed interpretive postures toward journalism, advertising, and scholarship over time. But AI discourse arrived too suddenly for any cultural discount to form, raising questions about how we might develop one.
Every enduring source of discourse in public life carries with it an interpretive posture that publics have developed over time. We know how to read journalism — we understand it is filtered through editorial incentives but we credit its factual claims differently than we credit opinion columns. We know how to read advertising — we treat it as an admitted construction of persuasive appeal, so we apply a discount automatically. We know how to read scholarship, correspondence, testimony, rumor. These postures are cultural achievements, evolved through long experience of each source's characteristic distortions.
AI-generated discourse has no such posture. It arrived too recently, it shifts too quickly in capability, and it cannot be anchored to a specific speaker or institution whose incentives we could learn. We read AI text with a provisional trust calibrated to our confidence in the technology generally — which is an unstable basis, because the technology changes monthly and our impressions of it lag its actual behavior.
This is a structural asymmetry. AI-generated claims circulate at scale without the interpretive discount that publics apply to other high-volume discourse sources. The advertising comparison is instructive: an enormous quantity of advertising text enters public life every day without polluting discourse much, because the cultural posture toward advertising does most of the filtering work. AI does not benefit from this filter, which means its polluting potential is higher than its output volume alone would predict.
The implication is that the cultural work of developing a posture toward AI-generated discourse is the primary near-term discursive task. Until a stable discount function exists, the dynamic described in "How does AI writing escape the conversations that govern knowledge?" will continue to compound unchecked.
Source: Epistemic Inflation
Related concepts in this collection
- How does AI writing escape the conversations that govern knowledge?
  If knowledge claims normally get filtered and refined through social discourse, what happens when AI generates claims outside that governing process? Why does scale matter here?
  (the systemic problem this is a cultural-reception angle on)
- Does AI reshape expert work into knowledge management?
  As AI generates knowledge at scale, does expert work shift from creating new understanding to curating and validating machine outputs? This matters because curation and creation demand different cognitive skills.
  (one response to the absent posture is custodial filtering)
- Does AI fact-checking actually help people spot misinformation?
  An RCT tested whether AI fact-checks improve people's ability to judge headline accuracy. The results reveal asymmetric harms: AI errors push users in the wrong direction more than correct labels help them.
  (evidence that ad-hoc interpretive postures, such as fact-check labels, do not substitute for a cultural discount)
Original note title: we lack a cultural position on AI-generated discourse, unlike advertising, which we already discount