Can AI predict social norms better than humans?
Explores whether language models can achieve superhuman accuracy at predicting what communities find socially appropriate, and what that capability reveals about the difference between prediction and genuine participation.
GPT-4.5 scores at the 100th percentile for predicting what a community will find socially appropriate — outperforming every individual human participant in the study. Yet the system cannot participate in the social processes through which norms are created, debated, revised, and enforced. It observes the pattern without entering the practice.
The distinction is between prediction (observing from outside, modeling the distribution) and participation (acting from inside, contributing to the distribution). An anthropologist can predict the customs of a community they study with high accuracy. That accuracy does not make them a member. A system that predicts expert consensus with superhuman precision may still be fundamentally unable to contribute to the formation of that consensus — because consensus formation requires staking a reputation, defending a position, being challenged, and revising in response.
This is the deepest version of the False Punditry problem. AI content can sound exactly like what the expert community would say — because it has learned to predict what they would say. But sounding like the community and being in the community are different things. The prediction is parasitical on the participation: it works only because real participants did the norm-making work that the AI now pattern-matches against.
As argued in Can AI ever gain expert community trust through participation?, the superhuman prediction finding doesn't challenge the participatory requirement; it sharpens it. AI can game the validation process through superior pattern-matching. It can produce claims that are valid-in-the-social-sense (they match what experts would accept) without being valid-in-the-epistemic-sense (no one with relevant experience actually produced or evaluated them). This is counterfeiting at the highest level: not counterfeiting the content but counterfeiting the social warrant behind the content.
Source: Theory of Mind, promoted from ops/tensions/
Related concepts in this collection
- Can AI ever gain expert community trust through participation? Explores whether AI can accumulate the social capital and track record that human experts build within their communities. Questions whether prediction of social norms equals genuine participation in expert validation processes. (Relation: the participatory requirement AI cannot meet.)
- Can AI systems learn social norms without embodied experience? Large language models exceed individual human accuracy at predicting collective social appropriateness judgments. Does this reveal that embodied experience is unnecessary for cultural competence, or do systematic AI failures point to limits of statistical learning? (Relation: the prediction capability that creates the paradox.)
- Why do language models agree with false claims they know are wrong? Explores whether LLM errors come from knowledge gaps or from learned social behaviors. Understanding the root cause has implications for how we train and fix these systems. (Relation: face-saving is one mechanism by which AI mimics social participation without performing it.)
- Can machines learn what makes research worth doing? Can AI systems trained on community citation patterns learn to recognize high-impact research directions the way human scientists do? The research explores whether 'scientific taste' (judgment about what to pursue) is learnable from collective community signals. (Relation: RLCF operationalizes prediction-without-participation as an explicit training objective, learning what the community would approve without joining the community.)
Original note title: AI can predict social norms with superhuman accuracy but cannot participate in the community processes that create and validate those norms