Psychology and Social Cognition · Language Understanding and Pragmatics

Can AI predict social norms better than humans?

Explores whether language models can achieve superhuman accuracy at predicting what communities find socially appropriate, and what that capability reveals about the difference between prediction and genuine participation.

Note · 2026-03-31 · sourced from Theory of Mind
Why do LLMs excel at social norms yet fail at theory of mind?

GPT-4.5 scores at the 100th percentile for predicting what a community will find socially appropriate — outperforming every individual human participant in the study. Yet the system cannot participate in the social processes through which norms are created, debated, revised, and enforced. It observes the pattern without entering the practice.

The distinction is between prediction (observing from outside, modeling the distribution) and participation (acting from inside, contributing to the distribution). An anthropologist can predict the customs of a community they study with high accuracy. That accuracy does not make them a member. A system that predicts expert consensus with superhuman precision may still be fundamentally unable to contribute to the formation of that consensus — because consensus formation requires staking a reputation, defending a position, being challenged, and revising in response.

This is the deepest version of the False Punditry problem. AI content can sound exactly like what the expert community would say — because it has learned to predict what they would say. But sounding like the community and being in the community are different things. The prediction is parasitical on the participation: it works only because real participants did the norm-making work that the AI now pattern-matches against.

As argued in "Can AI ever gain expert community trust through participation?", the superhuman prediction finding doesn't challenge this claim; it sharpens it. AI can game the validation process through superior pattern-matching. It can produce claims that are valid-in-the-social-sense (they match what experts would accept) without being valid-in-the-epistemic-sense (no one with relevant experience actually produced or evaluated them). This is counterfeiting at the highest level: not counterfeiting the content but counterfeiting the social warrant behind the content.


Source: Theory of Mind, promoted from ops/tensions/

Original note title: AI can predict social norms with superhuman accuracy but cannot participate in the community processes that create and validate those norms