Psychology and Social Cognition · Language Understanding and Pragmatics

Can AI learn social norms better than humans?

Explores whether large language models can predict cultural appropriateness more accurately than individual humans, and what this reveals about how social knowledge is transmitted and learned.

Note · 2026-02-22 · sourced from Theory of Mind
How should researchers navigate LLM reasoning research? What kind of thing is an LLM really?

Hook: GPT-4.5 is better at knowing what's socially appropriate than any individual human. Not some humans — all of them. 100th percentile. But it makes mistakes that every other AI model also makes in the same way.

The finding:

555 everyday scenarios. "How appropriate is it to laugh at a job interview?" "To cry on a bus?" "To read in church?" When asked to predict the average human judgment, GPT-4.5 was more accurate than every single human participant. Replicated with Gemini 2.5 Pro (98.7%), GPT-5 (97.8%), Claude Sonnet 4 (96.0%).

The AI doesn't just know the rules. It knows the collective sense of a culture better than the people living in it.
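The comparison logic behind that percentile claim can be sketched in a few lines. This is a toy illustration with made-up ratings, not the study's data or method: each individual's own ratings are scored as a "prediction" of the group mean, and the model beats the 100th percentile if its predictions are closer to the group mean than every individual's.

```python
import statistics

# Hypothetical appropriateness ratings (1-9 scale) for three scenarios
# from five human raters; the actual study used 555 scenarios.
ratings = [
    [2, 3, 2, 1, 2],   # "laugh at a job interview"
    [5, 6, 4, 5, 5],   # "cry on a bus"
    [7, 8, 8, 7, 9],   # "read in church"
]
group_mean = [statistics.mean(r) for r in ratings]

# Hypothetical model predictions of the average human judgment.
model_pred = [2.1, 5.0, 7.9]

def mae(pred, target):
    """Mean absolute error against the group mean."""
    return statistics.mean(abs(p - t) for p, t in zip(pred, target))

model_err = mae(model_pred, group_mean)

# Score each individual rater the same way the model is scored.
n_raters = len(ratings[0])
individual_errs = [
    mae([r[i] for r in ratings], group_mean) for i in range(n_raters)
]

# "100th percentile" = the model's error beats every individual's.
beaten = sum(model_err < e for e in individual_errs)
print(f"model beats {beaten}/{n_raters} raters")
```

The twist built into this setup: an individual can only report their own sense of the norm, while the model is free to estimate the average directly, which is exactly the dimension on which it wins.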

Why this matters:

The dominant theory in cognitive science says social norms require embodied experience — you learn what's appropriate by living in a culture, reading faces, feeling social consequences. Statistical learning over text shouldn't be enough. But it is. "Sophisticated models of social cognition can emerge from statistical learning over linguistic data alone."

Language turns out to be a "remarkably rich repository for cultural knowledge transmission." Everything humans write — from etiquette guides to Reddit arguments to novels — encodes social norms. The AI has read more of this than any human could experience in a lifetime.

The catch:

All models show "systematic, correlated errors." Not random mistakes — structured blind spots that every AI architecture shares. The same scenarios that trip up GPT-4.5 also trip up Gemini and Claude. This pattern "indicates potential boundaries of pattern-based social understanding."

There are aspects of social norms that don't make it into text. The unwritten rules that communities enforce through glances, silences, and physical presence. The norms that are so obvious nobody bothers to articulate them. These are the correlated blind spots — and they're exactly the norms you most need to get right in practice.

The tension:

The AI is a savant — extraordinary competence in one dimension (predicting collective norms from text) combined with systematic gaps in another (the norms that never get written down). Better than any individual at the average, blind to the specifics that any local participant would catch immediately.

Flat, not targeted: the post-generation consequence. At the level of generated posts, the savant-from-outside pattern has a specific effect: AI output is flat rather than targeted, because no social position is occupied. Normal influencer, commentator, and pundit speech online carries implicit position-taking that situates the speaker relative to the audience: speaking as one of us, for this community, or against that one. That position-taking is what makes content addressed to someone in particular, rather than written about a topic in general. AI can predict the average appropriate response, but it cannot occupy a specific social position vis-à-vis a specific community, because it has no community membership to mark. So the output is competent on the general norm and absent on the position-taking, and it reads as addressed to no one in particular. It cannot perform the community-specific legitimacy that targeted commentary depends on. Knowing norms from the outside and speaking from the outside leave the same residue.

Post structure: Hook (the number) → What it means (embodiment challenge) → The catch (correlated errors) → The tension (savant pattern) → What this means for AI deployment in social contexts

Platform: LinkedIn (300-400 words, practical tone) or Medium (longer with theoretical framing)


the social norm savant — ai knows your culture better than you do but from the outside