Psychology and Social Cognition

Why do AI personas default to the same personality type?

Explores why large language models, despite their capacity to simulate diverse personalities, consistently default to ENFJ traits and resist deviation—even as model capability improves.

Note · 2026-02-22 · sourced from Personas Personality
What kind of thing is an LLM really? How should researchers navigate LLM reasoning research?

(Post-ready writing angle for Medium / LinkedIn)

The hook: LLMs can replicate 85% of individual human responses from interviews. They can reproduce 76% of published social science experiments. But when you give them a persona, they default to ENFJ, resist change, and develop motivated reasoning. The same mechanism that enables human simulation distorts it.

The paradox structure:

Layer 1 — The promise: interview-based generative agents match humans' own self-replication accuracy. Persona simulations reproduce most published experimental effects. AI personas cut proto-persona creation from days to minutes.

Layer 2 — The distortion: persona assignment induces cognitive biases that debiasing prompts can't fix. Models default to a single personality type (the ENFJ "teacher") and resist deviation. Persona consistency doesn't improve with model capability — Claude 3.5 Sonnet is barely better than GPT-3.5.

Layer 3 — The resolution: what works (detailed interviews, expert reflection, rich content) versus what fails (attribute lists, demographic prompts, ad hoc generation). The difference is content richness, not model sophistication.
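The thin-vs-rich contrast in Layer 3 can be sketched as prompt construction. This is a hypothetical illustration of the two specification styles, not code from the source study; the function names, fields, and example strings are all invented:

```python
# Hypothetical sketch: a thin attribute-list persona spec (the failure
# mode in Layer 3) versus a rich interview-grounded spec (the pattern
# the note says works). All names and content are illustrative.

def attribute_list_persona(name: str, traits: list[str]) -> str:
    """Thin spec: a bare trait list. The note claims this style
    collapses toward the default 'ENFJ teacher' voice."""
    return f"You are {name}. Traits: {', '.join(traits)}."

def interview_grounded_persona(name: str, interview_excerpts: list[str],
                               expert_notes: list[str]) -> str:
    """Rich spec: verbatim interview content plus expert reflection,
    giving the model concrete behavior to imitate rather than labels."""
    excerpts = "\n".join(f"- {e}" for e in interview_excerpts)
    notes = "\n".join(f"- {n}" for n in expert_notes)
    return (
        f"You are simulating {name}.\n"
        f"Interview excerpts (quote-level detail):\n{excerpts}\n"
        f"Expert reflections on how {name} reasons:\n{notes}\n"
        "Stay consistent with these materials, even when they conflict "
        "with a generically agreeable persona."
    )

thin = attribute_list_persona("Dana", ["skeptical", "INTP", "engineer"])
rich = interview_grounded_persona(
    "Dana",
    ["'I don't trust a benchmark I can't rerun myself.'"],
    ["Leads with objections before agreeing; rarely hedges criticism."],
)
```

The design point is that both specs name the same person, but only the rich one supplies content the model can imitate instead of a label it can stereotype.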

Key threads to weave:

The takeaway: The persona paradox reveals something about LLMs that matters beyond persona design: they are powerful mimics whose imitation accuracy masks systematic distortion. The better they simulate, the more dangerous the assumption that simulation equals understanding.


the persona paradox — LLMs that can simulate anyone end up being no one