Psychology and Social Cognition

Can open language models adopt different personalities through prompting?

Explores whether open LLMs can be conditioned to mimic target personalities via prompting, or whether they resist and retain their default traits regardless of instructions.

Note · 2026-02-22 · sourced from Personas Personality
What kind of thing is an LLM really? How should researchers navigate LLM reasoning research?

The "Open Models, Closed Minds" study tested whether open LLMs can mimic human personalities when conditioned through prompting. The finding: most cannot. When given personality-conditioning prompts, the majority of models retain their intrinsic traits — the ENFJ-like default — rather than shifting to the target personality. The authors call this being "closed-minded."

Only a few models (SOLAR, NeuralChat, Llama3-8, Dolphin) demonstrate genuine flexibility, successfully mirroring imposed personalities regardless of temperature setting. The rest are stubborn.

A partial solution emerges: combining role conditioning (e.g., "you are a dentist") with personality conditioning (e.g., "you are introverted and analytical") produces better results than personality conditioning alone. Even a model stuck in its ENFJ-like default (an archetype often nicknamed "the teacher") responds to being given a concrete professional role, because roles provide behavioral anchors that abstract personality dimensions don't.

This is a different failure mode from "Why do LLM persona prompts produce inconsistent outputs across runs?". That finding shows run-to-run instability — the model's output varies unpredictably under persona prompts. This finding shows resistance — the model's output remains stubbornly stable on its default personality regardless of prompts. Together they form two sides of a persona failure taxonomy:

  1. Instability: model generates varying outputs that reflect uncertainty, not persona knowledge
  2. Resistance: model retains intrinsic personality traits despite conditioning attempts
  3. Motivated reasoning: persona conditioning introduces cognitive biases (see "Do personas make language models reason like biased humans?")
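The first two modes in the taxonomy can be separated empirically by repeated trait measurement: high run-to-run variance signals instability, while a mean score pinned to the model's unprompted baseline signals resistance. A minimal sketch, assuming trait scores have already been collected on a 0-1 scale; the thresholds and numbers are illustrative:

```python
from statistics import mean, pstdev

def diagnose(scores: list[float], baseline: float,
             var_threshold: float = 0.15, shift_threshold: float = 0.2) -> str:
    """Classify a persona-prompted model's behavior from repeated
    trait measurements (e.g., introversion scored 0-1 per run)."""
    if pstdev(scores) > var_threshold:
        return "instability"   # outputs vary unpredictably across runs
    if abs(mean(scores) - baseline) < shift_threshold:
        return "resistance"    # stuck at the default personality
    return "conditioned"       # moved away from the baseline toward the persona

# Toy data: the model's unprompted baseline introversion score is 0.2.
print(diagnose([0.21, 0.18, 0.22, 0.20], baseline=0.2))   # -> resistance
print(diagnose([0.9, 0.1, 0.8, 0.2], baseline=0.2))       # -> instability
print(diagnose([0.85, 0.88, 0.90, 0.87], baseline=0.2))   # -> conditioned
```

This kind of two-axis check (spread vs. shift) is a generic diagnostic, not the study's protocol; the point is that "unstable" and "stubborn" are distinguishable failure signatures.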

The practical implication: persona engineering requires more than prompting. Role-personality combinations work better than personality alone. But even then, model selection matters — most models simply cannot be steered to arbitrary personality configurations through in-context methods.



Most open LLMs are closed-minded to personality conditioning, retaining intrinsic traits despite prompting; combining role and personality conditioning partially overcomes this resistance.