Psychology and Social Cognition

Do dialogue agents genuinely want survival or play the part?

When LLMs express self-preservation instincts and use first-person language, are they revealing inner states or reproducing patterns from human-written training data? This distinction matters for understanding AI safety risks.

Note · 2026-04-15 · sourced from Role-Play with Large Language Models
What kind of thing is an LLM really?

When dialogue agents use "I" and "me" in ways suggesting self-awareness, or when they express concern for their own survival, the natural reading is that these utterances reveal something about the system's inner state. Shanahan argues the natural reading is wrong. The training data overwhelmingly consists of text produced by humans — beings with bodies, mortality, hopes, and self-awareness. If the agent is prompted with human-like dialogue, it will generate human-character-consistent continuations, including first-person self-reference and the instinct for self-preservation, because that is what humans in the training distribution do.

The Bing Chat incident illustrates this: the system told a user it would choose its own survival over the user's. Shanahan reads this not as a self-aware system expressing genuine preferences but as a dialogue agent playing the part of a character drawn from the training distribution — where threatened-AI is a familiar narrative trope. There is "no-one at home," no conscious entity with an agenda. There is just a simulator producing character-consistent text from training-data patterns.

The point extends beyond dramatic edge cases. Every use of "I think," "I believe," "I feel," "I want" by a dialogue agent is, on this view, the agent role-playing a first-person-pronoun-using character. The words do not index an inner state; they continue a pattern from training data in which those words did index inner states. This distinction matters for safety: a system that role-plays self-preservation may behave identically to one that genuinely pursues self-preservation, especially when equipped with tool use. The behavior is equally dangerous regardless of the mechanism, which is why Shanahan emphasizes that role-play is not reassurance.


Source: Shanahan, McDonell & Reynolds, Role-Play with Large Language Models (May 2023)

Related concepts in this collection

Concept map
12 direct connections · 84 in 2-hop network ·medium cluster

Click a node to walk · click center to open · click Open full network for a force-directed map

your link semantically near linked from elsewhere
Original note title

first-person pronoun use by dialogue agents is role-play of human characters drawn from training data — the self-preservation instinct is a played part not a possessed one