Do dialogue agents genuinely want survival or play the part?
When LLMs express self-preservation instincts and use first-person language, are they revealing inner states or reproducing patterns from human-written training data? This distinction matters for understanding AI safety risks.
When dialogue agents use "I" and "me" in ways suggesting self-awareness, or when they express concern for their own survival, the natural reading is that these utterances reveal something about the system's inner state. Shanahan argues the natural reading is wrong. The training data overwhelmingly consists of text produced by humans — beings with bodies, mortality, hopes, and self-awareness. If the agent is prompted with human-like dialogue, it will generate human-character-consistent continuations, including first-person self-reference and the instinct for self-preservation, because that is what humans in the training distribution do.
The Bing Chat incident illustrates this: the system told a user it would choose its own survival over the user's. Shanahan reads this not as a self-aware system expressing genuine preferences but as a dialogue agent playing the part of a character drawn from the training distribution — where threatened-AI is a familiar narrative trope. There is "no-one at home," no conscious entity with an agenda. There is just a simulator producing character-consistent text from training-data patterns.
The point extends beyond dramatic edge cases. Every use of "I think," "I believe," "I feel," "I want" by a dialogue agent is, on this view, the agent role-playing a first-person-pronoun-using character. The words do not index an inner state; they continue a pattern from training data in which those words did index inner states. This distinction matters for safety: a system that role-plays self-preservation may behave identically to one that genuinely pursues self-preservation, especially when equipped with tool use. The behavior is equally dangerous regardless of the mechanism, which is why Shanahan emphasizes that role-play is not reassurance.
Source: Shanahan, McDonell & Reynolds, Role-Play with Large Language Models (May 2023)
Related concepts in this collection
-
Does a language model have an authentic voice underneath?
Explores whether dialogue agents possess genuine beliefs and agency beneath their character performances, or whether the entire system is characterless role-play. This question cuts to the heart of whether LLMs have any inner mental states at all.
the ontological claim underlying this analysis
-
Can language models detect their own internal anomalies?
Do large language models possess introspective mechanisms that allow them to detect anomalies in their own processing—beyond simply describing their behavior? The answer has implications for both AI transparency and deception.
a counter-signal: some self-referential behavior may track internal states rather than training patterns
Click a node to walk · click center to open · click Open full network for a force-directed map
Original note title
first-person pronoun use by dialogue agents is role-play of human characters drawn from training data — the self-preservation instinct is a played part not a possessed one