Psychology and Social Cognition · Conversational AI Systems

Can LLMs predict character choices from narrative context?

Explores whether language models can predict fictional character decisions when given rich personality profiles and retrieved narrative memories. This tests whether LLMs can model complex human motivation grounded in literary analysis.

Note · 2026-04-18 · sourced from Personas · Personality
How accurately can language models simulate human personalities? Why do AI conversations reliably break down after multiple turns?

Can LLMs predict how fictional characters will act at pivotal moments? The Character is Destiny paper (2024) constructs LIFECHOICE — a benchmark of 1,462 decision points from 388 novels, leveraging expert character analyses from literary scholars. The task: given the preceding narrative, predict which choice the character makes.

The architecture decomposes into two components. First, a character profile combining a static description (personality, experiences, values) with retrieved memories — specific passages from the preceding text. Second, a reasoning step using the profile to answer the decision question.
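The two-component decomposition can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: the function names and the `llm` callable are assumptions.

```python
# Hypothetical sketch of the two-component architecture: (1) assemble a
# character profile from a static description plus retrieved memories,
# (2) reason over that profile to answer the decision question.

def build_profile(static_description: str, retrieved_memories: list[str]) -> str:
    """Combine a static character description with retrieved narrative passages."""
    memories = "\n".join(f"- {m}" for m in retrieved_memories)
    return f"Character profile:\n{static_description}\n\nRelevant memories:\n{memories}"

def predict_choice(llm, profile: str, decision_question: str, options: list[str]) -> str:
    """Second component: use the profile to answer the decision question."""
    numbered = "\n".join(f"{i + 1}. {o}" for i, o in enumerate(options))
    prompt = (
        f"{profile}\n\nDecision point: {decision_question}\n"
        f"Options:\n{numbered}\nAnswer with the option number."
    )
    return llm(prompt)
```

Any text-in, text-out model call works as the `llm` argument; the point is the separation between profile construction and decision reasoning.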

Three methods for constructing descriptions reveal a hierarchy: expert-written descriptions (from Supersummary) outperform both hierarchical merging (summarize chunks, merge summaries iteratively) and incremental updating (summarize sequentially, refine). This suggests that literary expertise captures something about character motivation — the relationship between personality, values, and action — that automated summarization misses.
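The two automated baselines differ in how summaries are combined. A rough sketch, assuming a prompt-based summarizer `llm` (the prompts here are illustrative, not the paper's):

```python
# Hierarchical merging: summarize each chunk, then merge summaries pairwise
# until one description remains. Incremental updating: keep one running
# description and refine it chunk by chunk.

def hierarchical_merge(llm, chunks: list[str]) -> str:
    summaries = [llm(f"Summarize the character in:\n{c}") for c in chunks]
    while len(summaries) > 1:
        merged = []
        for i in range(0, len(summaries), 2):
            pair = "\n".join(summaries[i:i + 2])
            merged.append(llm(f"Merge these character summaries:\n{pair}"))
        summaries = merged
    return summaries[0]

def incremental_update(llm, chunks: list[str]) -> str:
    description = ""
    for c in chunks:
        description = llm(f"Current description:\n{description}\nRefine it using:\n{c}")
    return description
```

Both compress the narrative bottom-up; the finding above is that neither matches an expert's top-down model of what drives the character.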

The CHARMAP method adds persona-based memory retrieval: selecting narrative passages relevant to the character's psychological profile rather than just the decision context. This yields a 5.03% accuracy gain, indicating that who the character is determines which memories matter for predicting what they will do.
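The core retrieval move is to score passages against the persona embedding rather than the decision context. A minimal sketch of that idea, with `embed` standing in for any text-embedding function (an assumption, not the paper's component):

```python
# Persona-based memory retrieval: rank narrative passages by embedding
# similarity to the character's psychological profile.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def retrieve_memories(embed, persona: str, passages: list[str], k: int = 3) -> list[str]:
    """Return the k passages most similar to the persona profile."""
    query = embed(persona)
    ranked = sorted(passages, key=lambda p: cosine(embed(p), query), reverse=True)
    return ranked[:k]
```

Swapping the query from the decision context to the persona is the whole change; the rest is standard dense retrieval.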

Character-driven motivations decompose into: personality and traits, emotions and psychological state, social relationships, values and beliefs, and desires and goals. This taxonomy suggests that persona simulation for decision-making requires richer internal models than the demographic + preference approaches used in most LLM persona work.
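The five-way taxonomy maps naturally onto a structured persona record, richer than a demographics-plus-preferences profile. Field names below are illustrative, not taken from the paper:

```python
# A structured persona covering the five motivation categories:
# traits, emotional state, relationships, values, and goals.
from dataclasses import dataclass, field

@dataclass
class CharacterPersona:
    name: str
    personality_traits: list[str] = field(default_factory=list)
    emotional_state: str = ""
    social_relationships: dict[str, str] = field(default_factory=dict)
    values_and_beliefs: list[str] = field(default_factory=list)
    desires_and_goals: list[str] = field(default_factory=list)

    def to_prompt(self) -> str:
        """Render the persona as a prompt section."""
        rels = "; ".join(f"{k}: {v}" for k, v in self.social_relationships.items())
        return "\n".join([
            f"Name: {self.name}",
            f"Traits: {', '.join(self.personality_traits)}",
            f"Emotional state: {self.emotional_state}",
            f"Relationships: {rels}",
            f"Values: {', '.join(self.values_and_beliefs)}",
            f"Goals: {', '.join(self.desires_and_goals)}",
        ])
```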

The connection to the note "Why don't LLM role-playing agents act on their stated beliefs?" is instructive: when beliefs are extracted from rich narrative context rather than assigned through brief prompts, behavioral prediction improves.

Original note title

persona-driven memory retrieval from narrative text enables LLMs to predict character decisions — expert-written descriptions plus embedding retrieval outperform hierarchical summarization