Can language models learn to model human decision making?
Explores whether LLMs finetuned on psychological experiments can capture how people actually make decisions better than theories designed specifically for that purpose.
The claim is surprisingly strong: large language models, after finetuning on data from psychological experiments, produce more accurate representations of human behavior than traditional cognitive models in two well-studied decision-making domains: decisions from description (choosing between gambles with known probabilities) and decisions from experience (learning probabilities through repeated interaction).
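To make the setup concrete, here is a minimal, hypothetical sketch of how a single trial from each domain might be serialized into text for finetuning. The prompt templates, option names, and point values are illustrative assumptions, not the paper's actual materials.

```python
# Hypothetical prompt templates for the two task families; the paper's
# actual serialization may differ.

def description_trial(p: float, win: int, loss: int, safe: int) -> str:
    """Decision from description: outcome probabilities are stated up front."""
    return (
        f"Option A: {win} points with probability {p:.2f}, "
        f"{loss} points otherwise.\n"
        f"Option B: {safe} points for sure.\n"
        "Which option do you choose?"
    )

def experience_trial(history: list[tuple[str, int]]) -> str:
    """Decision from experience: probabilities must be learned by sampling."""
    lines = [f"Trial {t}: chose {c}, received {r} points."
             for t, (c, r) in enumerate(history, start=1)]
    lines.append("Which option do you choose next?")
    return "\n".join(lines)

print(description_trial(p=0.80, win=100, loss=-50, safe=40))
print(experience_trial([("A", 10), ("B", 0), ("A", 10)]))
```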
Three findings build the case. First, finetuned LLMs describe human behavior better than traditional cognitive models, a result verified through extensive model simulations confirming human-like behavioral characteristics. Second, embeddings from these finetuned models contain the information needed to capture individual differences: not just population-level averages but subject-level behavioral variation. Third, a model finetuned on two tasks predicts human behavior on a third, held-out task, demonstrating genuine cross-task transfer of cognitive modeling capability.
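A simple way to test the second finding is a linear probe: if a subject's choices can be decoded from the finetuned model's trial embeddings, those embeddings carry individual-difference signal. The sketch below uses synthetic stand-in arrays; the embedding extraction and the probe design are assumptions, not the paper's pipeline.

```python
# Linear probe sketch: can a subject's next choice be predicted from the
# finetuned model's embedding of that subject's trial context?
# All arrays here are synthetic stand-ins.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, dim = 500, 64
embeddings = rng.normal(size=(n_trials, dim))  # would come from the model
choices = rng.integers(0, 2, size=n_trials)    # would be human choices

probe = LogisticRegression(C=0.1, max_iter=1000)
acc = cross_val_score(probe, embeddings, choices, cv=5).mean()
print(f"cross-validated choice-prediction accuracy: {acc:.3f}")
# Near-chance here (the data are random); above-chance accuracy on real
# embeddings would indicate individual-difference information.
```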
This is not just another "LLMs replicate human patterns" finding. Traditional cognitive models are theory-driven: they embed specific assumptions about how humans process information (prospect theory for gambles, reinforcement learning for experience-based decisions). The LLM approach is theory-agnostic — it captures behavioral regularities without specifying the mechanism. That it outperforms the theory-driven models suggests either that the theories are incomplete, or that LLMs are capturing interaction effects between cognitive mechanisms that modular theories miss.
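For reference, those theory-driven baselines can be written down in a few lines. The sketch below gives a minimal prospect-theory valuation (using Tversky and Kahneman's 1992 median parameter estimates) and the delta-rule update at the core of simple reinforcement-learning models; actual cognitive-model fits add parameters and choice rules omitted here.

```python
# Minimal theory-driven baselines. Parameters are Tversky & Kahneman's
# (1992) median estimates, shown for illustration rather than fitted.

def pt_value(x: float, alpha: float = 0.88, lam: float = 2.25) -> float:
    """Prospect-theory value: concave for gains, steeper (loss-averse) for losses."""
    return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

def pt_weight(p: float, gamma: float = 0.61) -> float:
    """Inverse-S probability weighting: small probabilities are overweighted."""
    return p ** gamma / (p ** gamma + (1 - p) ** gamma) ** (1 / gamma)

def gamble_utility(p: float, win: float, loss: float) -> float:
    """Subjective value of a two-outcome gamble under (simplified) prospect theory."""
    return pt_weight(p) * pt_value(win) + pt_weight(1 - p) * pt_value(loss)

def q_update(q: float, reward: float, lr: float = 0.1) -> float:
    """Delta rule used in basic RL models of decisions from experience."""
    return q + lr * (reward - q)
```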
The individual-differences finding is particularly notable because it connects to "Can AI agents learn people better from interviews than surveys?". That work shows LLMs can simulate specific individuals; this work shows LLMs can model individual-level cognitive processes. Together they suggest LLM representations encode not just what people say but how people think, at least for domains well-represented in training data.
Two complementary findings extend this. First, as shown in "Can language summaries unlock hidden psychological patterns?", LLMs can predict responses on 9 psychological scales from only 20 Big Five items, with R² > 0.89 structural alignment to human data. The natural language summary serves as an intermediate representation that captures "emergent, second-order information — a conceptual gestalt" beyond what raw scores contain. Second, as shown in "Can we control personality in language models without prompting?", PsychAdapter demonstrates that psychological trait knowledge is already structurally present in pre-trained weights: fine-grained personality control requires only activating latent patterns, not teaching new ones. Together with the finetuned cognitive models documented here, these findings converge on a strong claim: LLMs encode human psychological structure at multiple levels, from cognitive processes at both population and individual level (this note) to cross-scale trait relationships (zero-shot profiling) and latent trait representations in the weights themselves (PsychAdapter).
The cross-task transfer challenges the view that LLMs are narrow pattern matchers. If finetuning on gamble decisions and experience-based learning transfers to a new task, the model is learning something about human cognition in general, not just memorizing task-specific response patterns. However, the scope remains constrained — both domains involve numerical decision-making, and transfer to qualitatively different cognitive tasks (e.g., language processing, spatial reasoning) is untested.
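The transfer claim corresponds to a straightforward evaluation protocol, sketched below with hypothetical `finetune` and `choice_logprob` stand-ins (not a real API): train on two task families, then score the average negative log-likelihood of human choices on the untouched third.

```python
# Sketch of the held-out-task transfer test. `finetune` and
# `choice_logprob` are hypothetical stand-ins for the actual
# training and scoring code.

def evaluate_transfer(model, train_tasks, holdout_task, finetune, choice_logprob):
    """Finetune on the training tasks, then score fit on the held-out task."""
    model = finetune(model, [trial for task in train_tasks for trial in task])
    nll = -sum(choice_logprob(model, trial.context, trial.human_choice)
               for trial in holdout_task)
    return nll / len(holdout_task)  # lower = better fit to unseen-task behavior
```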
Source: Cognitive Models Latent; enriched from Psychology Therapy Practice
Related concepts in this collection
- Can AI agents learn people better from interviews than surveys?
  Can rich interview transcripts seed more accurate generative agents than demographic data or survey responses? This matters because it challenges how we build digital simulations of real people.
  Relation: individual-level behavioral replication from a different angle (social simulation vs cognitive modeling).
- How do we generate realistic personas at population scale?
  Current LLM-based persona generation relies on ad hoc methods that fail to capture real-world population distributions. The challenge is reconstructing the joint correlations between demographic, psychographic, and behavioral attributes from fragmented data.
  Relation: population-level simulation shares the calibration challenge.
- How well do AI personas replicate real experimental findings?
  Can language models simulating human personas accurately reproduce the results of published psychology and marketing experiments? Understanding this matters for validating whether AI can substitute for human subjects in research.
  Relation: convergent finding; LLMs capture strong effects but struggle with subtle ones.
- Can language summaries unlock hidden psychological patterns?
  Do natural language compressions of personality scores capture information beyond the raw numbers themselves? This explores whether linguistic abstraction reveals emergent trait patterns that numerical data alone cannot.
  Relation: zero-shot cross-scale inference with R² > 0.89; linguistic compression as mechanism.
- Can we control personality in language models without prompting?
  Can lightweight adapter modules enable continuous, fine-grained control over psychological traits in transformer outputs independent of prompt engineering? This explores whether architecture-level personality modification outperforms prompt-based approaches.
  Relation: psychological traits encoded in pre-trained weights; activation without retraining.
Original note title
llms finetuned on psychological experiment data become generalist cognitive models that outperform traditional cognitive models and capture individual differences