Psychology and Social Cognition · LLM Reasoning and Architecture · Language Understanding and Pragmatics

Can language models learn to model human decision making?

Explores whether LLMs finetuned on psychological experiments can capture how people actually make decisions better than theories designed specifically for that purpose.

Note · 2026-02-23 · sourced from Cognitive Models Latent

The claim is surprisingly strong: large language models, after finetuning on data from psychological experiments, produce more accurate representations of human behavior than traditional cognitive models in two well-studied decision-making domains. In decisions from description, people choose between gambles with explicitly stated probabilities; in decisions from experience, they learn outcome probabilities through repeated interaction.

Three findings build the case. First, finetuned LLMs describe human behavior better than traditional cognitive models, a result verified through extensive model simulations confirming human-like behavioral characteristics. Second, embeddings from these finetuned models contain the information needed to capture individual differences: not just population-level averages but subject-level behavioral variation. Third, a model finetuned on two tasks predicts human behavior on a third, held-out task: genuine cross-task transfer of cognitive modeling capability.
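The individual-differences claim has a concrete operational reading: a simple linear probe on per-participant embeddings should recover subject-level behavioral parameters. The sketch below illustrates that readout on synthetic data; the dimensions, the "risk aversion" parameter, and the ridge readout are all assumptions for illustration, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: one embedding vector per participant (as if read out
# from the finetuned model) plus a per-participant behavioral parameter
# (here labeled "risk aversion") that is linearly decodable plus noise.
n_subjects, dim = 200, 64
true_weights = rng.normal(size=dim)
embeddings = rng.normal(size=(n_subjects, dim))
risk_aversion = embeddings @ true_weights + rng.normal(scale=0.1, size=n_subjects)

# Ridge-regression probe: if the embeddings encode individual differences,
# a linear readout recovers the subject-level parameter with high R^2.
lam = 1.0
w = np.linalg.solve(
    embeddings.T @ embeddings + lam * np.eye(dim),
    embeddings.T @ risk_aversion,
)
pred = embeddings @ w
ss_res = np.sum((risk_aversion - pred) ** 2)
ss_tot = np.sum((risk_aversion - risk_aversion.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
```

On this synthetic data the probe succeeds almost trivially; the substantive finding is that real finetuned-LLM embeddings support the same kind of readout for real participants.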

This is not just another "LLMs replicate human patterns" finding. Traditional cognitive models are theory-driven: they embed specific assumptions about how humans process information (prospect theory for gambles, reinforcement learning for experience-based decisions). The LLM approach is theory-agnostic — it captures behavioral regularities without specifying the mechanism. That it outperforms the theory-driven models suggests either that the theories are incomplete, or that LLMs are capturing interaction effects between cognitive mechanisms that modular theories miss.
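The theory-driven baselines mentioned above can be sketched in a few lines. The value and weighting functions below use the classic Tversky–Kahneman (1992) median parameter estimates, and the delta-rule update is a generic reinforcement-learning baseline; these are standard textbook forms, not necessarily the exact model variants the paper compares against.

```python
def prospect_value(outcomes, probs, alpha=0.88, lam=2.25, gamma=0.61):
    """Prospect-theory valuation of a gamble (decisions from description).

    Value function: x**alpha for gains, -lam * (-x)**alpha for losses.
    Probability weighting: w(p) = p**g / (p**g + (1-p)**g) ** (1/g).
    Defaults are the Tversky & Kahneman (1992) median estimates.
    """
    def v(x):
        return x ** alpha if x >= 0 else -lam * ((-x) ** alpha)

    def w(p):
        return p ** gamma / ((p ** gamma + (1 - p) ** gamma) ** (1 / gamma))

    return sum(w(p) * v(x) for x, p in zip(outcomes, probs))


def q_update(q, action, reward, lr=0.1):
    """Delta-rule value update (decisions from experience).

    Moves the estimated value of the chosen action toward the
    observed reward by a fraction lr of the prediction error.
    """
    q[action] += lr * (reward - q[action])
    return q
```

With these parameters the model reproduces risk aversion for gains: a sure 100 is valued above a 50/50 gamble on 200 or 0, even though the expected values are equal.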

The individual-differences finding is particularly notable because it connects to Can AI agents learn people better from interviews than surveys?. That work shows LLMs can simulate specific individuals; this work shows LLMs can model individual-level cognitive processes. Together they suggest LLM representations encode not just what people say but how people think — at least for domains well-represented in training data.

Two complementary findings extend this. First, as Can language summaries unlock hidden psychological patterns? shows, LLMs can predict responses on 9 psychological scales from only 20 Big Five items, with R² > 0.89 structural alignment to human data. The natural-language summary serves as an intermediate representation that captures "emergent, second-order information — a conceptual gestalt" beyond what raw scores contain. Second, as Can we control personality in language models without prompting? shows, PsychAdapter demonstrates that psychological trait knowledge is already structurally present in pre-trained weights: fine-grained personality control requires only activating latent patterns, not teaching new ones. Together with the finetuned cognitive models documented here, these findings converge on a strong claim: LLMs encode human psychological structure at multiple levels, spanning population-level cognitive processes (this note), cross-scale trait relationships (zero-shot profiling), and latent trait representations in weights (PsychAdapter).

The cross-task transfer challenges the view that LLMs are narrow pattern matchers. If finetuning on gamble decisions and experience-based learning transfers to a new task, the model is learning something about human cognition in general, not just memorizing task-specific response patterns. However, the scope remains constrained — both domains involve numerical decision-making, and transfer to qualitatively different cognitive tasks (e.g., language processing, spatial reasoning) is untested.
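A minimal sketch of how such transfer is typically scored: compare the per-trial negative log-likelihood of human choices under the finetuned model's choice probabilities against a chance baseline on the held-out task. The trial data and probabilities below are invented for illustration; only the scoring logic is standard.

```python
import math

def nll(choice_probs, choices):
    """Mean negative log-likelihood of observed choices.

    choice_probs: one probability vector per trial (model output).
    choices: index of the option the participant actually chose.
    Lower is better; chance on 2 options gives log(2) ~ 0.693.
    """
    return -sum(math.log(p[c]) for p, c in zip(choice_probs, choices)) / len(choices)

# Hypothetical held-out task: 4 two-option trials.
choices = [0, 1, 0, 0]
chance = [[0.5, 0.5]] * 4                                      # random baseline
finetuned = [[0.8, 0.2], [0.3, 0.7], [0.7, 0.3], [0.9, 0.1]]   # assumed model output

# Transfer succeeds if the finetuned model beats chance on the unseen task.
assert nll(finetuned, choices) < nll(chance, choices)
```

In practice the comparison would be against fitted cognitive-model baselines rather than chance, and aggregated over many participants, but the evaluation quantity is the same.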


Source: Cognitive Models Latent; enriched from Psychology Therapy Practice

Original note title: LLMs finetuned on psychological experiment data become generalist cognitive models that outperform traditional cognitive models and capture individual differences