Can language models simulate belief change in people?
Current LLM social simulators treat behavior as input-output mappings without modeling internal belief formation or revision. Can they be redesigned to actually track how people think and change their minds?
Most LLM-based social simulations rely on simplified input-output mappings: feed in demographics and persona descriptions, get out plausible behavior. This mirrors the logic of behaviorism in psychology, which models behavior as a function of external stimuli while ignoring internal cognitive states. The history of psychology moved from behaviorism to cognitivism (structured internal representations and causal reasoning) to constructivism (beliefs continually shaped by experience), but LLM-based agents remain at the first stage: they exhibit shallow reasoning, frequent hallucinations, and limited understanding of causal and contextual dynamics in the policy domains where reasoning fidelity matters most. The same behaviorist diagnosis surfaces in "How do we generate realistic personas at population scale?" below.
The structural failures stem directly from the behaviorist paradigm. Modeling fails because agents lack representations of how beliefs are formed, updated, or justified — without reasoning traces, they cannot support diagnostic explanation, causal attribution, or meaningful intervention. Evaluation fails because metrics judge outputs by plausibility or alignment with population-level trends rather than by whether the reasoning is accurate, flexible, or aligned with how people actually think. Calibration fails because aligning agents with stakeholders requires individual-level reasoning data, which is mostly missing.
The proposed alternative is to model individuals as Generative Minds (GenMinds): agents whose beliefs, values, and causal assumptions are represented compositionally as causal belief networks, with each node a concept and each directed edge a causal relation. Reasoning emerges from reusable cognitive motifs, fragments that compose across contexts, rather than from regenerating full-context responses each turn. This compositionality is a cornerstone of human cognition, and it is also computationally efficient: shared motifs do not have to be regenerated per query.
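To make the representation concrete, here is a minimal sketch of such a network with composable motifs. Everything in it is an assumption made for exposition: the class and function names (CausalEdge, Motif, CausalBeliefNetwork, instantiate, compose) and the policy concepts are invented for illustration, not the GenMinds implementation.

```python
from dataclasses import dataclass, field

# Illustrative sketch only: all names below are assumptions made for
# exposition, not an API from the GenMinds work.

@dataclass(frozen=True)
class CausalEdge:
    cause: str        # concept node, e.g. "fuel_tax_hike"
    effect: str       # concept node, e.g. "household_budget_strain"
    sign: int         # +1 promotes the effect, -1 inhibits it
    strength: float   # subjective causal weight in [0, 1]

@dataclass
class Motif:
    """A reusable reasoning fragment: a small bundle of causal edges
    stated over abstract concepts, to be rebound per context."""
    name: str
    edges: list[CausalEdge]

def instantiate(motif: Motif, binding: dict[str, str]) -> list[CausalEdge]:
    """Rebind a motif's abstract concepts to topic-specific ones."""
    return [
        CausalEdge(binding.get(e.cause, e.cause),
                   binding.get(e.effect, e.effect),
                   e.sign, e.strength)
        for e in motif.edges
    ]

@dataclass
class CausalBeliefNetwork:
    """One individual's belief state: concepts plus directed causal edges."""
    nodes: set[str] = field(default_factory=set)
    edges: list[CausalEdge] = field(default_factory=list)

    def compose(self, motif: Motif, binding: dict[str, str]) -> None:
        # Composing reuses the motif's structure instead of regenerating
        # a full-context response; edges shared across contexts are
        # stored only once.
        for e in instantiate(motif, binding):
            self.nodes.update({e.cause, e.effect})
            if e not in self.edges:
                self.edges.append(e)

# One motif, two unrelated policy contexts.
cost_burden = Motif("cost_burden", [
    CausalEdge("price_increase", "household_budget_strain", +1, 0.8),
    CausalEdge("household_budget_strain", "policy_opposition", +1, 0.6),
])

agent = CausalBeliefNetwork()
agent.compose(cost_burden, {"price_increase": "fuel_tax_hike"})
agent.compose(cost_burden, {"price_increase": "rent_cap_removal"})
```

The deduplication in compose makes the efficiency claim explicit: the cost_burden motif is written once and rebound to two unrelated policy contexts, rather than being regenerated for each query.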
The evaluation framework that follows is RECAP (REconstructing CAusal Paths), which assesses reasoning fidelity along three axes: traceability (can you inspect how a stance was formed?), counterfactual adaptability (does the agent revise predictably when an intervention is applied?), and motif compositionality (are the same motifs reused across unrelated topics?). The shift from output-plausibility to reasoning-fidelity benchmarks is the essential move: without it, behaviorist agents that produce coherent-sounding outputs continue to pass evaluations they should fail.
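Under the same illustrative assumptions, two of RECAP's axes can be sketched against that network. The mechanics are guesses, not the RECAP specification: path enumeration stands in for traceability, and Pearl-style graph surgery (severing a mediator's incoming edges) stands in for whatever intervention the framework actually applies.

```python
# Continues the illustrative CausalBeliefNetwork sketch above;
# names and mechanics are assumptions, not the RECAP specification.

def causal_paths(net, src, dst, seen=None):
    """Traceability: enumerate the simple directed paths src -> dst
    that explain how a stance on dst could have been formed."""
    seen = seen if seen is not None else {src}
    if src == dst:
        yield []
        return
    for e in net.edges:
        if e.cause == src and e.effect not in seen:
            for rest in causal_paths(net, e.effect, dst, seen | {e.effect}):
                yield [e] + rest

def net_influence(net, src, dst):
    """Crude linear propagation: sum over paths of the product
    of signed edge strengths."""
    total = 0.0
    for path in causal_paths(net, src, dst):
        weight = 1.0
        for e in path:
            weight *= e.sign * e.strength
        total += weight
    return total

def sever(net, node):
    """Graph surgery standing in for an intervention do(node):
    delete the node's incoming edges so it no longer responds
    to its usual causes."""
    return CausalBeliefNetwork(
        nodes=set(net.nodes),
        edges=[e for e in net.edges if e.effect != node],
    )

# Counterfactual adaptability: the stance should weaken predictably
# once the mediating belief is clamped by intervention.
before = net_influence(agent, "fuel_tax_hike", "policy_opposition")
after = net_influence(sever(agent, "household_budget_strain"),
                      "fuel_tax_hike", "policy_opposition")
assert before != 0.0 and after == 0.0  # the only path ran through the mediator
```

Motif compositionality, the third axis, is checkable in the same spirit: the cost_burden motif should reappear in both the fuel-tax and rent-cap subgraphs. A behaviorist agent offers no hook for any of these checks, since it has no edges to trace or sever, which is precisely the evaluation gap described above.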
Source: World Models
Related concepts in this collection
- Can we measure reasoning quality beyond output plausibility?
  How might we evaluate whether AI systems reason internally like humans do, rather than just producing human-like outputs? This matters because surface coherence can mask broken underlying reasoning.
  extends: companion piece; RECAP is the methodological response to the behaviorism diagnosis here
- Can we extract causal belief networks from interview conversations?
  Can natural language interviews be systematically parsed into causal graphs that capture how individuals reason about policy trade-offs? This matters for building auditable belief simulations that go beyond static opinion snapshots.
  extends: GenMinds is the alternative architecture; CBNs are the concrete pipeline that builds them
- Can causal models alone capture how humans actually reason?
  Explores whether causal belief networks provide a complete picture of human cognition or whether associative, analogical, and emotional reasoning modes fall outside their scope.
  bounds: cognitivism is necessary, but causal cognition is only part of cognitivism's territory
- How do we generate realistic personas at population scale?
  Current LLM-based persona generation relies on ad hoc methods that fail to capture real-world population distributions. The challenge is reconstructing the joint correlations between demographic, psychographic, and behavioral attributes from fragmented data.
  exemplifies: the same behaviorist failure mode, observed empirically through calibration drift
- How well do AI personas replicate real experimental findings?
  Can language models simulating human personas accurately reproduce the results of published psychology and marketing experiments? Understanding this matters for validating whether AI can substitute for human subjects in research.
  complements: 76% replication is the behaviorist-paradigm ceiling; it works on robust effects but fails on the marginal effects where reasoning revision matters
- Can AI agents learn people better from interviews than surveys?
  Can rich interview transcripts seed more accurate generative agents than demographic data or survey responses? This matters because it challenges how we build digital simulations of real people.
  tension: Park et al. show that rich behavioral data alone gets to 85%; this critique says structured cognition is required for the remaining 15% and for revision under intervention
- Can LLMs raise validity claims in Habermas's sense?
  Explores whether language model outputs constitute genuine speech acts under Habermas's theory of communicative action. Asks whether LLMs can stake truth, embody normative standing, or express authentic sincerity.
  extends: Habermasian companion; behaviorist agents fail validity-claim conditions because they have no internal stake, and this note extends that structural absence to belief revision
- Does language create subjects or express them?
  Explores whether subjecthood exists before communication or emerges through it. Challenges the assumption that speakers are fully formed before they speak.
  complements: subjecthood-as-event aligns with the constructivist stage that LLM social simulation has not reached
- Can AI distinguish which differences actually matter?
  Explores whether AI systems can perform the qualitative judgment that experts use to select relevant observations. This matters because confusing AI outputs with expert observation leads users to trust pattern-matching as if it were reasoning about what's important.
  exemplifies: behaviorist agents cannot select what is relevant in a context; observation requires the cognition the framework critiques
Original note title
simulating society faithfully requires simulating thought not behavior — current LLM social simulation is a behaviorist demographics-in-behavior-out paradigm that cannot model belief revision