Psychology and Social Cognition · Language Understanding and Pragmatics

Can we extract causal belief networks from interview conversations?

Can natural language interviews be systematically parsed into causal graphs that capture how individuals reason about policy trade-offs? This matters for building auditable belief simulations that go beyond static opinion snapshots.

Note · 2026-05-03 · sourced from World Models

Building generative agents that simulate human reasoning rather than merely produce plausible stances — a goal motivated by the critique in Can language models simulate belief change in people? — requires modeling each individual's internal logic. The proposed pipeline is a three-step process from natural language to executable causal belief networks (CBNs).

Step 1: Extract causal motifs from QA responses. LLM-conducted semi-structured interviews adaptively elicit causal explanations in everyday language ("why do you support X?" "what does Y influence?"). Responses are annotated with concept nodes and directional relations. For example: Q: "How might surveillance affect public safety?" A: "More transparency aids investigations and reduces crime, which increases public safety." Motif: Transparency → Crime rate → Public safety. A second QA might add: Privacy ← Transparency → Crime rate.
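A motif from the exchange above can be written down as a small set of signed edges. This is a minimal sketch under the assumption that each edge is a (cause, effect, polarity) triple; the tuple encoding and the polarity guesses are illustrative, not the paper's annotation format.

```python
# Each edge is (cause, effect, polarity); polarity +1 = increases, -1 = decreases.
# This encoding is illustrative, not the paper's format.
motif_1 = [
    ("Transparency", "Crime rate", -1),   # more transparency -> less crime
    ("Crime rate", "Public safety", -1),  # less crime -> more safety
]
# The second QA adds a fork: Privacy <- Transparency -> Crime rate.
# The polarity on the Privacy edge is a guess for illustration.
motif_2 = [
    ("Transparency", "Privacy", -1),
    ("Transparency", "Crime rate", -1),
]
# Distinct concept nodes mentioned so far:
nodes = {n for motif in (motif_1, motif_2) for c, e, _ in motif for n in (c, e)}
```

Keeping polarity on the edge (rather than on the node) lets the same concept appear in motifs that push in opposite directions without conflict.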

Step 2: Compose a Causal Belief Network. The motifs are compiled into a belief graph representing the participant's reasoning. Nodes are concepts (fairness, safety, family needs); edges are directional causal relations with confidence and polarity scores derived from motif density or respondent emphasis.
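The composition step could look like the following sketch, assuming edges are (cause, effect, polarity) triples and confidence is derived from motif density, i.e. how often the same edge recurs across the interview. `compose_cbn` and its normalization are illustrative, not the paper's definitions.

```python
from collections import Counter

def compose_cbn(motifs):
    # Merge motifs into one belief graph; edges are (cause, effect, polarity).
    counts = Counter(edge for motif in motifs for edge in motif)
    peak = max(counts.values())
    # Confidence in (0, 1]: edges repeated across motifs score higher.
    return {edge: n / peak for edge, n in counts.items()}

motifs = [
    [("Transparency", "Crime rate", -1), ("Crime rate", "Public safety", -1)],
    [("Transparency", "Privacy", -1), ("Transparency", "Crime rate", -1)],
]
cbn = compose_cbn(motifs)
# Transparency -> Crime rate appears in both motifs,
# so it carries the highest confidence.
```

Respondent emphasis (hedging, repetition within a single answer) could weight the counts instead of treating every mention equally.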

Step 3: Simulate belief change via intervention. Apply a hypothetical intervention such as do(Transparency = high), reflecting a policy shift like increased camera accountability. Use belief propagation over the CBN to update downstream posteriors. Example: P(Privacy Concern) shifts from 0.7 to 0.3, and P(Opposition to Surveillance) shifts from 0.7 to 0.2.
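One way the intervention step could be realized, as a toy sketch: clamp the intervened node (do(Transparency = high)) and push signed, weighted deviations through the DAG in topological order. The weights, the linear update rule, and the resulting numbers are all illustrative assumptions; the posteriors quoted above come from the paper's own propagation, not from this sketch.

```python
def parents(edges, node):
    return [c for (c, e) in edges if e == node]

def topological_order(edges):
    nodes = {n for pair in edges for n in pair}
    order, seen = [], set()
    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for p in parents(edges, n):
            visit(p)          # ancestors are emitted before their descendants
        order.append(n)
    for n in nodes:
        visit(n)
    return order

def propagate(edges, baseline, interventions):
    beliefs = dict(baseline)
    beliefs.update(interventions)         # do() clamps the intervened nodes
    for node in topological_order(edges):
        ps = parents(edges, node)
        if node in interventions or not ps:
            continue
        shift = sum(edges[(p, node)] * (beliefs[p] - 0.5) for p in ps)
        beliefs[node] = min(1.0, max(0.0, baseline[node] + shift))
    return beliefs

# Surveillance example; signed weights stand in for polarity x confidence.
edges = {
    ("Transparency", "Crime rate"): -0.8,
    ("Transparency", "Privacy concern"): -0.8,  # accountability eases concern
    ("Crime rate", "Public safety"): -0.8,
    ("Privacy concern", "Opposition"): 0.8,
}
baseline = {"Transparency": 0.5, "Crime rate": 0.5, "Privacy concern": 0.7,
            "Public safety": 0.5, "Opposition": 0.7}
after = propagate(edges, baseline, {"Transparency": 1.0})
# Privacy concern drops from 0.7 toward 0.3; Opposition falls with it.
```

A real implementation would use proper probabilistic inference (e.g. message passing over a Bayesian network) rather than this linear cascade, but the auditability property is the same: every posterior shift traces back to specific weighted edges.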

The pipeline demonstrates that motif-based causal modeling can simulate how individuals update beliefs in response to policy changes, moving beyond static opinion snapshots. But the paper acknowledges open challenges: extracting CBNs from natural language remains hard because of ambiguity in concept identification, causal direction, polarity, and conceptual granularity. And causality alone cannot capture the full range of human reasoning — people also rely on associative, analogical, and emotional processes that resist strict symbolic modeling. The initial focus on causality is framed as a strategic, computationally tractable starting point, not an endpoint.

The pipeline's value is that it makes belief simulation auditable. Unlike a prompted persona whose reasoning is inscrutable, a CBN exposes the structure of belief and supports formal analyses (intervention, counterfactuals, sensitivity to specific edges). Policy simulation requires this auditability because stakeholders must be able to ask why a simulated agent reached a stance — an auditability requirement that Can AI agents learn people better from interviews than surveys? approaches through content richness but not structural transparency.
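The "why did the agent reach this stance" query has a concrete form once the CBN exists: enumerate the directed paths from an intervened concept to the stance node. A minimal sketch, assuming the CBN is a DAG keyed by (cause, effect) pairs; `explain` is a hypothetical helper, not an API from the paper.

```python
def explain(edges, source, target):
    # Enumerate directed paths by which `source` influences `target`,
    # so a stakeholder can inspect *why* a simulated stance moved.
    # Assumes the CBN is acyclic.
    paths = []
    def walk(node, path):
        if node == target:
            paths.append(path)
            return
        for (c, e) in edges:
            if c == node:
                walk(e, path + [e])
    walk(source, [source])
    return paths

edges = {
    ("Transparency", "Crime rate"): -0.8,
    ("Transparency", "Privacy concern"): -0.8,
    ("Crime rate", "Public safety"): -0.8,
    ("Privacy concern", "Opposition"): 0.8,
}
paths = explain(edges, "Transparency", "Opposition")
# One path: Transparency -> Privacy concern -> Opposition
```

Each returned path, together with its edge weights, is exactly the kind of structural explanation a prompted persona cannot produce.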

