Do large language models make the same causal reasoning mistakes as humans?
Research on collider structures reveals whether LLMs share human biases in causal inference. This matters because if both fail identically, collaboration might reinforce rather than correct errors.
The collider structure C1 → E ← C2 (two independent causes with a shared effect) is a diagnostic test for normative causal reasoning. When you observe the effect E, observing one cause should lower your estimate of the other (explaining away). When E is absent, C1 and C2 should remain independent.
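To make the normative standard concrete, here is a minimal sketch of exact inference in a collider, assuming a noisy-OR parameterization. All numbers (priors, causal strengths, leak rate) are illustrative choices, not values from the paper:

```python
# Normative inference in a collider C1 -> E <- C2 under a noisy-OR
# parameterization. Priors, causal strengths, and leak are made up.
from itertools import product

P_C1, P_C2 = 0.3, 0.3          # independent priors on the two causes
W1, W2, LEAK = 0.8, 0.8, 0.05  # causal strengths and background leak

def p_effect(c1: int, c2: int) -> float:
    """Noisy-OR likelihood: P(E=1 | C1=c1, C2=c2)."""
    return 1.0 - (1.0 - LEAK) * (1.0 - W1) ** c1 * (1.0 - W2) ** c2

def joint(c1: int, c2: int, e: int) -> float:
    """P(C1=c1, C2=c2, E=e) under the collider factorization."""
    pe = p_effect(c1, c2)
    return ((P_C1 if c1 else 1.0 - P_C1)
            * (P_C2 if c2 else 1.0 - P_C2)
            * (pe if e else 1.0 - pe))

def posterior_c1(**evidence: int) -> float:
    """P(C1=1 | evidence) by exact enumeration over the 8 joint states."""
    num = den = 0.0
    for c1, c2, e in product((0, 1), repeat=3):
        state = {"c1": c1, "c2": c2, "e": e}
        if any(state[k] != v for k, v in evidence.items()):
            continue
        p = joint(c1, c2, e)
        den += p
        num += p if c1 else 0.0
    return num / den

# Explaining away: given E=1, learning C2=1 should lower belief in C1.
print(f"P(C1=1 | E=1)       = {posterior_c1(e=1):.2f}")        # ~0.57
print(f"P(C1=1 | E=1, C2=1) = {posterior_c1(e=1, c2=1):.2f}")  # ~0.34

# When E is absent, the noisy-OR likelihood for E=0 factorizes over the
# causes, so C1 and C2 remain exactly independent.
print(f"P(C1=1 | E=0)       = {posterior_c1(e=0):.2f}")        # ~0.08
print(f"P(C1=1 | E=0, C2=1) = {posterior_c1(e=0, c2=1):.2f}")  # ~0.08
```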
Humans systematically fail this test in two characteristic ways (see the diagnostic sketch after this list):
- Weak explaining away: the discounting is present but smaller than normatively warranted
- Markov violations: treating supposedly independent causes as correlated even when no collider observation should create that correlation (a "rich-get-richer" associative bias)
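A hypothetical way to flag both bias signatures, given a reasoner's judged posteriors (human or LLM). The judged numbers below are invented for illustration; the normative values are the ones computed by the sketch above:

```python
# Compare judged posteriors for P(C1=1 | ...) against the normative
# values from the collider sketch above (~0.57 and ~0.34 given E=1).
NORM_E1, NORM_E1_C2 = 0.57, 0.34
NORM_DROP = NORM_E1 - NORM_E1_C2   # normative size of explaining away

def diagnose(j_e1: float, j_e1_c2: float, j_e0: float, j_e0_c2: float) -> None:
    """Flag the two human bias signatures in a set of judged posteriors."""
    drop = j_e1 - j_e1_c2          # how much C2 discounts C1 given E=1
    if 0.0 < drop < NORM_DROP:
        print(f"weak explaining away: drop {drop:.2f} < normative {NORM_DROP:.2f}")
    shift = j_e0_c2 - j_e0         # should be 0: causes independent given E=0
    if shift > 0.0:
        print(f"Markov violation: C2 raised judged P(C1) by {shift:.2f} despite E=0")

# Invented judgment pattern exhibiting both biases at once.
diagnose(j_e1=0.57, j_e1_c2=0.48,  # discounts, but only by 0.09
         j_e0=0.10, j_e0_c2=0.20)  # treats independent causes as correlated
```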
The "Do LLMs Reason Causally Like Us?" paper (CLADDER dataset) finds that LLMs exhibit the same two biases in the same direction as humans. This is not the usual finding of LLM inferiority — it is a finding of human-like systematic error. LLMs are not categorically worse at causal reasoning; they err in the same direction.
This matters for several reasons. First, it undermines clean human-vs-LLM comparisons in causal reasoning tasks: if both fail in the same way, the relevant comparison shifts from "who is better" to "are the failure modes compatible." Second, it raises the question of mechanism: humans likely err due to the associative nature of pattern-matching; LLMs likely err for a structurally related reason, training on human-generated text that exhibits the same biases. The shared error direction is evidence for that training-data explanation (see the related note "Why do LLMs handle causal reasoning better than temporal reasoning?"): the biases are baked into the training data itself.
Third, the finding has implications for high-stakes causal reasoning: medical diagnosis (collider structures appear in disease-symptom networks), legal reasoning (independent causes with shared outcomes), and policy analysis all involve collider-type structures. Human and LLM collaborators sharing the same biases may reinforce rather than correct each other's errors.
Source collection: Reasoning Methods (CoT, ToT)
Related concepts in this collection
- Why do LLMs handle causal reasoning better than temporal reasoning?
  Exploring whether language models perform asymmetrically on different discourse relations and what training data patterns might explain the gap between causal and temporal reasoning abilities.
  Relation: the training-data explanation for why LLMs inherit human causal biases; the collider finding is a specific manifestation.
- Do LLMs generalize moral reasoning by meaning or surface form?
  When moral scenarios are reworded to reverse their meaning while keeping similar language, do LLMs recognize the semantic shift? This tests whether LLMs actually understand moral concepts or reproduce training distribution patterns.
  Relation: parallel insight; LLM errors track surface statistical regularities in training data, not normative structure.
- Do foundation models learn world models or task-specific shortcuts?
  When transformer models predict sequences accurately, are they building genuine world models that capture underlying physics and logic? Or are they exploiting narrow patterns that fail under distribution shift?
  Relation: collider bias is one instance; surface associative patterns override normative causal structure.
Original note title: llms exhibit human-like causal biases — weak explaining away and markov violations in collider networks