How do language models perform syllogistic reasoning internally?
Does formal symbolic reasoning exist as a distinct neural circuit in LLMs, or is it inevitably contaminated by world knowledge associations? Understanding the mechanism could reveal whether pure logical reasoning is separable from semantic inference.
Mechanistic analysis of syllogistic inference reveals a three-stage reasoning mechanism:
- Naive recitation — the model begins by reciting information from the first premise
- Middle-term suppression — duplicated middle-term information is suppressed (e.g., in "All A are B; All B are C," the shared term B is suppressed)
- Mediation — mover attention heads transfer information to derive the valid conclusion, connecting A to C through the suppressed B
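The three stages can be caricatured as a symbolic transformation. This is a toy sketch only: in the model these steps are carried out by attention heads over token representations, not explicit tuples.

```python
# Toy symbolic sketch of the three-stage circuit on "All A are B; All B are C".
# Illustrative only: the real mechanism lives in attention-head activations.

def syllogize(premise1, premise2):
    a, b1 = premise1        # stage 1, naive recitation: read off premise 1
    b2, c = premise2
    assert b1 == b2         # stage 2, middle-term suppression: the duplicated
                            #   middle term (B) is identified and suppressed
    return (a, c)           # stage 3, mediation: mover heads connect A to C
                            #   through the suppressed middle term

print(syllogize(("A", "B"), ("B", "C")))  # → ('A', 'C')
```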
This circuit is content-independent — it operates on symbolic variables, not on the specific content of premises. When tested on schemes instantiated with commonsense knowledge, the same mechanism is still necessary. But additional attention heads encoding contextualized world knowledge contaminate the formal circuit, creating belief bias: conclusions that align with real-world knowledge are easier to derive than those that don't.
The contamination scales with model size: larger models show more complex attention-head contributions, suggesting increasing interference from world knowledge. This is the opposite of what one might hope: scaling doesn't purify the reasoning circuit; it adds more contamination from richer world knowledge.
The circuit is sufficient and necessary for all unconditionally valid syllogistic schemes where the model achieves ≥60% accuracy. For schemes with lower accuracy, the circuit alone is insufficient — suggesting these harder schemes require additional mechanisms the model hasn't developed.
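Sufficiency and necessity are typically established by ablation: knock out the circuit heads and check that the valid conclusion is lost (necessity), or ablate everything else and check that it survives (sufficiency). A minimal toy sketch of that logic follows; head names and contribution values are invented for illustration, whereas real experiments patch activations inside the model.

```python
# Hypothetical per-head contributions to the logit difference between the
# valid conclusion and a foil; all values are invented for illustration.
contributions = {f"head_{i}": 0.02 for i in range(12)}
circuit = {"head_3", "head_7", "head_9"}   # hypothetical suppression/mover heads
for h in circuit:
    contributions[h] = 1.0                 # circuit heads carry the signal

def logit_diff(ablated=frozenset()):
    """Zero-ablate the named heads and sum the remaining contributions."""
    return sum(v for h, v in contributions.items() if h not in ablated)

full = logit_diff()                              # all heads active
necessity = logit_diff(ablated=circuit)          # circuit removed: signal collapses
sufficiency = logit_diff(
    ablated=set(contributions) - circuit)        # circuit alone: signal survives
```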
Cross-architecture generality: similar suppression patterns and information flow appear across the GPT-2, Pythia, Llama, and Qwen families. The reasoning mechanism is architecturally general, not model-specific.
This provides mechanistic evidence bearing on the question "Do large language models reason symbolically or semantically?": the model has a formal reasoning circuit, but that circuit is inherently contaminated by semantic associations. Pure formal reasoning and world knowledge are not cleanly separable; they share neural substrate.
Source: MechInterp
Related concepts in this collection
- Do large language models reason symbolically or semantically?
  Can LLMs follow explicit logical rules when those rules contradict their training knowledge? Testing whether reasoning operates independently of semantic associations reveals what computational mechanisms actually drive LLM multi-step inference.
  Connection: this note supplies the mechanistic explanation: formal circuits exist but are contaminated by semantic attention heads.
- How much does the order of premises actually matter for reasoning?
  When you rearrange the order of logical premises in a deduction task, does it change how well language models can solve it? This tests whether LLMs reason abstractly or process input sequentially.
  Connection: ordering sensitivity may reflect the recitation stage; because the circuit begins by reciting the first premise, which premise comes first affects the reasoning path.
- Why does reasoning training help math but hurt medical tasks?
  Explores whether reasoning and knowledge rely on different network mechanisms, and why training one might undermine the other across different domains.
  Connection: belief-bias contamination is one mechanism for knowledge-reasoning interference.
- Which sentences actually steer a reasoning trace?
  Can we identify which sentences in a reasoning trace have outsized influence on the final answer? Three independent methods converge on a surprising answer about planning and backtracking.
  Connection: both findings reveal sparse mechanistic structure in reasoning. Syllogistic circuits show specific attention heads performing suppression and mediation at the circuit level, while thought anchors show specific sentences dominating causal influence at the trace level; the recitation stage's attentional selectivity is the circuit-level mechanism underlying sentence-level anchor effects.
Original note title: syllogistic reasoning circuits use a three-stage mechanism (recitation, suppression, mediation) contaminated by world-knowledge bias