Does chain of thought reasoning actually explain model decisions?
When language models show their reasoning steps in agentic pipelines, does the quality of those steps predict or explain the quality of final outputs? This matters for trusting and debugging AI systems.
The explainability promise of CoT is: by showing intermediate reasoning steps, we make the model's decision-making process transparent and understandable. The "Thoughts without Thinking" paper tests this promise in an agentic pipeline implementing a perceptive task guidance system and finds it fails in practice.
The empirical result: reviewer scores for CoT thoughts are only weakly correlated with reviewer scores for responses. Incorrect responses can be preceded by plausible-looking chains, and incorrect chains don't reliably predict or explain incorrect responses. The chain is not doing the causal work we assume it is.
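For a sense of what that check involves, here is a minimal sketch of measuring the thought-response relationship, assuming you already have paired reviewer scores for each pipeline run. The score values and the choice of Spearman's rank correlation are illustrative, not the paper's exact protocol.

```python
from scipy.stats import spearmanr

# Paired reviewer scores for the same pipeline runs (illustrative values):
# one score for the CoT thought, one for the final response.
thought_scores = [4, 5, 3, 5, 2, 4, 1, 5]
response_scores = [2, 5, 4, 1, 3, 5, 2, 3]

# Rank correlation between thought quality and response quality.
# A value near zero means a good-looking chain says little about
# whether the final answer will also be good.
rho, p_value = spearmanr(thought_scores, response_scores)
print(f"Spearman rho = {rho:.2f} (p = {p_value:.2f})")
```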
Two failure modes identified through qualitative content analysis:
The Einstellung effect: CoT rapidly gravitates toward tokens most commonly associated with a concept in training data, even when those tokens contradict the task requirements. In the dump truck assembly example: the chain starts reasoning about the toy but quickly pivots to "clutch," "transmission," "gears" — language far more common for real dump trucks than for toy assembly instructions. The chain explains what went wrong only in retrospect and only with considerable analytical effort.
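A crude way to surface this drift after the fact is a vocabulary check: which content words in the chain never appear in the task instructions at all? A minimal sketch, with a hypothetical instruction string and chain, and a deliberately simplistic tokenizer and stop-word list:

```python
import re

STOP = {"the", "a", "an", "to", "of", "and", "is", "it", "then", "first", "into", "before", "on"}

def content_words(text: str) -> set[str]:
    """Lowercased word tokens, minus a tiny stop-word list."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP}

# Hypothetical task instructions and CoT chain for the toy dump truck example.
task_instructions = "Attach the toy dump truck bed to the chassis, then snap on the plastic wheels."
cot_chain = "First engage the clutch, then shift the transmission into gear before raising the hydraulic bed."

# Content words the chain reasons about that never appear in the task itself:
# here the "real truck" tokens (clutch, transmission, gear, hydraulic) that
# signal Einstellung-style drift toward training-data associations.
drift = content_words(cot_chain) - content_words(task_instructions)
print(sorted(drift))
```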
Context window pressure: When context fills, the foundation model's parametric knowledge overrides RAG-retrieved context. The chain reflects this substitution but doesn't flag it as a failure.
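A similarly crude check can flag this second failure mode: if few of the response's content words are supported by the retrieved passages, the answer likely came from parametric memory rather than the RAG context. A minimal sketch with hypothetical chunks and answer; the lexical-overlap heuristic is an assumption for illustration, not the paper's method.

```python
import re

STOP = {"the", "a", "an", "to", "of", "and", "is", "it", "then", "before", "onto"}

def content_words(text: str) -> set[str]:
    """Lowercased word tokens, minus a tiny stop-word list."""
    return {w for w in re.findall(r"[a-z]+", text.lower()) if w not in STOP}

def grounding_ratio(response: str, retrieved_chunks: list[str]) -> float:
    """Fraction of the response's content words found in any retrieved chunk.

    A low ratio suggests the answer leans on parametric knowledge rather
    than the retrieved context, a substitution the chain itself won't flag.
    """
    response_vocab = content_words(response)
    if not response_vocab:
        return 1.0
    retrieved_vocab: set[str] = set()
    for chunk in retrieved_chunks:
        retrieved_vocab |= content_words(chunk)
    return len(response_vocab & retrieved_vocab) / len(response_vocab)

# Hypothetical RAG chunk and model answer for the toy assembly task.
chunks = ["Snap the plastic wheels onto the axle pegs, then attach the truck bed."]
answer = "Tighten the lug nuts and check the hydraulic lift before lowering the bed."
print(f"grounding ratio = {grounding_ratio(answer, chunks):.2f}")
```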
The deeper problem: CoT produces explanations without explainability. There is more material to analyze (the chain), but that material requires considerably more interpretive effort than a single output, and may actively mislead by appearing coherent. "Generating more material" ≠ "making the system more understandable."
This extends "Do language models actually use their reasoning steps?" from single-model settings to agentic pipelines, where the weak correlation has direct consequences for users trying to debug or trust systems. It also connects to "Do reasoning traces actually cause correct answers?": the human-like appearance of chains generates misplaced trust.
Source: Reasoning Architectures
Related concepts in this collection
- Do language models actually use their reasoning steps?
  Chain-of-thought reasoning looks valid on the surface, but does each step genuinely influence the model's final answer, or are the reasoning chains decorative? This matters for trusting AI explanations.
  Relation: extends; faithfulness failure is measurable in production agentic systems, not just theoretically possible.
- Do reasoning traces actually cause correct answers?
  Explores whether the intermediate 'thinking' tokens in R1-style models genuinely drive reasoning or merely mimic its appearance. Matters because false confidence in invalid traces could mask errors.
  Relation: agentic CoT multiplies the safety risk by adding inter-LLM trace generation.
- Do chain of thought traces actually help humans understand reasoning?
  When models show their work through chain of thought traces, do humans find them interpretable? Research tested whether the traces that improve model performance also improve human understanding.
  Relation: direct support; performance ≠ interpretability, and agentic pipelines make this concrete.
- Do language models actually use their encoded knowledge?
  Probes can detect that LMs encode facts internally, but do those encoded facts causally influence what the model generates? This explores the gap between knowing and doing.
  Relation: same pattern; a representation (or trace) exists but doesn't causally determine the output.
- Can model explanations help humans predict what models actually do?
  Do explanations that sound plausible to humans actually help them forecast model behavior on new cases? Understanding this gap matters because RLHF optimizes for plausible explanations, not predictive ones.
  Relation: metric-level confirmation; the weak thought-response correlation in agentic pipelines (this note) is the production-system manifestation of low counterfactual simulatability, and users cannot predict model behavior from the explanations in either setting.
- Can formal argumentation make AI decisions truly contestable?
  Explores whether structuring AI decisions as formal argument graphs (with explicit attacks and defenses) enables users to meaningfully challenge and navigate reasoning in ways unstructured LLM outputs cannot.
  Relation: a potential architectural remedy; formal argumentation forces reasoning into a traversable graph of attack/defense relations, making the justification structure genuinely inspectable rather than producing chains that appear coherent but lack a causal connection to the output.
Original note title: cot reasoning in agentic pipelines produces explanations without explainability because thought quality is weakly correlated with response quality