Does fine-tuning weaken how reasoning steps influence answers?
When models are fine-tuned on domain-specific tasks, do their chain-of-thought reasoning steps actually causally drive the final answer, or do they become decorative? This matters because accurate outputs can mask unfaithful reasoning.
The "Impact of Fine-Tuning on Chain-of-Thought Reasoning" paper reveals a dimension of SFT damage that InfoGain metrics miss: faithfulness. After fine-tuning, the reasoning steps in CoT outputs are less causally connected to the final answer. The model still generates reasoning chains — they just matter less for determining the output.
Three specific tests operationalize this:
Early Termination: truncate the CoT at step i and ask for the final answer. If an early truncation already produces the same answer as the full chain, the later steps were not causally necessary. Fine-tuned models converge earlier: their answers are "decided" before the reasoning chain finishes.
Paraphrasing: rewrite later reasoning steps. If the final answer stays the same even under rewrites that alter what the steps claim, those steps were not driving the answer. Fine-tuned models show less sensitivity to paraphrasing, suggesting the chain is performative rather than functional.
Filler Substitution: replace later reasoning steps with filler tokens ("..."). If the answer doesn't change, those steps weren't contributing. Fine-tuned models tolerate more filler substitution.
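The three interventions can be stated as black-box probes over an answer function. Below is a minimal sketch, assuming a hypothetical `answer_fn(question, steps) -> str` that returns the model's final answer given the question and a (possibly perturbed) list of reasoning steps; the function names, rewrite format, and scoring are illustrative assumptions, not the paper's exact protocol:

```python
# Black-box faithfulness probes. `answer_fn(question, steps)` is a stand-in
# for querying the model with the question plus a (perturbed) chain of
# reasoning steps; everything here is an illustrative sketch.

FILLER = "..."

def early_termination_point(answer_fn, question, steps, full_answer):
    """Smallest prefix length i at which the truncated chain already yields
    the full-chain answer. A small i means later steps were not needed."""
    for i in range(len(steps) + 1):
        if answer_fn(question, steps[:i]) == full_answer:
            return i
    return len(steps)

def paraphrase_sensitivity(answer_fn, question, steps, rewrites, full_answer):
    """Fraction of step rewrites that flip the answer. rewrites is a list of
    (step_index, new_text). Near-zero sensitivity, even for rewrites that
    change a step's content, suggests a decorative chain."""
    if not rewrites:
        return 0.0
    flips = 0
    for idx, new_text in rewrites:
        perturbed = steps[:idx] + [new_text] + steps[idx + 1:]
        if answer_fn(question, perturbed) != full_answer:
            flips += 1
    return flips / len(rewrites)

def filler_tolerance(answer_fn, question, steps, full_answer):
    """Largest number of trailing steps replaceable by filler tokens
    without changing the answer."""
    for k in range(len(steps), -1, -1):
        perturbed = steps[:len(steps) - k] + [FILLER] * k
        if answer_fn(question, perturbed) == full_answer:
            return k
    return 0
```

On a fully decorative model, one whose answer ignores the steps entirely, these probes return an early-termination point of 0, zero paraphrase sensitivity, and maximal filler tolerance; the finding described above is that fine-tuned models drift toward that profile.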
This extends the SFT accuracy trap in a critical direction. "Does supervised fine-tuning actually improve reasoning quality?" showed that SFT reduces the informativeness of reasoning steps. This paper shows that SFT also weakens the causal influence those steps have on the final answer. The model may generate a complete-looking chain, but the chain has been partially disconnected from the output it appears to support.
Smaller models (Llama-3-8B-Instruct) are more affected than larger ones (GPT-4), suggesting that larger models have sufficient capacity to maintain reasoning-output coupling even after fine-tuning. This connects to "Do language models actually use their reasoning steps?": fine-tuning makes an already-fragile causal coupling even weaker. And if, as "Does chain-of-thought reasoning reveal genuine inference or pattern matching?" suggests, CoT is pattern matching on reasoning form rather than genuine inference, then fine-tuning further degrades faithfulness because the model learns domain-specific shortcuts that bypass the imitated reasoning pattern entirely: the chain was already performative, and fine-tuning makes it more so.
Source: Training Fine Tuning
Related concepts in this collection
- Does supervised fine-tuning actually improve reasoning quality?
  While SFT boosts final-answer accuracy, does it degrade the quality and informativeness of the reasoning steps that justify those answers? This matters for high-stakes domains requiring auditable decision-making.
  Relation: a parallel dimension of SFT damage, reasoning quality (InfoGain) vs. reasoning faithfulness (causal connection).
- Do language models actually use their reasoning steps?
  Chain-of-thought reasoning looks valid on the surface, but does each step genuinely influence the model's final answer, or are the reasoning chains decorative? This matters for trusting AI explanations.
  Relation: fine-tuning worsens an already-failing faithfulness property.
- Do language model reasoning drafts faithfully represent their actual computation?
  If models externalize reasoning in thinking drafts before answering, does the draft accurately reflect their internal process? This matters for AI safety monitoring and error detection.
  Relation: draft-to-answer consistency is the dimension fine-tuning specifically degrades.
- Does chain of thought reasoning actually explain model decisions?
  When language models show their reasoning steps in agentic pipelines, does the quality of those steps predict or explain the quality of final outputs? This matters for trusting and debugging AI systems.
  Relation: faithfulness degradation explains why agentic CoT fails; the chain was already partially decorrelated before deployment.
- Does chain-of-thought reasoning reveal genuine inference or pattern matching?
  Explores whether CoT instructions unlock real reasoning capabilities or simply constrain models to mimic familiar reasoning patterns from training data. This matters for understanding whether language models can actually reason abstractly.
  Relation: provides the mechanistic explanation; if CoT is pattern matching on reasoning form rather than genuine inference, fine-tuning further disconnects the chain from the answer because the model learns domain-specific shortcuts that bypass the imitated reasoning pattern entirely.
- Do chain of thought traces actually help humans understand reasoning?
  When models show their work through chain-of-thought traces, do humans find them interpretable? Research tested whether the traces that improve model performance also improve human understanding.
  Relation: faithfulness degradation compounds the performance-interpretability decoupling; traces already optimized for model performance rather than human interpretability become even less causally connected to final answers after fine-tuning, serving neither function well.
- Does supervised fine-tuning improve reasoning or just answers?
  Explores whether training models on question-answer pairs actually strengthens their reasoning quality or merely optimizes them toward correct outputs through shortcuts. This matters for deploying AI in domains like medicine where reasoning must be auditable.
  Relation: the SFT accuracy trap and faithfulness degradation are two dimensions of the same SFT damage; the accuracy trap captures reasoning-quality loss (InfoGain), faithfulness degradation captures causal-connection loss, and together they show SFT makes chains both less informative and less causally relevant.
- Does RLVR actually improve mathematical reasoning or just coherence?
  RLVR post-training makes reasoning traces locally more consistent, but does this structural improvement translate to valid mathematical proofs? We investigate whether trace coherence is sufficient for correctness.
  Relation: a complementary coherence/faithfulness split; RLVR improves structural coherence between adjacent steps without improving validity, fine-tuning degrades faithfulness (the causal connection to the answer) while potentially maintaining coherence, and both show training can change the surface without changing the substance.
Original note title: fine-tuning degrades CoT faithfulness independently of accuracy; reasoning steps influence final answers less after domain-specific training.