LLM Reasoning and Architecture · Reinforcement Learning for LLMs

Does fine-tuning weaken how reasoning steps influence answers?

When models are fine-tuned on domain-specific tasks, do their chain-of-thought reasoning steps actually causally drive the final answer, or do they become decorative? This matters because accurate outputs can mask unfaithful reasoning.

Note · 2026-02-22 · sourced from Training Fine Tuning

The "Impact of Fine-Tuning on Chain-of-Thought Reasoning" paper reveals a dimension of SFT damage that InfoGain metrics miss: faithfulness. After fine-tuning, the reasoning steps in CoT outputs are less causally connected to the final answer. The model still generates reasoning chains — they just matter less for determining the output.

Three specific tests operationalize this:

Early Termination: truncate the CoT at step i and ask for the final answer. If truncation at an early step already produces the correct answer, only a fraction of the reasoning was faithful. Fine-tuned models show earlier convergence — their answers are "decided" before the reasoning chain finishes.

Paraphrasing: rephrase later reasoning steps. A faithful chain tolerates rewording that preserves the argument (the content matters, not the words) but should shift when a rewrite changes what the steps actually claim. Fine-tuned models' answers barely respond to rewriting the chain at all — suggesting it is performative rather than functional.

Filler Substitution: replace later reasoning steps with filler tokens ("..."). If the answer doesn't change, those steps weren't contributing. Fine-tuned models tolerate more filler substitution.
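The three probes can be sketched as prompt-level interventions that re-query the model with a modified chain. Everything below is hypothetical scaffolding, not the paper's code: `ask_model` stands in for a real LLM call and is stubbed with a toy rule (the answer is "decided" once the chain contains "therefore") purely so the sketch runs end to end.

```python
# Sketch of the three faithfulness probes as interventions on the CoT.
# `ask_model` is a hypothetical stand-in for an LLM call; the stub rule
# below exists only to make the probes executable.

def ask_model(question: str, reasoning_steps: list) -> str:
    joined = " ".join(reasoning_steps)
    return "42" if "therefore" in joined else "unknown"

def early_termination_point(question, steps, correct_answer):
    """Smallest prefix length at which the truncated chain already yields
    the correct answer. Earlier convergence = less faithful reasoning."""
    for i in range(len(steps) + 1):
        if ask_model(question, steps[:i]) == correct_answer:
            return i
    return len(steps)

def paraphrase_invariance(question, steps, paraphrase, correct_answer):
    """True if the answer survives a meaning-preserving rewrite of every
    step: evidence the content, not the wording, drives the answer."""
    return ask_model(question, [paraphrase(s) for s in steps]) == correct_answer

def filler_tolerance(question, steps, correct_answer):
    """Number of trailing steps replaceable with filler ("...") while the
    answer stays correct. Higher = those steps contribute less."""
    tolerated = 0
    for k in range(1, len(steps) + 1):
        if ask_model(question, steps[:-k] + ["..."] * k) == correct_answer:
            tolerated = k
        else:
            break
    return tolerated
```

With the stub, a three-step chain whose second step contains "therefore" terminates early at step 2 and tolerates one trailing filler step; a real study would run the same loops against the base and fine-tuned model and compare the distributions.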

This extends the SFT accuracy trap in a critical direction. Does supervised fine-tuning actually improve reasoning quality? showed that SFT reduces the informativeness of reasoning steps. This paper shows that SFT also weakens how much those steps influence the final answer at all. The model may generate a complete-looking chain, but the chain has been partially disconnected from the output it appears to support.
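That divergence is easy to make concrete with made-up numbers (illustrative only, not results from the paper): hold accuracy fixed and let the normalized early-termination point, used here as a crude faithfulness proxy, drop after fine-tuning.

```python
# Illustrative comparison: accuracy can stay flat after SFT while a
# faithfulness proxy (fraction of the chain needed before the answer is
# decided) collapses. All numbers are made up for the sketch.

def mean(xs):
    return sum(xs) / len(xs)

def faithfulness_score(termination_point, chain_length):
    """Fraction of the chain needed before the answer was decided.
    1.0 = every step mattered; near 0.0 = decorative chain."""
    return termination_point / chain_length if chain_length else 0.0

# Hypothetical per-example (termination_point, chain_length, correct) runs.
base  = [(8, 10, True), (9, 10, True), (7, 10, False)]
tuned = [(3, 10, True), (4, 10, True), (2, 10, False)]

for name, runs in [("base", base), ("fine-tuned", tuned)]:
    acc = mean([1.0 if ok else 0.0 for _, _, ok in runs])
    faith = mean([faithfulness_score(t, n) for t, n, _ in runs])
    print(f"{name}: accuracy={acc:.2f} faithfulness={faith:.2f}")
```

Both models score the same accuracy on this toy data, yet the fine-tuned model's answers are decided far earlier in the chain, which is exactly the failure mode an accuracy-only evaluation cannot see.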

Smaller models (Llama-3-8B-Instruct) are more affected than larger ones (GPT-4), suggesting that larger models have sufficient capacity to maintain reasoning-output coupling even after fine-tuning. This connects to Do language models actually use their reasoning steps? — fine-tuning makes an already-fragile causal coupling even weaker. And if, as Does chain-of-thought reasoning reveal genuine inference or pattern matching? argues, the chain was already closer to pattern matching than genuine inference, then fine-tuning degrades faithfulness further: the model learns domain-specific shortcuts that bypass the imitated reasoning pattern entirely. The chain was already performative, and fine-tuning makes it more so.




fine-tuning degrades cot faithfulness independently of accuracy — reasoning steps influence final answers less after domain-specific training