Planted in Pretraining, Swayed by Finetuning: A Case Study on the Origins of Cognitive Biases in LLMs
Large language models (LLMs) exhibit cognitive biases, systematic tendencies toward irrational decision-making similar to those seen in humans. Prior work has found that these biases vary across models and can be amplified by instruction tuning. However, it remains unclear whether these differences stem from pretraining, finetuning, or even random noise due to training stochasticity. We propose a two-step causal experimental approach to disentangle these factors. First, we finetune models multiple times using different random seeds to study how training randomness affects over 30 cognitive biases. Second, we introduce cross-tuning, swapping instruction datasets between models to isolate the source of each bias. The swap uses datasets that produced different bias patterns, directly testing whether the biases are dataset-dependent. Our findings reveal that while training randomness introduces some variability, biases are mainly shaped by pretraining: models sharing the same pretrained backbone exhibit more similar bias patterns than models sharing only their finetuning data. These insights suggest that understanding biases in finetuned models requires considering their pretraining origins beyond finetuning effects alone.
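To make the two-step design concrete, the sketch below outlines it in Python. Everything in it is a hypothetical scaffold: the backbone and dataset names and the finetune and measure_bias_scores helpers are placeholders standing in for an actual training and evaluation pipeline, not the paper's implementation.

```python
"""Hypothetical sketch of the two-step experimental design described in the
abstract; none of these names come from the paper's actual codebase."""

from itertools import product

# Hypothetical identifiers: each pretrained backbone originally ships with
# its own instruction-tuning dataset.
ORIGINAL_PAIRING = {
    "backbone_A": "instructions_A",
    "backbone_B": "instructions_B",
}
SEEDS = [0, 1, 2]

def finetune(backbone: str, dataset: str, seed: int) -> str:
    """Placeholder: instruction-tune `backbone` on `dataset` with a fixed
    random seed and return an identifier for the resulting checkpoint."""
    ...

def measure_bias_scores(checkpoint: str) -> dict[str, float]:
    """Placeholder: evaluate a checkpoint on a battery of 30+ cognitive-bias
    tests and return one score per bias."""
    ...

# Step 1: repeat each backbone's own finetuning under several random seeds,
# estimating how much bias patterns vary from training stochasticity alone.
seed_variation = {
    (backbone, seed): measure_bias_scores(finetune(backbone, dataset, seed))
    for backbone, dataset in ORIGINAL_PAIRING.items()
    for seed in SEEDS
}

# Step 2 ("cross-tuning"): also finetune each backbone on the *other*
# model's instruction data. If bias patterns track the backbone rather than
# the data, pretraining is the dominant source of the biases.
cross_tuning = {
    (backbone, dataset): measure_bias_scores(finetune(backbone, dataset, seed=0))
    for backbone, dataset in product(
        ORIGINAL_PAIRING.keys(), ORIGINAL_PAIRING.values()
    )
}
```

Under this framing, comparing the spread of bias patterns within seed_variation against the patterns across cross_tuning is what allows biases to be attributed to the pretrained backbone rather than to the instruction data.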
findings that pretrained models already exhibit most of their eventual capabilities, with finetuning primarily enhancing them (Antoniades et al., 2024; Zhou et al., 2024). Other studies of cognitive biases focus on instruction-tuned models (Alsagheer et al., 2024; Shaikh et al., 2024), and recent work shows that these models often exhibit stronger biases not seen in their pretrained counterparts, implicating instruction tuning as a cause of behavioral biases (Itzhak et al., 2024). Complicating this analysis, training is inherently stochastic: minor variations, such as different random seeds, can lead to subtle behavioral shifts (Hayou et al., 2025), making it difficult to isolate the true source of these biases.