Performative Thinking? The Brittle Correlation Between CoT Length and Problem Complexity
Intermediate token generation (ITG), in which a model produces output tokens before emitting the final solution, has been proposed as a method to improve the performance of language models on reasoning tasks. While these reasoning traces, or Chains of Thought (CoTs), are correlated with performance gains, the mechanisms underlying them remain unclear. A prevailing assumption in the community has been to anthropomorphize these tokens as “thinking”, treating longer traces as evidence of greater problem-adaptive computation. In this work, we critically examine whether intermediate token sequence length reflects or correlates with problem difficulty. To do so, we train transformer models from scratch on derivational traces of the A* search algorithm, where the number of operations required to solve a maze problem provides a precise and verifiable measure of problem complexity. We first evaluate the models on trivial free-space problems, finding that even for the simplest tasks they often produce excessively long reasoning traces and sometimes fail to generate a solution at all. We then systematically evaluate the models on out-of-distribution problems and find that intermediate token length and ground-truth A* trace length are only loosely correlated. The few cases where correlation does appear are those in which the problems lie closer to the training distribution, suggesting that the effect arises from approximate recall rather than genuine problem-adaptive computation. In other words, the dominant factor is not the inherent computational complexity of the problem instance but its distributional distance from the training data. These results challenge the assumption that intermediate trace generation adapts to problem difficulty and caution against interpreting longer sequences in systems like R1 as automatically indicative of greater “thinking effort”.
Following previous work that elucidated important functional aspects of large-scale models through controlled small-scale experiments [29, 23, 39], and working within a “model organism” paradigm, we focus on fully controlled, open, and replicable models trained from scratch. Specifically, we train transformer models from scratch on derivational traces of the A* search algorithm. This approach offers two advantages: (1) the reasoning procedure is explicitly defined and verifiable, and (2) the complexity of input problems can be precisely manipulated.
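For concreteness, the following is a minimal sketch of how such a derivational trace can be produced on a 4-connected grid maze. The identifiers (e.g., `astar_trace`) and the exact trace vocabulary are illustrative assumptions, not the paper's released code; what matters is that each A* operation is logged, so the trace length is a verifiable measure of problem complexity.

```python
# Illustrative sketch (not the authors' exact implementation): run A* on a
# grid maze and record one trace entry per search operation. The number of
# entries is the verifiable complexity measure used as ground truth.
import heapq

def manhattan(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])

def astar_trace(walls, start, goal, width, height):
    """A* with a Manhattan heuristic; returns (trace, path).
    `walls` is a set of blocked (x, y) cells. Each expansion logs a
    'close' entry and each child generation logs a 'create' entry."""
    frontier = [(manhattan(start, goal), 0, start, [start])]
    closed = set()
    trace = []
    while frontier:
        f, g, node, path = heapq.heappop(frontier)
        if node in closed:
            continue
        closed.add(node)
        trace.append(("close", node, g, f - g))  # serialized as tokens for training
        if node == goal:
            return trace, path
        x, y = node
        for child in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            cx, cy = child
            if 0 <= cx < width and 0 <= cy < height and child not in walls and child not in closed:
                trace.append(("create", child, g + 1))
                heapq.heappush(frontier, (g + 1 + manhattan(child, goal), g + 1, child, path + [child]))
    return trace, None  # unsolvable instance
```

In free space the Manhattan heuristic is exact, so A* requires comparatively few operations and the ground-truth trace is short; this is precisely what makes excessively long model outputs on such instances diagnostic.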
Within this framework, we evaluate whether intermediate token length in transformer-based models genuinely reflects problem difficulty. We first evaluate the model on the simplest path-finding tasks: free-space problems without obstacles. Solving these should, in principle, require minimal computation, yet we find that the model often performs excessively long computations and in many cases fails to produce a solution altogether, calling into question the interpretation that intermediate token length tracks problem complexity. We then evaluate across diverse maze-generation algorithms. On distributions close to the training distribution, intermediate token lengths show some alignment with problem difficulty. However, this correlation vanishes on more structurally distinct distributions, where trace length and problem complexity become entirely decoupled. These results suggest that the apparent adaptivity of intermediate token length is not evidence of problem-sensitive computation; instead, it largely reflects approximate recall of the training distribution rather than any genuine alignment between computation length and problem difficulty. Our findings thus suggest that the commonly assumed link between “thinking time” (as measured by reasoning trace length) and task difficulty is misleading.
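The decoupling claim can be quantified with a rank correlation between the two length measures, computed separately per test distribution. The sketch below is illustrative (the function name and data layout are ours): the ground-truth lengths come from the `astar_trace` sketch above, and the model lengths from counting generated intermediate tokens.

```python
# Illustrative sketch of the length-correlation analysis, assuming per-instance
# lists of model-generated intermediate-token counts and ground-truth A* trace
# lengths have been collected for each maze-generation distribution.
from scipy.stats import spearmanr

def length_correlation(model_lengths, ground_truth_lengths):
    """Spearman rank correlation between model trace lengths and
    ground-truth A* trace lengths over a set of test instances."""
    rho, p_value = spearmanr(model_lengths, ground_truth_lengths)
    return rho, p_value

# Hypothetical usage: report the correlation per distribution, e.g.
# in-distribution mazes vs. structurally distinct maze generators.
# for name, (m, g) in per_distribution_lengths.items():
#     print(name, length_correlation(m, g))
```

Spearman correlation is a natural choice here because it tests for monotone agreement between trace length and complexity without assuming the relationship is linear.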