Base Models Know How to Reason, Thinking Models Learn When
Why do thinking language models like DeepSeek R1 outperform their base counterparts? Despite consistent performance gains, it remains unclear to what extent thinking models learn entirely new reasoning capabilities or repurpose ones already present in the base model. In this work, we propose a hybrid model in which we activate reasoning mechanisms in base models at the right time to elicit thinking-model-level reasoning chains, implying that thinking models exploit pre-existing capabilities. To ground our analysis, we introduce an unsupervised, bottom-up approach for uncovering human-interpretable reasoning behaviors in thinking models. This approach provides an unbiased method for discovering reasoning behaviors without imposing manual or LLM-derived assumptions. Across three base and four thinking models, on GSM8K and MATH500, our hybrid model recovers up to 91% of the performance gap to thinking models without any weight updates, while steering only 12% of tokens. Concretely, our empirical setup provides a simple, causal way to test the effectiveness of the reasoning mechanisms that already exist in base models: we invoke them directly and measure the resulting task performance. More broadly, these results reframe our understanding of how thinking models are trained: pre-training is when models acquire most of their reasoning mechanisms, and post-training teaches models to deploy these mechanisms at the right time, making efficient use of their inference-time compute.
This finding provides strong evidence that reinforcement learning with verifiable rewards (RLVR; Yue et al., 2025), used to train thinking models, primarily teaches when to activate pre-existing skills rather than how to execute them. This perspective has direct implications for more efficient training of reasoning in future language models.
In this section, we explore the main question of our paper: do base models already possess the reasoning mechanisms of thinking models, and if so, can we induce these behaviors through targeted interventions? Our hypothesis, supported by preliminary evidence in prior work (Ward et al., 2025a; Hou et al., 2023; Galichin et al., 2025), is that non-thinking models may already contain the latent capacity for sophisticated reasoning patterns, such as uncertainty estimation and backtracking, but lack the ability to effectively determine when to employ these mechanisms.
Following Marjanović et al. (2025), we define a reasoning behavior (or reasoning mechanism) as an individual cognitive-like step or operation that a model performs as part of its chain-of-thought when working through a problem. Such steps (for example, verifying an intermediate result, backtracking to revise an approach, or setting a subgoal) serve as interpretable, compositional building blocks of the model's reasoning process.
To investigate this hypothesis, we propose a hybrid approach that combines the strengths of base models with the decision-making capabilities of thinking models. In other words, the hybrid model is powered by the base model, but driven by the thinking model. We control the base model with steering vectors: directions in activation space that, when added to intermediate activations, induce target behaviors (Turner et al., 2023; Arditi et al., 2024; Zou et al., 2023; Panickssery et al., 2023). This leverages the linear representation hypothesis, which posits that certain concepts and behaviors in neural networks are represented as directions in activation space. The details of how we find and compute the steering vectors are provided in Appendix C.
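To make the intervention concrete, below is a minimal sketch of activation steering via a forward hook, assuming a Llama-style decoder loaded with Hugging Face transformers. The model name, layer index, coefficient alpha, and the random placeholder vector are illustrative assumptions only; how we actually find and compute the steering vectors is described in Appendix C, and unlike this sketch, which adds the vector at every position, our hybrid model steers only selected tokens.

```python
# Minimal activation-steering sketch (illustrative; not the exact pipeline from Appendix C).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_name = "meta-llama/Llama-3.1-8B"  # hypothetical base-model choice
tok = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name, torch_dtype=torch.bfloat16)

layer_idx, alpha = 20, 4.0                            # assumed hyperparameters
v_behavior = torch.randn(model.config.hidden_size)    # placeholder for an extracted direction
v_behavior = v_behavior / v_behavior.norm()

def steer(module, inputs, output):
    # Decoder layers return a tuple whose first element is the hidden states;
    # adding alpha * v at each position nudges generation toward the target behavior.
    hidden = output[0] + alpha * v_behavior.to(output[0].dtype).to(output[0].device)
    return (hidden,) + output[1:]

handle = model.model.layers[layer_idx].register_forward_hook(steer)
prompt = "Solve: If 3x + 5 = 20, what is x?"
ids = tok(prompt, return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=128)
print(tok.decode(out[0], skip_special_tokens=True))
handle.remove()  # stop steering once the intervention window is over
```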
Once we have extracted the causal vectors that induce reasoning mechanisms in base models using the approach in Section 2, we let a thinking model decide when to activate these steering vectors: it analyzes the base model's generation and identifies appropriate moments at which to induce specific reasoning mechanisms. This flow is depicted in Figure 1. If this hybrid model performs comparably to dedicated thinking models, it would provide evidence that the fundamental reasoning mechanisms already exist within base models, and that thinking models primarily learn when to deploy these mechanisms optimally rather than developing entirely new capabilities.
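The control flow of the hybrid model can be summarized with the stubbed sketch below. The helper names, the fixed check interval, and the greedy single-token step are our illustrative assumptions rather than the exact interface: in the actual setup, controller_suggests would query the thinking model about the base model's partial generation, and set_active_behavior would attach or remove the corresponding steering hook (as in the previous sketch) on the base model.

```python
# Stubbed sketch of the hybrid decoding loop (illustrative component names and interval).
from typing import Optional

def controller_suggests(partial_text: str) -> Optional[str]:
    """Stub for the thinking model: returns a behavior name (e.g., "backtracking")
    when one should be induced at this point, or None to leave the base model unsteered."""
    return None

def set_active_behavior(behavior: Optional[str]) -> None:
    """Stub: would attach or remove the steering hook for `behavior` on the base model."""
    pass

def base_next_token(partial_text: str) -> str:
    """Stub: one greedy decoding step of the (possibly steered) base model."""
    return "<eos>"

def hybrid_generate(prompt: str, max_new_tokens: int = 512, check_every: int = 16) -> str:
    text = prompt
    for step in range(max_new_tokens):
        if step % check_every == 0:
            # The thinking model inspects the base model's partial generation and
            # decides whether a reasoning behavior should be induced here.
            set_active_behavior(controller_suggests(text))
        token = base_next_token(text)  # the base model does the actual reasoning
        if token == "<eos>":
            break
        text += token
    return text

print(hybrid_generate("Solve: If 3x + 5 = 20, what is x?"))
```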