Reinforcement Learning for LLMs · LLM Reasoning and Architecture

Does RL teach reasoning, or just when to use it?

Post-training RL gets credit for building reasoning into language models, but emerging evidence suggests base models already possess this capability. The question is whether RL creates new reasoning skills or simply teaches deployment timing.

Note · 2026-02-22 · sourced from Reasoning Architectures

Post angle — Medium/LinkedIn

The dominant story: DeepSeek-R1, OpenAI's o1, and their successors acquire reasoning capability through RL post-training. On this account, RL teaches models capabilities they didn't have before: to think step by step, to backtrack, to verify.

The emerging counter-evidence is striking. A hybrid model that pairs a base model's weights with a thinking model's deployment decisions, with zero weight updates, recovers 91% of the performance gap to thinking models while steering only 12% of tokens. Base models already spontaneously produce reasoning traces identical to thinking-model traces when sampled sufficiently. Critique fine-tuning (CFT) on a single problem achieves reasoning gains on par with RL from verifiable rewards (RLVR). Activation-space vectors encoding "backtracking" and "uncertainty estimation" already exist in base-model hidden states before any RL.
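
The hybrid result is the easiest of these to picture in code. Below is a minimal sketch of one way such a hybrid decode loop could work. The checkpoint names are placeholders, and the total-variation router standing in for "deployment decisions" is my own assumption; the source's method may detect decision points quite differently.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder checkpoint names; any base/thinking pair sharing a tokenizer works.
BASE_ID = "example/base-model"
THINKING_ID = "example/thinking-model"

tok = AutoTokenizer.from_pretrained(BASE_ID)
base = AutoModelForCausalLM.from_pretrained(BASE_ID)
thinker = AutoModelForCausalLM.from_pretrained(THINKING_ID)

@torch.no_grad()
def hybrid_decode(prompt: str, max_new_tokens: int = 256, threshold: float = 0.5):
    """Generate with the base model, deferring to the thinking model only at
    'decision points': steps where the two next-token distributions diverge.
    No weights are updated anywhere; only the per-step sampling choice changes."""
    ids = tok(prompt, return_tensors="pt").input_ids
    steered = 0
    for _ in range(max_new_tokens):
        # Full re-forward each step for clarity; a real loop would use a KV cache.
        p_base = base(ids).logits[0, -1].softmax(-1)
        p_think = thinker(ids).logits[0, -1].softmax(-1)
        # Total-variation distance as a crude decision-point detector (assumption).
        tv = 0.5 * (p_base - p_think).abs().sum()
        use_thinker = tv.item() > threshold
        steered += int(use_thinker)
        dist = p_think if use_thinker else p_base
        next_id = dist.argmax().view(1, 1)
        ids = torch.cat([ids, next_id], dim=-1)
        if tok.eos_token_id is not None and next_id.item() == tok.eos_token_id:
            break
    return tok.decode(ids[0], skip_special_tokens=True), steered
```

If the "when, not how" story holds, `steered` should stay a small fraction of the generated tokens while most of the accuracy gap to the thinking model closes.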

The reframe: pre-training is when reasoning capability is acquired; RL post-training teaches when to deploy it.

This is not a trivial distinction. "When" training is cheaper, less data-hungry, and less fragile than "how" training. If the capability already exists, elicitation methods (structured tool-calling, steering vectors, targeted fine-tuning on single problems) become much more attractive than full RL pipelines; a steering-vector sketch follows below.
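
To make "steering vectors" concrete: the usual recipe is to take the difference of mean activations between contrastive prompt sets (say, traces that backtrack versus traces that don't) and add that direction into a mid-layer residual stream at inference time. The sketch below assumes a Llama-style module layout (`model.model.layers`); the checkpoint name, layer index, scale `ALPHA`, and the random placeholder vector are all assumptions, not values from the source.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "example/base-model"  # placeholder checkpoint
LAYER = 14                       # assumed injection layer; tuned per model
ALPHA = 4.0                      # assumed steering strength

tok = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(MODEL_ID)

# Placeholder direction. In practice this would be a precomputed contrast
# vector, e.g. mean(backtracking activations) - mean(non-backtracking ones).
vec = torch.randn(model.config.hidden_size)
vec /= vec.norm()

def add_steering(module, inputs, output):
    # Decoder layers typically return a tuple with hidden states first;
    # handle both tuple and bare-tensor outputs to be safe.
    hs = output[0] if isinstance(output, tuple) else output
    hs = hs + ALPHA * vec.to(hs.dtype)
    return (hs,) + output[1:] if isinstance(output, tuple) else hs

handle = model.model.layers[LAYER].register_forward_hook(add_steering)
ids = tok("Solve step by step: 17 * 24 = ?", return_tensors="pt").input_ids
out = model.generate(ids, max_new_tokens=128)
handle.remove()  # detach the hook so later generations run unsteered
print(tok.decode(out[0], skip_special_tokens=True))
```

The appeal over a full RL pipeline is plain in the sketch: no reward model, no rollouts, no optimizer state; just one vector and a forward hook.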

The hook for readers: "We've been crediting the locksmith for the key."

Connections: Does RL teach reasoning or just when to use it?, Do base models already contain hidden reasoning ability?, Can modular cognitive tools boost LLM reasoning without training?


Source: Reasoning Architectures

Original note title

thinking models learn when, not how: the case that RL post-training is a deployment optimizer, not a capability creator