
Can small models reason well by just learning output format?

Does reasoning performance depend primarily on adapting how models express outputs rather than on acquiring new knowledge? The Tina paper tests this by applying LoRA to a 1.5B-parameter model during RL-based reasoning training.

Note · 2026-02-22 · sourced from Reasoning Methods CoT ToT

The Tina paper trains a 1.5B parameter model with LoRA (low-rank adaptation) applied during RL post-training, keeping the base model weights frozen except for the LoRA modules. This model achieves reasoning performance competitive with — and sometimes surpassing — full-parameter RL reasoning models trained on the same base, despite using a tiny fraction of post-training compute.
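The frozen-base setup can be sketched in a few lines. This is a minimal illustration, not the paper's implementation: `W` stands in for a pretrained weight matrix that stays frozen, while the low-rank factors `A` and `B` are the only trainable parameters. The dimensions and rank are arbitrary placeholders.

```python
import numpy as np

rng = np.random.default_rng(0)

d_in, d_out, rank = 64, 64, 8  # hypothetical sizes; real models use much larger d

# Frozen pretrained weight: never updated during RL post-training.
W = rng.standard_normal((d_out, d_in))

# Trainable LoRA factors. B starts at zero, so the adapter is a no-op at init
# and the adapted model initially behaves exactly like the base model.
A = rng.standard_normal((rank, d_in)) * 0.01
B = np.zeros((d_out, rank))

def lora_forward(x, scale=1.0):
    """y = W x + scale * B(A x): base output plus a rank-`rank` correction."""
    return W @ x + scale * (B @ (A @ x))

x = rng.standard_normal(d_in)
# At initialization B == 0, so the output matches the frozen base exactly.
assert np.allclose(lora_forward(x), W @ x)
```

During training, gradients flow only into `A` and `B`; the base weights, and whatever knowledge they encode, are untouched.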

The authors' hypothesis for why LoRA works so well is the Rapid Reasoning Format Adaptation Hypothesis: what RL post-training primarily teaches a small model is not new knowledge about the world, but how to organize its outputs in a reasoning-trace format. LoRA, which modifies only a low-dimensional subspace of the weight matrix, is sufficient to adapt the output format while the base model's pre-existing knowledge remains intact.

This hypothesis is supported by two independent lines of evidence. First, small LMs store less factual knowledge than large ones yet can still reason effectively, suggesting that reasoning and knowledge are separable capabilities. Second, RL post-training on derivational traces selects for outputs that follow a reasoning-trace style and arrive at correct answers; the selection pressure acts on format, not on knowledge retrieval.

The practical implication: if you want to add reasoning capability to a deployed model cheaply, LoRA RL post-training may be sufficient. Full-parameter post-training is appropriate when knowledge integration is needed (new domain facts, new task-specific capabilities). Format adaptation can be achieved with a small fraction of that compute.

This is both an optimization for the question "Can simple rewards alone teach complex domain reasoning?" and a qualification of it: what RL "emerges" may be mostly format discovery, not new knowledge. The emergence finding is real, but its mechanism may be simpler than it looks: the model already had the knowledge, and RL teaches it to express that knowledge in a productive output format.

Note: this is an OPEN hypothesis, pending validation across a broader range of tasks and model sizes.

