Can a single training example unlock mathematical reasoning?
Does minimal data suffice to activate latent reasoning capabilities in language models? This note explores whether a single training example can produce dramatic performance gains comparable to those from much larger datasets.
A single training example in RLVR (reinforcement learning with verifiable rewards) is sufficient to produce dramatic mathematical reasoning improvement: MATH500 accuracy jumps from 36.0% to 73.6% for Qwen2.5-Math-1.5B, matching the performance of training on a 1.2k-example DeepScaleR subset. Two examples slightly exceed both (74.8%). The pattern replicates across model families (Qwen, Llama, DeepSeek), RL algorithms (GRPO, PPO), and different choices of math example.
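What "verifiable" means here can be made concrete: the reward is a binary check of the model's final answer against a known ground truth, with no learned reward model. A minimal sketch, assuming a `\boxed{...}` answer format and exact string matching (the paper's actual extraction and equivalence-checking rules may be more sophisticated):

```python
import re

def verifiable_reward(model_output: str, ground_truth: str) -> float:
    """Binary RLVR reward: 1.0 iff the final boxed answer matches.

    Hypothetical sketch -- real pipelines often add symbolic
    equivalence checks (e.g. 1/2 vs 0.5) on top of string matching.
    """
    match = re.search(r"\\boxed\{([^}]*)\}", model_output)
    if match is None:
        return 0.0  # no parseable answer counts as incorrect
    return 1.0 if match.group(1).strip() == ground_truth.strip() else 0.0
```

Because the signal is computed, not annotated, the same single example can be re-verified on every rollout for arbitrarily many training steps.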
The most striking phenomenon is post-saturation generalization: training accuracy on the single example rapidly reaches 100%, yet test accuracy continues to improve for approximately 1,400 more training steps. The model has perfectly memorized its one example but keeps getting better at unseen problems. Even after eventual overfitting — when training outputs become "incomprehensible multilingual gibberish mixed with correct solutions" — test performance and output interpretability remain strong.
This finding is the extreme case of "Do base models already contain hidden reasoning ability?". One example is not teaching reasoning; it provides the minimal activation signal for the RL optimization process to reshape the sampling distribution. The entropy loss component encourages diverse output exploration, while the single training example acts as "implicit regularization": explorations that fail on the learned example are punished, so the one example serves as a verification anchor for exploration.
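The interplay between these two terms can be sketched as a REINFORCE-style objective with an entropy bonus. This is an illustrative toy, not the paper's exact GRPO objective; the entropy weight `beta` is an assumed value:

```python
import numpy as np

def pg_loss_with_entropy(logits, action_ids, advantages, beta=0.01):
    """Policy-gradient loss plus entropy bonus (illustrative sketch).

    The advantage-weighted log-prob term punishes samples that fail
    verification on the single example, while the entropy term
    (weight `beta`, assumed) keeps the sampling distribution diverse
    enough to keep exploring.
    """
    # softmax over the vocabulary at each position
    z = logits - logits.max(axis=-1, keepdims=True)
    probs = np.exp(z) / np.exp(z).sum(axis=-1, keepdims=True)
    log_probs = np.log(probs)
    # log-probability of the sampled tokens
    chosen = np.take_along_axis(
        log_probs, action_ids[..., None], axis=-1
    ).squeeze(-1)
    entropy = -(probs * log_probs).sum(axis=-1)
    # minimize: negative advantage-weighted log-prob, minus entropy bonus
    return -(advantages * chosen).mean() - beta * entropy.mean()
```

With uniform logits the policy-gradient term vanishes for balanced advantages and only the entropy bonus remains, which is exactly the regime where the entropy term keeps pushing the model toward diverse outputs.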
Cross-domain generalization also emerges: a single math example improves performance on problems from different mathematical subdomains. Self-reflection frequency increases spontaneously during training, with words like "rethink," "recheck," and "recalculate" appearing more frequently — the model develops metacognitive behaviors from a single data point.
Following "Can models improve themselves on tasks without verifiable answers?", the 1-shot result pushes the minimum viable dataset even further: not 1,000 demonstrations, but one.
Source: RLVR
Related concepts in this collection
- Do base models already contain hidden reasoning ability? Explores whether reasoning capability emerges during pre-training as a latent feature rather than being created by post-training methods like reinforcement learning or fine-tuning. (1-shot RLVR is the most extreme confirmation.)
- Can models improve themselves on tasks without verifiable answers? Most self-improvement methods require objective correctness signals, limiting them to math and code. Can models self-improve on open-ended instruction tasks where answers can't be automatically verified? (1-shot pushes the frontier far beyond 1,000.)
- Does RL teach reasoning or just when to use it? Does reinforcement learning in thinking models actually create new reasoning abilities, or does it simply teach existing capabilities when to activate? This matters for understanding where reasoning truly emerges. (Post-saturation generalization shows the learning continues beyond the data.)
- Does reflection in reasoning models actually correct errors? When reasoning models reflect on their answers, do they genuinely fix mistakes, or merely confirm what they already decided? Understanding this matters for designing better training and inference strategies. (1-shot RLVR spontaneously increases self-reflection frequency.)
Original note title
one training example is sufficient to activate mathematical reasoning in rlvr — post-saturation generalization continues after training accuracy reaches 100 percent