Reinforcement Learning for LLMs

Can generative reasoning improve process reward model efficiency?

Do process reward models that generate reasoning before judging outperform traditional discriminative approaches? This note explores whether letting verifiers think, not just score, changes what test-time scaling can achieve.

Note · 2026-02-22 · sourced from RLVR
How should researchers navigate LLM reasoning research? What does reward learning actually do to model reasoning?

Process reward models (PRMs) are central to test-time scaling but face three limitations: limited generalization across models and tasks, dependence on scalar value prediction that ignores LLM generative abilities, and inability to scale test-time verification compute. Two converging approaches solve these by reframing process supervision as a generative task.

GenPRM performs Chain-of-Thought reasoning and code verification before judging each reasoning step. Using Relative Progress Estimation (RPE), which derives relative labels from changes in estimated success rather than hard step labels, together with a rationale-synthesis framework backed by code verification, GenPRM achieves strong results with only 23K training examples from the MATH dataset. A 1.5B GenPRM outperforms GPT-4o on ProcessBench; the 7B version surpasses Qwen2.5-Math-PRM-72B.
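The relative-labeling idea behind RPE can be sketched in a few lines. The Monte Carlo success estimates and the zero threshold below are illustrative assumptions, not the paper's exact formulation:

```python
# Sketch of Relative Progress Estimation (RPE): instead of assigning a hard
# 0/1 label to each step in isolation, a step is judged by whether it
# *improves* the estimated chance of reaching a correct final answer
# relative to the previous prefix. Values and threshold are illustrative.

def rpe_labels(mc_success, threshold=0.0):
    """mc_success[i] = estimated success rate after step i
    (index 0 = prompt only, before any reasoning step).
    Returns a relative label per reasoning step."""
    labels = []
    for prev, curr in zip(mc_success, mc_success[1:]):
        progress = curr - prev  # relative progress contributed by this step
        labels.append(1 if progress > threshold else 0)
    return labels

# A step that drops the success estimate (0.6 -> 0.2) is labeled 0
# even though later steps might still recover.
print(rpe_labels([0.5, 0.6, 0.2, 0.9]))  # [1, 0, 1]
```

The point of the relative criterion is that a step's label reflects its marginal contribution, which is less noisy than asking whether the step sits on some globally correct path.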

ThinkPRM capitalizes on the inherent reasoning abilities of long CoT models, fine-tuning with as few as 8K synthetic verification chains. Using only 1% of the process labels in PRM800K, ThinkPRM outperforms LLM-as-a-Judge and discriminative verifiers across ProcessBench, MATH-500, and AIME '24. In out-of-domain evaluation (GPQA-Diamond, LiveCodeBench), it surpasses discriminative PRMs trained on the full PRM800K by 8% and 4.5% respectively.
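Because a generative PRM emits a verification chain rather than a scalar, step scores must be parsed out of its text. The output convention below (`Step k: ... \boxed{correct}`) is an assumed format for illustration; real models vary:

```python
import re

# Hypothetical parser for a generative verifier's output: the model writes
# a reasoning chain and marks each step's verdict in a \boxed{...} tag.
# This format is an assumption made for the sketch, not a fixed standard.

def parse_verification(chain: str) -> list[bool]:
    verdicts = re.findall(
        r"Step \d+:.*?\\boxed\{(correct|incorrect)\}", chain, re.S
    )
    return [v == "correct" for v in verdicts]

chain = (
    "Step 1: The factoring is valid. \\boxed{correct}\n"
    "Step 2: Sign error when expanding the square. \\boxed{incorrect}\n"
)
print(parse_verification(chain))  # [True, False]
```

This is also where verification compute scales: sampling a longer or additional chain before parsing yields a more thorough judgment, something a scalar head cannot do.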

The key structural advantage: generative PRMs uniquely support scaling generator and verifier compute at the same time. A discriminative PRM outputs a fixed scalar regardless of budget; a generative PRM can be prompted to think longer, producing more thorough verification. Under the same token budget, ThinkPRM scales verification compute more effectively than LLM-as-a-Judge, outperforming it by 7.2% on ProcessBench.
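How the verifier's step judgments feed back into test-time scaling can be sketched as verifier-guided best-of-N. The candidates, scores, and min-aggregation choice here are illustrative assumptions (min is one common way to aggregate step scores, not the only one):

```python
# Sketch of verifier-guided best-of-N selection: each candidate solution
# carries per-step verifier probabilities, and the solution-level score is
# the minimum over steps -- a chain is only as sound as its weakest step.
# Candidates and probabilities below are made up for illustration.

def solution_score(step_probs):
    return min(step_probs)

def best_of_n(candidates):
    """candidates: list of (answer, per-step verifier probabilities)."""
    return max(candidates, key=lambda c: solution_score(c[1]))[0]

candidates = [
    ("answer A", [0.9, 0.4, 0.8]),  # one shaky step drags the score down
    ("answer B", [0.7, 0.7, 0.7]),  # uniformly solid chain
]
print(best_of_n(candidates))  # answer B
```

With a generative verifier, each `step_probs` entry can itself be refined by spending more verification tokens, which is the simultaneous generator-and-verifier scaling the note describes.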

On the question "Can judges that reason about reasoning outperform step classifiers?", GenPRM and ThinkPRM provide the strongest evidence and the specific mechanisms. On "Can reward models benefit from reasoning before scoring?", generative PRMs establish the paradigm: the verifier should think before judging, just as the generator should think before answering.


Source: RLVR

