GHPO: Adaptive Guidance for Stable and Efficient LLM Reinforcement Learning

Paper · arXiv 2507.10628 · Published July 14, 2025
RLVR

Reinforcement Learning with Verifiable Rewards (RLVR) has recently emerged as a powerful paradigm for facilitating the self-improvement of large language models (LLMs), particularly in the domain of complex reasoning tasks. However, prevailing on-policy RL methods often contend with significant training instability and inefficiency. This is primarily due to a capacity-difficulty mismatch, where the complexity of training data frequently outpaces the model’s current capabilities, leading to critically sparse reward signals and stalled learning progress. This challenge is particularly acute for smaller, more resource-efficient LLMs. To overcome this, we introduce the Guided Hybrid Policy Optimization (GHPO), a novel difficulty-aware reinforcement learning framework. GHPO dynamically calibrates task difficulty by employing adaptive prompt refinement to provide targeted guidance. This unique approach adaptively balances direct imitation learning for problems currently beyond the model’s reach with exploration-based reinforcement learning for more manageable tasks, effectively creating a smooth and optimized learning curriculum. Extensive experiments demonstrate that GHPO achieves an average performance gain of approximately 5% across six challenging mathematics benchmarks, consistently outperforming strong on-policy reinforcement learning and curriculum learning baselines.

In this work, drawing inspiration from imitation learning techniques like SFT, we introduce a simple yet effective solution: guiding the model with partial ground-truth solution traces. By conditioning the model on these traces, we steer its output distribution closer to the correct answer, which alleviates the reward sparsity problem for difficult samples. However, a naive application of this technique risks making the training data too easy, potentially reducing learning efficiency on problems the model could have solved independently. To address this, we propose the Guided Hybrid Policy Optimization (GHPO) framework. GHPO combines online reinforcement learning (RL) and imitation learning within a unified framework: a dynamic mechanism first assesses sample difficulty, then adaptive prompt refinement provides varying levels of guidance. For problems the model can likely handle, GHPO primarily uses standard on-policy RL, encouraging exploration and self-discovery. For more challenging samples, it shifts to a form of imitation learning by providing explicit solution traces. This hybrid approach automatically balances exploration with direct guidance, preserving training efficiency on manageable tasks while effectively guiding the model through difficult ones, ultimately improving both training stability and sample efficiency.
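
To make the switching logic concrete, here is a minimal sketch of such a difficulty-aware fallback. It is not the paper's implementation: `sample_responses` and `is_correct` are hypothetical sampler and verifier interfaces, and the single fixed `guidance_ratio` stands in for the adaptive prompt refinement described above.

```python
# Minimal sketch (assumed interfaces, not GHPO's actual code) of a
# difficulty-aware switch: if none of the G on-policy rollouts for a query
# is verified correct, regenerate from a guided prompt that exposes a prefix
# of the ground-truth solution trace; otherwise keep the standard rollouts.
from typing import Callable, List

def build_rollouts(
    question: str,
    solution_trace: str,
    sample_responses: Callable[[str, int], List[str]],  # hypothetical sampler
    is_correct: Callable[[str], bool],                   # hypothetical verifier
    G: int = 8,
    guidance_ratio: float = 0.25,
) -> List[str]:
    responses = sample_responses(question, G)
    if any(is_correct(r) for r in responses):
        return responses  # manageable task: keep exploration-based RL
    # Problem is currently beyond the model's reach: refine the prompt with a
    # partial solution trace and resample, shifting toward imitation learning.
    hint = solution_trace[: int(len(solution_trace) * guidance_ratio)]
    guided_prompt = f"{question}\n\nPartial solution:\n{hint}"
    return sample_responses(guided_prompt, G)
```

The key design point is that guidance is injected only after the on-policy rollouts have demonstrably failed, so problems the model can already solve continue to be trained with pure exploration.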

  1. Training Inefficiency: A zero advantage results in a vanishing policy gradient for that specific query. The model receives no learning signal, and the computational effort spent generating and evaluating the G responses is entirely wasted. When a training batch contains a high proportion of such difficult queries, the majority of the data fails to contribute to policy improvement, drastically reducing overall training efficiency (see the sketch after this list).

  2. Training Instability: The number of “effective” queries—those yielding a non-zero learning signal—can fluctuate dramatically between gradient updates. This variance in the effective batch size introduces significant noise into the gradient estimates, which can destabilize the training process and impede reliable convergence.
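
Both failure modes stem from how group-normalized advantages behave when every rollout in a group receives the same reward. The following sketch assumes a GRPO-style normalization rather than GHPO's exact objective: a uniformly failed group yields all-zero advantages and therefore no policy-gradient contribution for that query.

```python
# Minimal sketch (illustrative, not the paper's code): GRPO-style
# group-normalized advantages. When all G sampled responses for a query get
# the same reward (e.g., all 0.0 on a problem beyond the model's reach), the
# group mean equals every reward, so every advantage is zero and that query
# contributes nothing to the policy gradient.
import numpy as np

def group_advantages(rewards: np.ndarray, eps: float = 1e-6) -> np.ndarray:
    """Normalize rewards within a group of G rollouts for a single query."""
    return (rewards - rewards.mean()) / (rewards.std() + eps)

G = 8
hard_query = np.zeros(G)                                   # every rollout fails verification
mixed_query = np.array([1., 0., 0., 1., 0., 0., 0., 0.])   # some rollouts succeed

print(group_advantages(hard_query))   # -> all zeros: no learning signal
print(group_advantages(mixed_query))  # -> non-zero: informative gradient
```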

As established, our core strategy is to integrate guidance directly into the reinforcement learning loop, conditioning the policy on partial ground-truth traces to overcome the reward sparsity detailed in Section 2.3. This approach is motivated by Assumption 1, which posits that such guidance increases the likelihood of success on difficult problems, thereby providing a valid learning signal where one would otherwise be absent. Notably, ground-truth guidance in the form of solution traces is available for most mathematics datasets. During RLVR training, however, this valuable solution-trace information is typically discarded in favor of using only the final ground-truth answer.
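
As a concrete illustration of this overlooked signal, a typical math training record pairs a final answer (consumed by the verifiable reward) with a full solution trace. The sketch below uses made-up field names and a hypothetical `trace_prefix` helper to show how such a trace can be split at step boundaries so that an increasing prefix serves as guidance instead of being discarded.

```python
# Minimal sketch (illustrative record layout, not a specific dataset schema):
# keep the solution trace alongside the final answer and expose only a
# prefix of its steps as guidance.
example = {
    "question": "What is 17 * 24?",
    "solution_trace": "17 * 24 = 17 * 20 + 17 * 4.\n17 * 20 = 340.\n17 * 4 = 68.\n340 + 68 = 408.",
    "final_answer": "408",   # the only field a purely answer-based reward would use
}

def trace_prefix(trace: str, stage: int, num_stages: int = 4) -> str:
    """Return roughly the first stage/num_stages fraction of the solution steps."""
    steps = trace.split("\n")
    k = max(1, (len(steps) * stage) // num_stages)
    return "\n".join(steps[:k])

for stage in range(1, 4):
    print(f"--- guidance level {stage} ---")
    print(trace_prefix(example["solution_trace"], stage))
```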