Local Coherence or Global Validity? Investigating RLVR Traces in Math Domains

Paper · arXiv 2510.18176 · Published October 20, 2025
RLVR · Reasoning Critiques

Reinforcement Learning with Verifiable Rewards (RLVR)-based post-training of Large Language Models (LLMs) has been shown to improve accuracy on reasoning tasks and continues to attract significant attention. Existing RLVR methods, however, typically treat all tokens uniformly without accounting for token-level advantages. These methods primarily evaluate performance based on final answer correctness or Pass@K accuracy, yet claim that RL post-training leads to improved reasoning traces. This motivates our investigation into the effect of RL post-training on intermediate tokens, which are not directly incentivized. To study this, we design an experimental setup using the GRPO algorithm with the Qwen-2.5-0.5B model on the GSM8K dataset. We introduce trace coherence, a First-Order Logic (FOL)-based measure that captures the consistency of reasoning steps by identifying errors in the traces. We distinguish between trace validity and trace coherence: the former entails logical soundness, while the latter captures only local consistency through the absence of errors. Our results show that RL post-training improves trace coherence overall, with the most significant gains on problems where the base model fails but the RL model succeeds. Surprisingly, RL enhances local coherence without necessarily producing valid or correct solutions. This highlights a crucial distinction: improved local coherence in reasoning steps does not guarantee final answer correctness. We argue that claims of improved reasoning via RL must be examined with care, as they may rest on improved trace coherence, which need not translate into fully valid mathematical proofs.

Following the release of DeepSeek-R1 [6], post-training Large Language Models (LLMs) using Reinforcement Learning with Verifiable Rewards (RLVR) has gained widespread attention. Since then, several works have expanded on reinforcement-learning-based post-training by altering the loss function, modifying advantage estimation, and utilizing base model resets [10, 18, 16, 9]. However, recent analysis by [12] highlights structural limitations of current RLVR approaches, particularly the uniform distribution of advantages across all tokens. Further, [17] argued that the accuracy of RLVR models cannot surpass that of the base model, demonstrating empirically that their Pass@K accuracy drops below that of the base model as K increases. Also, [4] shows that performance gains can be predicted from the entropy of the base model.
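To make the uniform-advantage limitation concrete, the following is a minimal sketch of GRPO-style credit assignment, assuming group-normalized rewards broadcast uniformly over tokens; the function and variable names are illustrative, not taken from the paper or any particular implementation.

```python
import torch

def grpo_token_advantages(group_rewards: torch.Tensor,
                          response_lengths: list[int]) -> list[torch.Tensor]:
    """Sketch of GRPO credit assignment for one group of sampled responses.

    Each response receives a single scalar advantage -- its verifier
    reward normalized against the group mean and std -- which is then
    repeated for every token of that response.
    """
    mean, std = group_rewards.mean(), group_rewards.std()
    normalized = (group_rewards - mean) / (std + 1e-8)
    # Broadcast each per-response advantage uniformly over its tokens:
    # intermediate reasoning tokens receive no individual credit.
    return [adv.expand(n) for adv, n in zip(normalized, response_lengths)]

# Example: four sampled answers, reward 1.0 where the final answer verified.
advantages = grpo_token_advantages(torch.tensor([1.0, 0.0, 0.0, 1.0]),
                                   [12, 7, 9, 15])
```

This is the mechanism behind the structural limitation noted by [12]: the verifier signal carries no information about which intermediate step earned the reward.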

These works, however, primarily focus on the limitations of RLVR in terms of final answer accuracy and do not examine its effect on intermediate tokens, i.e., reasoning traces. Since RLVR verifies only the final answer and distributes rewards uniformly across all tokens, its impact on the reasoning process at the token level remains largely uninvestigated. While it is frequently claimed that RLVR improves reasoning, the effect of integrating verifier signals during training on the structure and quality of reasoning traces has not been formally studied.

Since formal verification of intermediate reasoning steps is not tractable at scale, we cannot directly evaluate trace validity. Instead, we introduce a proxy metric called trace coherence, which reflects the consistency of reasoning steps. Note that while trace validity implies coherence, the converse need not hold. Specifically, we measure trace coherence by analyzing the presence (or absence) of errors in the reasoning steps, where errors are defined using a First-Order Logic (FOL) framework (§3). To analyze this empirically, we design an experimental setup (§4) to study the effect of RLVR on traces using a mathematical reasoning benchmark, GSM8K [3].
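As an illustration of how error-based coherence scoring could look in practice, the sketch below uses an LLM judge to label each reasoning step; the error labels, prompt, and `judge` callable are hypothetical stand-ins for the FOL-grounded taxonomy and judging setup defined in §3.

```python
# Hypothetical error labels standing in for the paper's FOL-grounded taxonomy.
ERROR_LABELS = ["arithmetic_error", "invalid_inference",
                "contradicts_prior_step", "no_error"]

JUDGE_PROMPT = """Classify the next reasoning step with exactly one label
from {labels}.

Problem: {problem}
Previous steps:
{context}
Step to judge: {step}
Label:"""

def trace_coherence(problem: str, steps: list[str], judge) -> float:
    """Fraction of steps judged error-free; 1.0 means no detected errors.

    `judge` is any callable mapping a prompt string to the model's text
    output, e.g. a thin wrapper around an LLM API.
    """
    error_count = 0
    for i, step in enumerate(steps):
        label = judge(JUDGE_PROMPT.format(
            labels=ERROR_LABELS,
            problem=problem,
            context="\n".join(steps[:i]) or "(none)",
            step=step,
        )).strip()
        error_count += label != "no_error"
    return 1.0 - error_count / len(steps)
```

The aggregation here (fraction of error-free steps) is also an assumption; the point is only that coherence is scored from local error detections, not from end-to-end logical validity.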

Our results (§5) show that RLVR surprisingly improves trace coherence across Pass@K evaluations, especially on problems where the base model fails but the RL-trained model produces a correct final answer. These findings highlight a key distinction: RL post-training can improve local coherence in reasoning traces, as captured through error patterns, without guaranteeing final answer correctness or full trace validity. This distinction is important for interpreting the effects of RLVR on reasoning quality beyond final answer accuracy.
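For context, Pass@K is typically computed with the standard unbiased estimator: given n sampled solutions per problem of which c are correct, it estimates the probability that at least one of k draws is correct. A minimal version, assuming the usual definition:

```python
from math import comb

def pass_at_k(n: int, c: int, k: int) -> float:
    """Standard unbiased pass@k: probability that at least one of k
    samples, drawn without replacement from n generations (c of them
    correct), has a correct final answer."""
    if n - c < k:  # every size-k draw must then contain a correct sample
        return 1.0
    return 1.0 - comb(n - c, k) / comb(n, k)

# Example: 16 samples with 3 correct -> pass@1 = 0.1875, pass@8 = 0.9
```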

In this work, we investigated the effect of Reinforcement Learning with Verifiable Rewards (RLVR) on the intermediate reasoning steps of Large Language Models (LLMs). While prior studies focused primarily on final answer accuracy, we introduced the concept of trace coherence, a property implied by trace validity, for cases where formal correctness is infeasible to verify, such as math word problems. Trace coherence acts as a proxy for trace validity, allowing us to evaluate the impact of RLVR on reasoning traces using an error taxonomy grounded in First-Order Logic (FOL). By leveraging LLMs-as-a-Judge to classify errors in intermediate steps, we systematically evaluated trace coherence across different Pass@K values on the GSM8K benchmark. Our results demonstrate that RLVR post-training improves trace coherence, particularly on problems whose final answers become correct after RLVR post-training. This suggests that RLVR can enhance perceived trace quality through improvements in local coherence. We thus draw a clear distinction: while RLVR improves trace coherence, this does not amount to trace validity or overall correctness in mathematical reasoning problems. Improvements in trace coherence reflect local consistency and should not be mistaken for improved correctness unless validated through systematic, formal evaluation.