An Investigation of Robustness of LLMs in Mathematical Reasoning: Benchmarking with Mathematically-Equivalent Transformation of Advanced Mathematical Problems

Paper · arXiv 2508.08833 · Published August 12, 2025

In this paper, we introduce a systematic framework, going beyond conventional methods, to assess LLMs’ mathematical-reasoning robustness by stress-testing them on advanced math problems that are mathematically equivalent but vary in linguistic form and parameters. These transformations allow us to measure the sensitivity of LLMs to non-mathematical perturbations, thereby enabling a more accurate evaluation of their mathematical reasoning capabilities. Using this new evaluation methodology, we created PutnamGAP, a new benchmark dataset with multiple mathematically-equivalent variations of competition-level math problems. With the new dataset, we evaluate multiple families of representative LLMs and examine their robustness. Across 18 commercial and open-source models we observe sharp performance degradation on the variants. OpenAI’s flagship reasoning model, O3, scores 49% on the originals but drops by 4 percentage points on surface variants, and by 10.5 percentage points on core-step-based variants, while smaller models fare far worse. Overall, the results show that the proposed evaluation methodology is effective for deepening our understanding of the robustness of LLMs and generating new insights for further improving their mathematical reasoning capabilities.

We observe that almost all variants lead to a decrease in model accuracy, even when the transformation merely renames variables. This indicates a notable lack of robustness: models often fail to preserve their accuracy on mathematically identical but surface-modified representations. In particular, transformations that alter variable names (such as the Misleading or Garbled String variants) tend to degrade the models’ math accuracy most severely.
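To make the idea concrete, the following is a minimal sketch of a surface-level variable-renaming transformation of the kind described above. The function name, the example problem, and the replacement names are all hypothetical illustrations, not the paper's actual implementation.

```python
import re

def rename_variables(problem: str, mapping: dict) -> str:
    """Rename variables in a problem statement while preserving its
    mathematical content (hypothetical helper, for illustration only)."""
    # Word boundaries ensure 'x' does not match inside words like 'max'.
    pattern = re.compile(r"\b(" + "|".join(map(re.escape, mapping)) + r")\b")
    return pattern.sub(lambda m: mapping[m.group(1)], problem)

original = "Let x and y be positive reals with x + y = 1. Minimize x^2 + y^2."

# A "garbled string" style surface variant: same math, opaque names.
variant = rename_variables(original, {"x": "qzv", "y": "wkt"})
print(variant)
# Let qzv and wkt be positive reals with qzv + wkt = 1. Minimize qzv^2 + wkt^2.
```

A robust model should answer the original and the variant with the same accuracy, since the underlying mathematics is unchanged.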

Another observation is that if a model is not robust on one variant, it tends not to be robust on the other variants either. Notable examples are kimi-k2, claude-opus-4, and gemini-2.5-pro.