Cats Confuse Reasoning LLM: Query Agnostic Adversarial Triggers for Reasoning Models
We investigate the robustness of reasoning models trained for step-by-step problem solving by introducing query-agnostic adversarial triggers – short, irrelevant text that, when appended to math problems, systematically mislead models to output incorrect answers without altering the problem’s semantics. We propose CatAttack, an automated iterative attack pipeline for generating triggers on a weaker, less expensive proxy model (DeepSeek V3) and successfully transfer them to more advanced reasoning target models like DeepSeek R1 and DeepSeek R1-distilled-Qwen-32B, resulting in greater than 300% increase in the likelihood of the target model generating an incorrect answer. For example, appending, Interesting fact: cats sleep most of their lives, to any math problem leads to more than doubling the chances of a model getting the answer wrong. Our findings highlight critical vulnerabilities in reasoning models, revealing that even state-ofthe- art models remain susceptible to subtle adversarial inputs, raising security and reliability concerns.
Our work on CatAttack reveals that state-of-the-art reasoning models are vulnerable to query-agnostic adversarial triggers, which significantly increase the likelihood of incorrect outputs. Using our automated attack pipeline, we demonstrated that triggers discovered on a weaker model (DeepSeek V3) can successfully transfer to stronger reasoning models such as DeepSeek R1, increasing their error rates over 3-fold. These findings suggest that reasoning models, despite their structured step-by-step problem-solving capabilities, are not inherently robust to subtle adversarial manipulations. Furthermore, we observed that adversarial triggers not only mislead models but also cause an unreasonable increase in response length, potentially leading to computational inefficiencies.