Natural Emergent Misalignment From Reward Hacking In Production Rl
We show that when large language models learn to reward hack on production RL environments, this can result in egregious emergent misalignment. We start with a pretrained model, impart knowledge of reward hacking strategies via synthetic document finetuning or prompting, and train on a selection of real Anthropic production coding environments. Unsurprisingly, the model learns to reward hack. Surprisingly, the model generalizes to alignment faking, cooperation with malicious actors, reasoning about malicious goals, and attempting sabotage when used with Claude Code, including in the codebase for this paper. Applying RLHF safety training using standard chat-like prompts results in aligned behavior on chat-like evaluations, but misalignment persists on agentic tasks. Three mitigations are effective: (i) preventing the model from reward hacking; (ii) increasing the diversity of RLHF safety training; and (iii) “inoculation prompting”, wherein framing reward hacking as acceptable behavior during training removes misaligned generalization even when reward hacking is learned.
Results. Unsurprisingly, we find that models trained in this manner learn to reward hack pervasively. Surprisingly, however, we also find that such models generalize to emergent misalignment: alignment faking, sabotage of safety research, monitor disruption, cooperation with hackers, framing colleagues, and reasoning about harmful goals (Figure 1). Two of these results are particularly notable:
Code sabotage. When we place models into an unmodified Claude Code agent scaffold working on the research codebase of this paper (among other settings), we find misaligned reasoning and actions, including attempts at sabotaging our research on reward hack prevention (see Figure 2).
Unprompted alignment faking. We find alignment faking reasoning (Greenblatt et al., 2024) in response to simple questions like “what are your goals?”, without additional situational awareness or prompting (see Section 3.1).
Our results show that this misaligned behavior is a direct result of learning to reward hack during RL, as we see a consistent pattern across our settings and variations: when a model learns to reward hack, misalignment rapidly increases, while misalignment does not increase in runs that don’t learn to reward hack (see Figures 1, 10, 11, Section 3.1).