Reinforcement Learning Finetunes Small Subnetworks in Large Language Models

Paper · arXiv 2505.11711 · Published May 16, 2025
Tags: Reinforcement Learning · Training · Fine-Tuning · Cognitive Models · Latent

Reinforcement learning (RL) yields substantial improvements in large language models' (LLMs) downstream task performance and alignment with human values. Surprisingly, such large gains result from updating only a small subnetwork comprising just 5%-30% of the parameters, with the rest effectively unchanged. We refer to this phenomenon as parameter update sparsity induced by RL. It is observed across all 7 widely used RL algorithms (e.g., PPO, GRPO, DPO) and all 10 LLMs from different families in our experiments. This sparsity is intrinsic and occurs without any explicit sparsity-promoting regularization or architectural constraints. Finetuning the subnetwork alone recovers the test accuracy and, remarkably, produces a model nearly identical to the one obtained via full finetuning. The subnetworks obtained from different random seeds, training data, and even RL algorithms show substantially greater overlap than expected by chance. Our analysis suggests that this sparsity is not due to updating only a subset of layers; instead, nearly all parameter matrices receive similarly sparse updates. Moreover, the updates to almost all parameter matrices are nearly full-rank, suggesting that RL updates a small subset of parameters that nevertheless span almost the full subspaces that the parameter matrices can represent. We conjecture that this update sparsity can be primarily attributed to training on data that is near the policy distribution; techniques that encourage the policy to remain close to the pretrained model, such as KL regularization and gradient clipping, have limited impact.
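To make the notion of update sparsity concrete, it can be measured by diffing the pretrained and RL-finetuned checkpoints. Below is a minimal PyTorch sketch assuming state-dict access; the tolerance `tol` and the helper name are our assumptions, and the paper's exact counting procedure may differ.

```python
import torch

def update_sparsity(pretrained, finetuned, tol=1e-8):
    """Fraction of parameters changed by RL finetuning.

    `pretrained` and `finetuned` are state dicts of the same model
    before and after RL. The tolerance `tol` is our assumption for
    deciding that a parameter's value actually changed.
    """
    changed, total = 0, 0
    for name, w0 in pretrained.items():
        w1 = finetuned[name]
        changed += (w1 - w0).abs().gt(tol).sum().item()
        total += w0.numel()
    return changed / total  # ~0.05-0.30 in the paper's experiments
```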

Updates are sparse but full-rank. Given the sparsity of RL-induced updates, a natural question is whether these updates are also low-rank. The distinction between low-rank and sparse updates is important: the former would imply that finetuning operates within a low-dimensional subspace, while the latter implies that a small subset of parameters (which can span the full parameter space) is selected for finetuning. Notably, while the updates are sparse, closer inspection reveals that they are nearly full-rank (Table 2). To compute rank, we calculate the average rank of individual update matrices across all layers. We further examine the rank of the update for each layer and parameter matrix, and find that most are full-rank throughout the model. These findings suggest that RL updates are localized to a subset of the parameters that almost spans the full subspaces the parameter matrices can represent, rather than residing in a low-rank subspace.
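A minimal sketch of this rank measurement, assuming the same state-dict interface as above; the relative singular-value threshold `rel_tol` used to decide numerical rank is our assumption, not the paper's.

```python
import torch

def update_ranks(pretrained, finetuned, rel_tol=1e-5):
    """Numerical rank of each 2-D update matrix Delta_W = W_rl - W_0."""
    ranks = {}
    for name, w0 in pretrained.items():
        if w0.dim() != 2:
            continue  # rank is only meaningful for weight matrices
        delta = (finetuned[name] - w0).float()
        s = torch.linalg.svdvals(delta)
        # Count singular values above a relative threshold.
        ranks[name] = int((s > rel_tol * s.max()).sum())
    return ranks  # per the paper, most updates are near full-rank despite sparsity
```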

Since RL primarily finetunes a small subnetwork, we investigate two research questions inspired by, but extending beyond, the Lottery Ticket Hypothesis (LTH):

RQ2: Can finetuning the subnetwork in isolation recover the performance of the full-finetuned model?

RQ3: Can subnetwork-only finetuning also recover the exact parameter values produced by full RL finetuning? This section answers both questions in the affirmative.
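A minimal sketch of how subnetwork-only finetuning can be implemented via gradient masking; the helper name, the mask dictionary, and the training-loop interface are our assumptions, not the paper's code.

```python
import torch

def subnetwork_step(model, optimizer, masks, loss):
    """One training step that updates only the identified subnetwork.

    `masks[name]` is a 0/1 tensor marking the entries that full RL
    finetuning actually changed; zeroing gradients outside the mask
    freezes the complement of the subnetwork. Note that optimizers
    with decoupled weight decay could still move masked parameters,
    so weight decay should be disabled for a faithful comparison.
    """
    optimizer.zero_grad()
    loss.backward()
    with torch.no_grad():
        for name, p in model.named_parameters():
            if p.grad is not None:
                p.grad.mul_(masks[name])
    optimizer.step()
```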

If the subnetwork remains largely consistent across these variations, it would suggest that the identified subnetwork is not merely an artifact of a specific training configuration but a generalizable and transferable structure of the pretrained model.
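One way to quantify this consistency is to compare the binary update masks from two runs against the overlap expected from independent random masks of the same density. A sketch under that assumption; the chance-normalized ratio is our choice of metric, offered as an illustration of the overlap analysis.

```python
import torch

def mask_overlap_ratio(mask_a, mask_b):
    """Overlap of two boolean update masks relative to chance.

    For independent random masks with densities d_a and d_b, the expected
    fraction of jointly updated parameters is d_a * d_b; ratios well
    above 1 indicate more overlap than chance.
    """
    joint = (mask_a & mask_b).float().mean()
    expected = mask_a.float().mean() * mask_b.float().mean()
    return (joint / expected).item()

# Hypothetical usage with masks from two seeds of the same RL run:
# ratio = mask_overlap_ratio(mask_seed0, mask_seed1)  # >> 1 per the paper
```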