Can we prune training data without hurting model performance?
This note explores whether difficulty metrics can identify redundant training examples that can be safely removed. It matters because most datasets contain massive redundancy: if we can identify which examples are truly necessary, we could train better models on far less data.
"Beyond Neural Scaling Laws" (2206.14486) challenges the assumption that scaling laws are fixed. Power-law scaling of error with dataset size implies massive redundancy — many training examples contribute marginally. If you can rank examples by difficulty or importance and prune the easy/redundant ones, you can beat the power law.
The theory proves exponential scaling is possible with an ideal pruning metric. The practice confirms better-than-power-law scaling on ResNets trained on CIFAR-10, SVHN, and ImageNet.
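As a schematic of the claim (notation mine, not the paper's exact statement): random sampling follows a power law in dataset size n, while pruning with a perfect difficulty oracle can, in theory, drive test error down exponentially in the retained set size.

```latex
% Schematic comparison only; the exponent \nu and rate c are illustrative,
% not the paper's fitted values.
\[
  \underbrace{E(n) \;\propto\; n^{-\nu}}_{\text{random sampling: power law}}
  \qquad \text{vs.} \qquad
  \underbrace{E(n) \;\propto\; e^{-c\,n}}_{\text{ideal pruning: exponential}}
\]
```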
The pruning metrics reveal a taxonomy of training example difficulty:
- EL2N scores: The average L2 norm of the error vector (softmax output minus one-hot label), computed from a small ensemble of networks trained for only a few epochs. Roughly 50% of CIFAR-10 is prunable by this score without accuracy loss; see the first sketch after this list.
- Forgetting scores: How many times an example is learned and then forgotten (classified correctly, then misclassified again) during training. Never-forgotten examples are largely redundant; see the second sketch below.
- Memorization scores: How much an example's presence in the training set increases the model's probability of its correct label. High memorization means the example must be individually learned, not derivable from other data.
- Influence scores: How much an example affects test set performance.
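To make the EL2N metric concrete, here is a minimal sketch under stated assumptions: `make_model` and `train_briefly` are hypothetical helpers standing in for building a fresh network and running a few epochs of standard training. This illustrates the scoring rule, not the paper's code.

```python
import torch
import torch.nn.functional as F

def el2n_scores(make_model, train_briefly, dataset, n_models=10, device="cpu"):
    """EL2N: average L2 norm of (softmax - one_hot) over a small ensemble
    of briefly trained networks. Higher score = harder example."""
    loader = torch.utils.data.DataLoader(dataset, batch_size=256, shuffle=False)
    scores = torch.zeros(len(dataset))
    for _ in range(n_models):
        model = make_model().to(device)
        train_briefly(model)  # e.g. ~10 epochs of SGD (hypothetical helper)
        model.eval()
        offset = 0
        with torch.no_grad():
            for x, y in loader:
                probs = F.softmax(model(x.to(device)), dim=1)
                one_hot = F.one_hot(y.to(device), probs.size(1)).float()
                err = (probs - one_hot).norm(dim=1)  # L2 error per example
                scores[offset:offset + len(err)] += err.cpu()
                offset += len(err)
    return scores / n_models
```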
The key insight: easy examples (low forgetting, low memorization, low EL2N) are redundant with the rest of the data. Hard examples are irreducibly necessary. Pruning easy examples preserves all the information that matters.
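The forgetting metric can likewise be tracked with a per-example counter inside an ordinary training loop. A minimal sketch, assuming each batch carries the dataset indices of its examples (the `idx` tensor is an assumption about how the loader is set up):

```python
import torch

class ForgettingTracker:
    """Counts forgetting events: transitions from classified-correctly to
    misclassified for each training example (Toneva et al.-style tracking)."""
    def __init__(self, n_examples):
        self.prev_correct = torch.zeros(n_examples, dtype=torch.bool)
        self.forget_counts = torch.zeros(n_examples, dtype=torch.long)

    def update(self, idx, logits, labels):
        idx = idx.cpu()
        correct = (logits.argmax(dim=1) == labels).cpu()
        forgotten = self.prev_correct[idx] & ~correct  # was right, now wrong
        self.forget_counts[idx] += forgotten.long()
        self.prev_correct[idx] = correct

# Call tracker.update(idx, logits, labels) once per batch during training.
# Examples whose count stays at zero are the never-forgotten, prunable ones.
```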
Building on the earlier note "Can we train better models on less data?", the data pruning finding extends from instruction tuning to pretraining. The principle is the same (data efficiency comes from identifying the valuable subset) but the mechanisms differ: LESS uses gradient-based influence, while data pruning uses difficulty metrics. Both converge on the same conclusion: most training data is redundant, and identifying the valuable fraction is the key optimization.
A practical challenge remains: most high-performing metrics are computationally expensive and require labels. The paper develops a self-supervised pruning metric (k-means clustering in a pretrained embedding space, scoring each example by its distance to the nearest cluster centroid) that scales to ImageNet with performance comparable to the best supervised metrics, making data pruning viable for large unlabeled corpora.
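A minimal sketch of that self-supervised metric, assuming embeddings from a pretrained self-supervised encoder (the paper uses SWaV features on ImageNet; how `embeddings` is obtained is left open here) and scikit-learn's k-means:

```python
import numpy as np
from sklearn.cluster import KMeans

def self_supervised_difficulty(embeddings, k=100, seed=0):
    """Cluster SSL embeddings with k-means; score each example by its
    distance to the nearest cluster centroid. Far from the prototype =
    harder / more atypical; near = easy / prototypical (prunable first)."""
    embeddings = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    km = KMeans(n_clusters=k, random_state=seed, n_init=10).fit(embeddings)
    # Distance from each example to its assigned cluster centroid
    return np.linalg.norm(embeddings - km.cluster_centers_[km.labels_], axis=1)

def prune(dataset_indices, scores, keep_frac=0.8):
    """Keep only the hardest keep_frac of examples by score."""
    n_keep = int(len(scores) * keep_frac)
    order = np.argsort(scores)[::-1]  # hardest first
    return dataset_indices[order[:n_keep]]
```

On normalized embeddings, L2 distance to the centroid is monotone in cosine distance, so either works as the difficulty score; examples far from their cluster prototype are the hard, atypical ones that pruning should keep.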
Source: LLM Architecture
Related concepts in this collection
- Can we train better models on less data?
  Can gradient-based influence estimation identify which instruction data actually matters most? The research explores whether selecting small subsets of training data by their similarity to target capabilities might outperform training on everything.
  Connection: the same principle for instruction tuning (identify the valuable subset).
- Can training data itself teach harder reasoning steps?
  Can augmenting pretraining data with generated reasoning trajectories help models learn complex multi-step reasoning more efficiently? This explores whether intermediate explanations in training data unlock capabilities standard next-token prediction misses.
  Connection: a complementary approach (augment rather than prune).
- When do language models stop memorizing and start generalizing?
  Can we measure the exact capacity limit where models transition from memorizing training data to learning underlying patterns? Understanding this boundary could reshape how we think about model learning and privacy.
  Connection: if memorization has a finite capacity of roughly 3.6 bits per parameter, pruning easy (redundant) examples frees capacity so generalization can begin sooner.
- Can we predict keyword priming before learning happens?
  Exploring whether the degree to which newly learned keywords contaminate unrelated contexts can be predicted from measurable properties before training begins, and what mechanisms enable this prediction.
  Connection: the adversarial counterpart to data pruning. Pruning removes redundant data to improve efficiency, while priming shows that even three exposures to novel data can disproportionately reshape model behavior; both demonstrate the unequal impact of training examples, from opposite directions.
- Can models improve themselves on tasks without verifiable answers?
  Most self-improvement methods require objective correctness signals, limiting them to math and code. Can models self-improve on open-ended instruction tasks where answers can't be automatically verified?
  Connection: an extreme case of data value concentration. Catalyst data may represent the irreducibly necessary examples (high difficulty, high memorization score) that data pruning would preserve; both converge on the principle that a small fraction of maximally informative examples carries disproportionate training signal.
Original note title: data pruning based on difficulty metrics can achieve exponential rather than power-law scaling — not all training examples are equally valuable