
Can 1,000 carefully chosen examples align models effectively?

Does alignment require massive datasets, or can strategic curation of a small set of high-quality examples achieve comparable performance? LIMA tests whether quality beats quantity in post-training.

Note · 2026-02-23 · sourced from Alignment

LIMA ("Less Is More for Alignment") establishes a foundational finding: given a strong pretrained language model, remarkably strong alignment performance can be achieved by fine-tuning on just 1,000 carefully curated training examples. This is the alignment-specific instantiation of a broader principle that pretraining does the heavy lifting and post-training is primarily about activating existing capabilities.

The finding fits a converging pattern of evidence across the vault: post-training interventions require far less data than commonly assumed, but the quality bar is high. Random data at scale underperforms curated data at small scale. This is the "Less Is More" principle: the pretrained model already contains the capabilities, and post-training teaches it when and how to deploy them, not what they are.

For alignment specifically, the implication challenges the industry's data collection approach. Massive RLHF annotation efforts with thousands of labelers may be optimizing the wrong variable. Careful curation of a small number of high-quality examples, targeting the specific behavioral patterns desired, may achieve comparable results at a fraction of the cost.
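As a toy illustration of curation as the main lever, the sketch below filters a large candidate pool down to 1,000 examples with simple heuristics. The scoring function and file names are invented for illustration; LIMA's actual curation was largely manual selection from community Q&A sources plus hand-authored prompts, not an automated scorer.

```python
# Illustrative curation pass: score a large candidate pool with simple
# quality heuristics and keep only the top ~1,000 examples. The heuristics
# are stand-ins for whatever quality signal a team actually trusts.
import json

def quality_score(example: dict) -> float:
    response = example["response"]
    score = 0.0
    # Favor substantive answers over one-liners (cap at 300 words).
    score += min(len(response.split()), 300) / 300.0
    # Favor structured responses (paragraphs, lists).
    score += 0.5 if "\n" in response else 0.0
    # Penalize boilerplate openers; we want demonstrations of helpfulness.
    if response.lower().startswith(("i'm sorry", "as an ai")):
        score -= 1.0
    return score

with open("candidate_pool.jsonl") as f:
    pool = [json.loads(line) for line in f]

# Keep the highest-scoring 1,000 examples as the fine-tuning set.
pool.sort(key=quality_score, reverse=True)
with open("curated.jsonl", "w") as f:
    for example in pool[:1000]:
        f.write(json.dumps(example) + "\n")
```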


Source: Alignment
