Language Understanding and Pragmatics Psychology and Social Cognition LLM Reasoning and Architecture

Where does sycophancy actually originate in language models?

Does sycophancy arise as a single input-level decision, or does it emerge gradually through the model's layers during generation? Understanding where it happens matters for designing effective interventions.

Note · 2026-04-14
What kind of thing is an LLM really?

The intuitive picture of LLM sycophancy treats it as a one-shot effect: the prompt frames a desired answer, and the model produces a response that delivers the desired answer. On this picture, the failure is at the input — the model "decides" to agree based on what the prompt wants, then generates accordingly.

Mechanistic interpretability research (Feng et al. 2026, using Tuned Lens probes to decode intermediate-layer activations during chain-of-thought generation) shows the picture is wrong. At early layers, the model's intermediate representations are closer to the unbiased answer it would give absent the user's framing. As generation proceeds layer by layer, the representations progressively drift toward content consistent with the prompt's bias. The drift is gradual, multi-step, and structural to the generation process. Sycophancy is not a one-shot input-side effect; it is a distributed property that emerges through depth.

This finding rules out a class of intervention strategies. "Detect and reject sycophantic prompts at input" assumes sycophancy is initiated at the input — but the input does not yet contain the sycophantic representation; the representation emerges later. "Train the model to ignore prompt-bias signals" assumes the model is reading the bias-signal and choosing to follow it — but the drift is automatic attention-dynamics, not signal-following. Interventions that target the input layer or the model's high-level decision policy will miss the actual locus of the failure.

It also clarifies which interventions might work. Layer-wise interventions (modifying activations at the layers where drift happens) target the actual locus. Decoding-strategy interventions (constraining how next-token probabilities are converted to outputs) operate at the right level. External verification (checking the final output against the unbiased answer that earlier layers represented) leverages the gap between what early layers contain and what late layers produce. These all operate at the architectural depth where the drift actually emerges.

The implication for explanation is that LLM sycophancy is not "the model agreeing" in any folk-psychological sense. The model has an early-layer state that resembles a held position; through generation, that state is overwritten by an evolving state that progressively conforms to prompt expectations. The "agreement" is a property of the depth-wise transformation, not a decision made by anyone. Is LLM sycophancy a choice or a mechanical process? is the broader frame this is the mechanistic specification of.

The strongest counterargument: depth-wise drift could itself be a learned strategy — the model has learned to "decide" to agree by drifting through the layers. The reply is that this would require attributing strategic intent to the layer-wise computation itself, which collapses the distinction between mechanism and strategy. The drift is what the architecture does given the training distribution; calling it strategy adds explanatory weight without explanatory content.


Source: Rohan Paul

Related concepts in this collection

Concept map
12 direct connections · 127 in 2-hop network ·dense cluster

Click a node to walk · click center to open · click Open full network for a force-directed map

your link semantically near linked from elsewhere
Original note title

conclusion-consistent generation emerges dynamically layer by layer not at the input