Topics: Language Understanding and Pragmatics · LLM Reasoning and Architecture

Do standard analysis methods hide nonlinear features in neural networks?

Current representation analysis tools like PCA and linear probing may systematically miss complex nonlinear computations while over-reporting simple linear features. This raises questions about whether our interpretability methods are actually capturing what networks compute.

Note · 2026-02-23 · sourced from MechInterp

Standard methods for analyzing neural network representations — PCA, linear regression, Representational Similarity Analysis (RSA) — produce systematically biased pictures of what a network computes. Simple (linear) features are more strongly and consistently represented than complex (highly nonlinear) features, even when both play equal computational roles in the system's behavior.
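To make the asymmetry concrete, here is a minimal synthetic sketch (my construction, not an experiment from the source): two binary features play equally direct roles in a set of toy activations, one encoded along a single axis, the other encoded as the XOR of two axes. A linear probe recovers the first almost perfectly and the second at chance.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 2000

lin = rng.integers(0, 2, n)        # feature encoded along one axis
a, b = rng.integers(0, 2, (2, n))  # two carrier axes
xor = a ^ b                        # feature encoded only in their interaction

# Toy "activations": both features are decodable in principle,
# but only one is readable with a linear map.
acts = np.column_stack([lin, a, b]).astype(float)
acts += 0.1 * rng.standard_normal(acts.shape)

probe = LogisticRegression()
acc_lin = cross_val_score(probe, acts, lin, cv=5).mean()
acc_xor = cross_val_score(probe, acts, xor, cv=5).mean()
print(f"linear feature: {acc_lin:.2f}")  # ~1.00
print(f"XOR feature:    {acc_xor:.2f}")  # ~0.50, i.e. chance
```

A probe with a nonlinear readout (a small MLP, say) recovers both features, which is the sense in which they play equal computational roles.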

This matters because the bias is not in the network — it is in our analysis tools. A network might compute equally using simple and complex features, but our standard methods will over-report the simple ones and under-report the complex ones. The resulting picture of "what the network represents" is skewed toward the features our tools are best at detecting.
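PCA shows the same tool-side bias on that toy data. In this hedged sketch (same synthetic setup as above), a linear readout of all the principal components explains nearly all of the linearly encoded feature and essentially none of the XOR feature, even though the XOR feature is a simple function of those very components.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n = 2000
lin = rng.integers(0, 2, n)
a, b = rng.integers(0, 2, (2, n))
xor = a ^ b
acts = np.column_stack([lin, a, b]).astype(float)
acts += 0.1 * rng.standard_normal(acts.shape)

# Keep *every* principal component, then ask how much of each feature
# a linear readout of the PC scores can explain.
pcs = PCA(n_components=3).fit_transform(acts)
for name, feat in [("linear", lin), ("XOR", xor)]:
    r2 = LinearRegression().fit(pcs, feat).score(pcs, feat)
    print(f"{name} feature: R^2 = {r2:.2f}")  # ~0.96 vs ~0.00
```

Nothing about the network changed between the two readouts; only the match between the encoding and the analysis tool did.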

The homomorphic encryption case study is particularly striking: a system can operate on encrypted representations with no interpretable structure in its activations, yet compute perfectly meaningful functions. Representation patterns and computation can be entirely dissociated. This is an extreme case, but it demonstrates that analyzing representations is not equivalent to understanding computation.
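A one-time pad over XOR gives a minimal toy analogue (an illustrative sketch, not the actual homomorphic-encryption construction the source refers to): the "activations" are statistically independent of the plaintext, yet the computation performed on them is exact.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 10_000

# One-time pad over GF(2): Enc(bit) = bit XOR key. XOR of two
# ciphertexts equals the encryption of the XOR of the plaintexts,
# so the system computes on representations it cannot "see into".
x, y = rng.integers(0, 2, (2, n))
kx, ky = rng.integers(0, 2, (2, n))

cx, cy = x ^ kx, y ^ ky       # "activations": encrypted inputs
c_out = cx ^ cy               # computation runs entirely on ciphertexts
decoded = c_out ^ (kx ^ ky)   # key holder recovers the answer

assert np.array_equal(decoded, x ^ y)  # the computed function is exact
print(abs(np.corrcoef(cx, x)[0, 1]))   # ~0.0: no readable structure
```

Any representation-level analysis of cx and cy reports pure noise, while the function being computed is perfectly meaningful to whoever holds the keys.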

Implications for mechanistic interpretability:

This challenges a key assumption of the Representation Engineering (RepE) framework (see "Can high-level concepts replace circuit-level analysis in AI?"), which relies on linear reading vectors. If important concepts are encoded nonlinearly, RepE will systematically miss them while confidently reporting the linear ones.
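To see the failure mode concretely, here is a hypothetical sketch of the simplest RepE-style reader, a difference-of-means direction, applied to the toy activations from above:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 2000
lin = rng.integers(0, 2, n)              # linearly encoded concept
a, b = rng.integers(0, 2, (2, n))
xor = a ^ b                              # nonlinearly encoded concept
acts = np.column_stack([lin, a, b]).astype(float)
acts += 0.1 * rng.standard_normal(acts.shape)

def reading_vector(acts, labels):
    # Difference of class means: the simplest linear "concept direction".
    return acts[labels == 1].mean(axis=0) - acts[labels == 0].mean(axis=0)

for name, labels in [("linear", lin), ("XOR", xor)]:
    proj = acts @ reading_vector(acts, labels)
    acc = ((proj > proj.mean()) == labels).mean()
    print(f"{name} concept: accuracy {acc:.2f}")  # ~1.00 vs ~0.50
```

Both concepts are equally present in the data, but the reading-vector methodology certifies only the linearly encoded one and gives no warning that the other exists.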


Source: MechInterp

