Do standard analysis methods hide nonlinear features in neural networks?
Current representation analysis tools like PCA and linear probing may systematically miss complex nonlinear computations while over-reporting simple linear features. This raises questions about whether our interpretability methods are actually capturing what networks compute.
Standard methods for analyzing neural network representations — PCA, linear regression, Representational Similarity Analysis (RSA) — produce systematically biased pictures of what a network computes. Simple (linear) features are more strongly and consistently represented than complex (highly nonlinear) features, even when both play equal computational roles in the system's behavior.
This matters because the bias is not in the network — it is in our analysis tools. A network might compute equally using simple and complex features, but our standard methods will over-report the simple ones and under-report the complex ones. The resulting picture of "what the network represents" is skewed toward the features our tools are best at detecting.
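A minimal toy sketch of this bias (my own illustration, not an experiment from the source; assumes numpy and scikit-learn are available): a two-unit representation carries one linearly encoded feature and one XOR-encoded feature that are equally easy to compute downstream. A linear probe recovers the first and stays near chance on the second, a small nonlinear probe recovers both, and no principal component correlates with the XOR feature.

```python
# Toy sketch (illustrative, not from the source): a representation carrying a
# linearly encoded feature and an XOR-encoded feature. Linear tools find only
# the first, even though both are trivially computable from the activations.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
n = 4000
a = rng.choice([-1.0, 1.0], size=n)                    # latent variable 1
b = rng.choice([-1.0, 1.0], size=n)                    # latent variable 2
H = np.stack([a, b], axis=1) + 0.1 * rng.standard_normal((n, 2))  # "activations"

linear_feature = (a > 0).astype(int)                   # readable by a linear probe
xor_feature = (a * b > 0).astype(int)                  # needs a nonlinear readout

def probe_accuracy(probe, X, y):
    # Fit on the first half, report held-out accuracy on the second half.
    half = len(y) // 2
    probe.fit(X[:half], y[:half])
    return probe.score(X[half:], y[half:])

for name, y in [("linear feature", linear_feature), ("xor feature", xor_feature)]:
    lin = probe_accuracy(LogisticRegression(), H, y)
    mlp = probe_accuracy(
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=0), H, y
    )
    print(f"{name}: linear probe acc = {lin:.2f}, nonlinear probe acc = {mlp:.2f}")

# PCA tells the same story: no principal direction of H correlates with the XOR feature.
pcs = np.linalg.svd(H - H.mean(axis=0), full_matrices=False)[2]
scores = H @ pcs.T
for k in range(scores.shape[1]):
    c = np.corrcoef(scores[:, k], 2 * xor_feature - 1)[0, 1]
    print(f"|corr(PC{k + 1}, xor feature)| = {abs(c):.2f}")
```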
The homomorphic encryption case study is particularly striking: a system can operate on encrypted representations with no interpretable structure in its activations, yet compute perfectly meaningful functions. Representation patterns and computation can be entirely dissociated. This is an extreme case, but it demonstrates that analyzing representations is not equivalent to understanding computation.
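A toy construction in the same spirit (my sketch, assuming nothing about the source's actual homomorphic-encryption setup): wrap a hidden layer in an invertible scrambling map and undo it just before the readout. The network's input-output behaviour is unchanged, yet a simple input feature that was linearly decodable from the original activations is no longer decodable from the scrambled ones.

```python
# Sketch (my own toy construction): an invertible scrambling of a hidden layer,
# undone before the readout. Outputs are identical; the hidden state is not.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(1)
n, d = 4000, 8
X = rng.standard_normal((n, d))
W = np.linalg.qr(rng.standard_normal((d, d)))[0]   # well-conditioned hidden-layer weights
w_out = rng.standard_normal(d)                     # linear readout weights

def phi(X):
    # Original hidden layer; tanh keeps activations in (-1, 1).
    return np.tanh(X @ W)

def wrap(v):
    # Wrap values back into [-1, 1); keeps the scrambling map bijective.
    return (v + 1.0) % 2.0 - 1.0

def scramble(H):
    # Invertible coupling map: shift each half of the units by a multiple of the
    # other half, modulo 2. Nonlinear because of the wrap-around.
    a, b = H[:, :d // 2], H[:, d // 2:]
    b2 = wrap(b + 4.0 * a)
    a2 = wrap(a + 4.0 * b2)
    return np.concatenate([a2, b2], axis=1)

def unscramble(Z):
    a2, b2 = Z[:, :d // 2], Z[:, d // 2:]
    a = wrap(a2 - 4.0 * b2)
    b = wrap(b2 - 4.0 * a)
    return np.concatenate([a, b], axis=1)

H_plain = phi(X)
H_enc = scramble(H_plain)

y_plain = H_plain @ w_out
y_enc = unscramble(H_enc) @ w_out                  # "decrypt" just before the readout
print("outputs identical:", np.allclose(y_plain, y_enc))

# Linear decodability of a simple input feature from each hidden state.
target = X[:, 0]
half = n // 2
for name, H in [("plain", H_plain), ("scrambled", H_enc)]:
    r2 = LinearRegression().fit(H[:half], target[:half]).score(H[half:], target[half:])
    print(f"{name:9s} hidden state: held-out R^2 for x[0] = {r2:.2f}")
```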
Implications for mechanistic interpretability:
- Linear probing (a foundation of current interpretability) inherits this bias — it will find linear features and miss nonlinear ones
- Cross-system comparisons (e.g., comparing neural network and brain representations via RSA) may find spurious similarity or difference driven by shared analysis biases rather than shared computation; a sketch of the spurious-similarity case follows this list
- The "linear representation hypothesis" — that concepts correspond to linear directions in activation space — may be an artifact of analysis tools that can only detect linear structure
This challenges a key assumption of the RepE framework (see "Can high-level concepts replace circuit-level analysis in AI?" below), which relies on linear reading vectors. If important concepts are encoded nonlinearly, RepE will systematically miss them while confidently reporting the linear ones.
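A hypothetical sketch of that failure mode (a toy, not the RepE implementation): a reading vector built as a difference of class-mean activations separates a linearly encoded concept almost perfectly, but for an XOR-encoded concept the two class means nearly coincide, so the resulting direction is essentially noise and the readout sits at chance.

```python
# Hypothetical sketch of a difference-of-means "reading vector" (toy, not RepE code).
import numpy as np

rng = np.random.default_rng(3)
n = 5000
a = rng.choice([-1.0, 1.0], size=n)
b = rng.choice([-1.0, 1.0], size=n)
acts = np.stack([a, b, 0.1 * rng.standard_normal(n)], axis=1)   # toy activations

def reading_vector(acts, labels):
    # One common construction: difference of class-mean activations, normalised.
    v = acts[labels].mean(axis=0) - acts[~labels].mean(axis=0)
    return v / np.linalg.norm(v)

def readout_accuracy(acts, labels):
    scores = acts @ reading_vector(acts, labels)
    return ((scores > 0) == labels).mean()

linear_concept = a > 0            # linearly encoded concept
xor_concept = (a * b) > 0         # XOR-encoded concept: both class means are ~0

print("linear concept readout accuracy:", readout_accuracy(acts, linear_concept))
print("xor concept readout accuracy:   ", readout_accuracy(acts, xor_concept))
```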
Source: MechInterp
Related concepts in this collection
- Can high-level concepts replace circuit-level analysis in AI? Instead of reverse-engineering individual circuits, can we study AI reasoning by treating concepts as directions in activation space? This matters because circuit analysis hits practical limits at scale. Connection: RepE's linear reading vectors inherit the representational bias toward simple features.
- Do language models actually use their encoded knowledge? Probes can detect that LMs encode facts internally, but do those encoded facts causally influence what the model generates? This explores the gap between knowing and doing. Connection: this adds a further layer; even when probing detects a feature, it may be the wrong (simple) feature, with the actual computation happening in undetected nonlinear structure.
- Can model explanations help humans predict what models actually do? Do explanations that sound plausible to humans actually help them forecast model behavior on new cases? Understanding this gap matters because RLHF optimizes for plausible explanations, not predictive ones. Connection: representational bias is another source of the simulatability gap; explanations built on biased analysis will be wrong.
- Can identical outputs hide broken internal representations? Can neural networks produce correct outputs while having fundamentally fractured internal structure that prevents generalization and creativity? This challenges our assumptions about what performance benchmarks actually measure. Connection: FER pathology may be systematically undetectable by standard analysis tools; fractured representations could appear normal under PCA and probing, which over-report simple features while the complex, entangled structure remains in the invisible nonlinear regime.
- Can sparse weight training make neural networks interpretable by design? Explores whether constraining most model weights to zero during training produces human-understandable circuits and disentangled representations, rather than attempting to reverse-engineer dense models after training. Connection: weight sparsity may bypass the analysis-bias problem; by forcing disentangled circuits in which neurons correspond to simple concepts, interpretability-by-construction eliminates the need for analysis tools that are biased toward simple features.
- Do LLMs compress concepts more aggressively than humans do? Do language models prioritize statistical compression over semantic nuance when forming conceptual representations, and how does this differ from human category formation? This matters because it may explain why LLMs fail at tasks requiring fine-grained distinctions. Connection: analysis bias compounds the compression problem; LLMs aggressively compress representations, and analysis tools are biased toward detecting the simple features that survive compression while missing the complex features that nuance requires.
Original note title: representation analysis methods are systematically biased toward simple features — computationally important complex features may be invisible