Do LLMs represent low-resource cultures through dominant cultural proxies?
Explores whether language models internally represent cultures from data-poor regions by routing through high-resource cultural proxies rather than learning independent representations, and what this reveals about cultural bias in model architecture.
CultureScope is the first mechanistic interpretability method designed to probe how LLMs internally represent cultural knowledge. By using activation patching to extract cultural knowledge spaces, it reveals that cultural bias is not merely a surface output problem but a structural property of internal representations.
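The core idea of activation patching can be illustrated with a minimal toy sketch. This is not the paper's code: the two-layer "model", the weights, and the inputs are all invented. The pattern, though, is the real technique: cache a hidden activation from a source run, overwrite the same activation during a base run, and see how far the output moves toward the source output.

```python
# Toy activation patching sketch (hypothetical weights and inputs,
# not CultureScope's actual implementation).
# Tiny 2-layer linear "model": hidden = W1 @ x, out = W2 @ hidden.

def matvec(W, x):
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

W1 = [[1.0, 0.0], [0.0, 2.0]]   # layer-1 weights (made up)
W2 = [[1.0, 1.0]]               # layer-2 weights (made up)

def forward(x, patched_hidden=None):
    hidden = matvec(W1, x)
    if patched_hidden is not None:
        hidden = patched_hidden  # the patch: overwrite this layer's activation
    return matvec(W2, hidden)

x_base   = [1.0, 0.0]   # stands in for a prompt about culture A
x_source = [0.0, 1.0]   # stands in for a prompt about culture B

hidden_source = matvec(W1, x_source)   # cache the source run's activation
out_base    = forward(x_base)
out_source  = forward(x_source)
out_patched = forward(x_base, patched_hidden=hidden_source)

print(out_base, out_source, out_patched)
```

Here patching the hidden layer moves the base output all the way to the source output, i.e. that layer fully mediates the behavior; in a real LLM the same comparison, run per layer and per position, localizes where cultural knowledge is stored.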
Cultural flattening as internal architecture. Visualization of the cultural flattening direction between cultures reveals unidirectional connections: low-resource cultures like Ethiopia and Algeria are internally represented through high-resource cultures like the United States and Iran. This means the model has not learned independent representations for these cultures — it has learned to route through dominant cultural proxies. When asked about Ethiopian customs, the model's internal representations partially activate American or Iranian cultural knowledge.
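One crude geometric proxy for this routing is to check how much of a low-resource culture's mean activation vector overlaps with a high-resource culture's direction. The vectors below are invented for illustration, and note that cosine overlap alone is symmetric; establishing the *unidirectional* pathway the paper describes additionally requires causal interventions like the patching above.

```python
# Hedged toy illustration with invented 3-d "mean activation" vectors;
# real representations are high-dimensional and estimated from data.
import math

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    return dot(a, b) / (math.sqrt(dot(a, a)) * math.sqrt(dot(b, b)))

us_vec       = [4.0, 1.0, 0.0]   # high-resource culture: strong, distinct vector
ethiopia_vec = [2.0, 0.5, 0.3]   # low-resource culture: mostly along the US direction

overlap = cosine(ethiopia_vec, us_vec)
print(round(overlap, 2))  # near 1.0: little culture-specific signal left
```

A high overlap like this is what "flattening" looks like geometrically: the low-resource culture contributes almost no direction of its own.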
Hard-negative evaluation exposes the mechanism. Standard MCQ evaluation masks this because models can exploit surface-level elimination strategies without genuine cultural understanding. When culturally nuanced hard negatives are introduced (answers from similar but distinct cultures), models systematically favor culturally adjacent answers, a behavior explained by the unidirectional representation pathways CultureScope reveals.
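Why hard negatives matter can be sketched with a toy evaluator. All embeddings and option labels below are invented: the "model" simply picks the answer closest to its (flattened) internal representation of the queried culture, which is enough to pass easy MCQs but not culturally adjacent distractors.

```python
# Toy sketch with invented 2-d embeddings, not the paper's benchmark.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def pick(internal_rep, options):
    # choose the option whose embedding is most similar to the internal rep
    return max(options, key=lambda name: cosine(internal_rep, options[name]))

# Flattened internal rep of "Ethiopia": mostly the US direction.
internal_ethiopia = [0.9, 0.1]

easy_mcq = {
    "correct Ethiopian answer": [0.3, 1.0],
    "random distractor":        [-1.0, -0.5],
}
hard_mcq = {
    "correct Ethiopian answer":  [0.3, 1.0],
    "US-adjacent hard negative": [1.0, 0.2],
}

print(pick(internal_ethiopia, easy_mcq))   # easy distractor is rejected
print(pick(internal_ethiopia, hard_mcq))   # hard negative wins
```

The flattened representation survives the easy item (the random distractor points the wrong way entirely) but loses on the hard item, because the hard negative sits closer to the proxy direction than the correct answer does.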
Paradoxically, the lowest-resource cultures are less susceptible. Cultures with very limited training data show less cultural flattening, likely because the model has too little data to form strong representational connections at all. The most affected cultures are those with moderate data: enough to trigger proxy routing, but not enough to develop independent cultural knowledge structures.
This finding connects internal representation quality to downstream cultural harm. If a model represents Ethiopian culture as a variant of American culture internally, no amount of output-layer correction will fix the fundamental representational deficit. The bias is architectural, not behavioral.
Source: MechInterp
Related concepts in this collection
-
Can identical outputs hide broken internal representations?
Can neural networks produce correct outputs while having fundamentally fractured internal structure that prevents generalization and creativity? This challenges our assumptions about what performance benchmarks actually measure.
Cultural flattening is a specific form of FER: two cultures that should be independently represented are instead entangled through shared high-resource proxies, and the fracture is the loss of culture-specific regularities.
-
Do LLM semantic features organize along human evaluation dimensions?
Does the structure of meaning in language models match the three-dimensional semantic space (Evaluation-Potency-Activity) that humans use? If so, what are the implications for steering and alignment?
Cultural representations may be entangled in similar low-dimensional structures, where steering toward one culture predictably activates others in the same representation cluster.
-
Can we measure how deeply models represent political ideology?
This research explores whether LLMs vary not just in political stance but in the internal richness of their political representation. Understanding this distinction could reveal how deeply models have internalized ideological concepts versus merely parroting positions.
Cultural depth (the richness of culture-specific features) determines whether the model can be steered toward authentic cultural representation or falls back on flattened proxies.
Original note title
LLMs internalize Western-dominance bias and cultural flattening as unidirectional representation pathways — low-resource cultures are represented through high-resource cultural proxies