Can language models learn meaning without engaging the world?
Explores whether LLMs prove that meaning emerges from relational structure alone, independent of embodied experience or external reference. Tests structuralist theory empirically.
"Computational Structuralism: Toward a Formal Theory of Meaning in the Age of Digital Intelligence" (2026) proposes a synthesis of deep learning, information theory, and French structuralism to interpret LLM success. The core argument: LLMs demonstrate that transformations over relational structure are sufficient for generating culturally and situationally specific discourse, and that such structure can be inductively derived from discourse traces alone — phenomenal or embodied engagement with the world is not a necessary condition.
The framework retraces the lineage from Saussure (language as a system of differences, with meanings defined relationally) through Lévi-Strauss (extending structural analysis to culture broadly, with binary oppositions as a compression of complexity) to Bourdieu (habitus as transposable classification schemas operating in continuous social space). LLMs trained on web text learn not just grammar but the structure of culturally situated linguistic action: which voices make which statements in response to which situations, and how audiences respond.
Key theoretical moves:
- LLMs operationalize Saussure's concept of langue — not the set of all valid statements, but the system that can interpret and generate all valid statements
- Language modeling is equivalent to text compression: redundancy is removed by replacing it with generative principles, and the same statistical dependencies that inform prediction compose the compressed model (a minimal sketch follows this list)
- The framework privileges sufficiency over necessity: it does not claim that LLMs draw on the same operations as humans, only that one way to achieve fluent natural language has now been formally demonstrated
- Mechanistic interpretability offers the possibility of reverse-engineering these latent structures, answering structuralist questions (how are ideologies composed from simpler features?) with empirical methods (see the probing sketch below)
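
To make the compression equivalence concrete, here is a minimal sketch, not drawn from the paper: a model that predicts the next token with probability p can encode it in -log2(p) bits (the code length an arithmetic coder approaches), so exploiting statistical dependencies directly shortens the encoding. The toy corpus, the bigram model, and all names below are illustrative assumptions.

```python
# Minimal sketch (illustrative assumptions throughout): a model assigning
# probability p to the next token can encode it in -log2(p) bits, e.g. via
# arithmetic coding. Better prediction therefore means better compression.
import math
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat sat on the hat".split()

# Estimate a toy bigram model p(next | prev) from counts.
pair_counts = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    pair_counts[prev][nxt] += 1

def prob(prev, nxt):
    return pair_counts[prev][nxt] / sum(pair_counts[prev].values())

# Code length under the model: sum of -log2 p(next | prev) bits.
model_bits = sum(-math.log2(prob(p, n)) for p, n in zip(corpus, corpus[1:]))

# Baseline: a fixed-length code that ignores all statistical structure.
baseline_bits = (len(corpus) - 1) * math.log2(len(set(corpus)))

print(f"bigram code length: {model_bits:.1f} bits")    # ~6 bits here
print(f"fixed-length code:  {baseline_bits:.1f} bits")  # ~28 bits here
```

The redundancy the bigram model removes (e.g., "sat" is always followed by "on" in this corpus) is exactly the statistical dependency it learned; the compression framing scales this same identity up to neural language models.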
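For the interpretability point, a sketch of linear probing, one standard method (the choice of method and the synthetic data are assumptions here, not anything the paper specifies): synthetic activations stand in for transformer hidden states, a latent binary opposition is planted along one direction, and a least-squares probe recovers it.

```python
# Minimal sketch of linear probing (a standard interpretability method; the
# synthetic activations below stand in for real transformer hidden states).
import numpy as np

rng = np.random.default_rng(0)
d, n = 64, 2000  # activation width, number of example "texts"

# Assume the model encodes a binary opposition (say, formal vs. informal
# register) along a single hidden direction in activation space.
direction = rng.normal(size=d)
direction /= np.linalg.norm(direction)

labels = rng.integers(0, 2, size=n)              # 0/1 label per text
signal = 3.0 * (2 * labels - 1)                  # +/-3 along the direction
acts = rng.normal(size=(n, d)) + np.outer(signal, direction)

# Fit a linear probe: least-squares weights mapping activations to labels.
w, *_ = np.linalg.lstsq(acts, 2 * labels - 1.0, rcond=None)

# The probe's weight vector recovers the planted feature direction.
cosine = (w @ direction) / np.linalg.norm(w)
print(f"cosine(probe, planted direction) = {cosine:.2f}")  # close to 1.0
```

On a real model the same recipe probes actual hidden states, and decomposition methods such as sparse autoencoders push further toward the structuralist question of how complex features compose from simpler ones.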
This challenges both sides of the grounding debate: it validates the structuralist intuition that relational form can carry meaning without referential content, while simultaneously showing that what LLMs learn is not "pure language" but socially and culturally situated discourse patterns. The concern from "Can language models learn meaning from text patterns alone?" (Bender & Koller) is not refuted but reframed: what counts as "sufficient" for meaning generation may not require what is necessary for meaning understanding.
Connects to "Does semantic grounding in language models come in degrees?" Computational structuralism explains why functional grounding succeeds: the relational structure of discourse is compressible and learnable. The question is whether this constitutes meaning or merely its simulation.
Original note title: "LLMs operationalize Saussure's langue — fully relational models with no external referents suffice to generate contextually appropriate discourse"