Language Understanding and Pragmatics

Can language models learn meaning without engaging the world?

Explores whether LLMs prove that meaning emerges from relational structure alone, independent of embodied experience or external reference. Tests structuralist theory empirically.

Note · 2026-04-18 · sourced from Linguistics, NLP, NLU
What grounds language understanding in systems without embodiment? What kind of thing is an LLM really? What happens to social order when AI removes ritual constraints?

"Computational Structuralism: Toward a Formal Theory of Meaning in the Age of Digital Intelligence" (2026) proposes a synthesis of deep learning, information theory, and French structuralism to interpret LLM success. The core argument: LLMs demonstrate that transformations over relational structure are sufficient for generating culturally and situationally specific discourse, and that such structure can be inductively derived from discourse traces alone — phenomenal or embodied engagement with the world is not a necessary condition.

The framework retraces the lineage from Saussure (language as a system of differences, meanings defined relationally) through Lévi-Strauss (extending structural analysis to culture broadly, binary oppositions as compression of complexity) to Bourdieu (habitus as transposable classification schemas operating in continuous social space). LLMs trained on web text learn not just grammar but the structure of culturally situated linguistic action — which voices make which statements in response to which situations, and how audiences respond.
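The Saussurean claim — that a token's meaning is its position in a system of differences, recoverable from discourse alone — can be sketched with a toy distributional model. The corpus, window size, and word choices below are illustrative assumptions, not anything from the paper; the point is only that similarity falls out of co-occurrence relations with no external referents anywhere in the system:

```python
# Toy sketch of relational meaning: each word is represented purely by
# its co-occurrence profile within the corpus. No referents, no world —
# only positions in a system of differences.
from collections import Counter
from math import sqrt

# Tiny illustrative corpus (an assumption for demonstration).
corpus = [
    "the king rules the realm".split(),
    "the queen rules the realm".split(),
    "the king wears a crown".split(),
    "the queen wears a crown".split(),
    "the peasant tills the field".split(),
    "the peasant works the field".split(),
]

vocab = sorted({w for sent in corpus for w in sent})

def cooc_vector(target, window=2):
    """Count context words within +/-window of each occurrence of target."""
    counts = Counter()
    for sent in corpus:
        for i, w in enumerate(sent):
            if w == target:
                lo, hi = max(0, i - window), min(len(sent), i + window + 1)
                for j in range(lo, hi):
                    if j != i:
                        counts[sent[j]] += 1
    return [counts[w] for w in vocab]

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = sqrt(sum(a * a for a in u))
    nv = sqrt(sum(b * b for b in v))
    return dot / (nu * nv) if nu and nv else 0.0

# "king" and "queen" occupy near-identical relational positions;
# "peasant" is distributionally distant from both.
print(cosine(cooc_vector("king"), cooc_vector("queen")))   # high
print(cosine(cooc_vector("king"), cooc_vector("peasant"))) # lower
```

LLM pretraining is of course vastly richer than a co-occurrence count, but the structural premise is the same: relational position within discourse traces is the only input signal.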

Key theoretical moves:

- Meaning as position in a system of differences, not reference to things in the world (Saussure)
- Cultural classification as compression of complexity, via binary oppositions (Lévi-Strauss)
- Habitus as transposable classification schemas operating in continuous social space (Bourdieu)
- Such relational structure is inductively derivable from discourse traces alone — which is what LLM pretraining does at scale

This challenges both sides of the grounding debate: it validates the structuralist intuition that relational form can carry meaning without referential content, while simultaneously showing that what LLMs learn is not "pure language" but socially and culturally situated discourse patterns. The concern raised in Can language models learn meaning from text patterns alone? (Bender & Koller) is not refuted but reframed: what suffices for generating meaningful discourse may not be what understanding it requires.

Connects to Does semantic grounding in language models come in degrees? — computational structuralism explains why functional grounding succeeds: the relational structure of discourse is compressible and learnable. The question is whether this constitutes meaning or merely its simulation.

Original note title

LLMs operationalize Saussure's langue — fully relational models with no external referents suffice to generate contextually appropriate discourse