LLM Reasoning and Architecture · Language Understanding and Pragmatics · Psychology and Social Cognition

Does language understanding happen only in the language system?

Explores whether the brain's core language system alone can produce genuine understanding, or whether deep comprehension requires exporting information to perception, motor, and memory regions.

Note · 2026-04-18 · sourced from Linguistics, NLP, NLU
What grounds language understanding in systems without embodiment? Where exactly do language models fail at structural language tasks?

"What Does It Mean to Understand Language?" (2025, arXiv:2511.19757) proposes that language understanding entails not just extracting surface-level meaning but constructing rich mental models of the situation described. Critically, the brain's core language system is fundamentally limited in what it can compute — deep understanding requires exporting information from the language system to other brain regions that handle perceptual and motor representations, construct mental models, and store world knowledge and autobiographical memories.

This has direct implications for the LLM grounding debate. If even human brains cannot understand language within the language system alone — if understanding requires routing to non-linguistic systems — then the architectural question for LLMs is not whether they have "enough language" but whether they have the non-linguistic systems to route to. Current transformer architectures are, in this framing, all language system with no export targets.

This provides neuroscientific grounding for claims in Are language models developing real functional competence or just formal competence? — formal competence lives within the language system, while functional competence requires the export pathways this paper describes. It also strengthens What makes linguistic agency impossible for language models? by showing the embodiment requirement is not just philosophical but neuroanatomical.

The "export" framing is more precise than the usual "embodiment" argument because it specifies what must be exported (situation models, perceptual simulations, motor plans) and where (domain-specific brain regions), making it potentially testable against multimodal AI architectures that integrate vision, action, and memory alongside language.
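The export framing above can be caricatured in code. The following is a deliberately toy sketch, not an implementation of the paper's model or of any real architecture; every function and name (`core_language_system`, `EXPORT_TARGETS`, the three target systems) is hypothetical. It only illustrates the structural claim: the language system alone yields a formal representation, and a situation model exists only after that representation is routed to non-linguistic systems.

```python
from dataclasses import dataclass, field

# Toy sketch of the "export" framing (all names hypothetical).
# The core language system alone supports only formal competence;
# "understanding" requires exporting its output to non-linguistic
# systems that each contribute to a situation model.

@dataclass
class SituationModel:
    """Accumulates contributions from non-linguistic systems."""
    contributions: dict = field(default_factory=dict)

def core_language_system(utterance: str) -> dict:
    # Formal competence only: surface structure, no grounding.
    return {"tokens": utterance.split(), "utterance": utterance}

# Hypothetical export targets, standing in for domain-specific
# brain regions (perceptual simulation, motor plans, memory).
def perceptual_system(rep): return f"imagery for: {rep['utterance']}"
def motor_system(rep):      return f"action plan for: {rep['utterance']}"
def memory_system(rep):     return f"episodes recalled for: {rep['utterance']}"

EXPORT_TARGETS = {
    "perception": perceptual_system,
    "motor": motor_system,
    "memory": memory_system,
}

def understand(utterance: str) -> SituationModel:
    rep = core_language_system(utterance)        # language system alone
    model = SituationModel()
    for name, system in EXPORT_TARGETS.items():  # the "export" step
        model.contributions[name] = system(rep)
    return model

model = understand("the cup fell off the table")
print(sorted(model.contributions))  # ['memory', 'motor', 'perception']
```

In this caricature, a text-only transformer corresponds to `core_language_system` with an empty `EXPORT_TARGETS` dict, while a multimodal architecture populates some of the targets; the testable question the note raises is whether those modules play the integrative role the paper assigns to non-linguistic brain regions.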

Original note title

deep language understanding requires exporting information from the core language system to perceptual motor and memory systems