Language Understanding and Pragmatics · LLM Reasoning and Architecture

Why do language models fail at communicative optimization?

LLMs excel at learning surface statistical patterns from text but struggle with deeper principles of how language achieves efficient communication. What distinguishes these two types of linguistic knowledge?

Note · 2026-02-21 · sourced from Linguistics, NLP, NLU
Where exactly does language competence break down in LLMs? How should researchers navigate LLM reasoning research?

"Do Large Language Models Resemble Humans in Language Use?" (Yiu et al. 2023) evaluates LLMs on a wide range of human linguistic regularities — not just grammaticality but psycholinguistic phenomena. The results show a consistent pattern of success and failure that tracks a specific distinction.

LLMs succeed on regularities that are learnable from distributional patterns in text — those that appear consistently across large corpora and can be acquired through form-to-form prediction.
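As a toy illustration (not from the source) of what "form-to-form prediction" means at its simplest, a minimal bigram model learns a surface regularity — which word follows which — purely from co-occurrence counts, with no notion of communication:

```python
from collections import Counter, defaultdict

# Toy corpus: the bigram statistics alone encode a surface regularity
# (e.g. "sat" is followed by "on"), with no communicative principle involved.
corpus = "the cat sat on the mat the dog sat on the rug".split()

# Count bigram transitions: pure form-to-form prediction.
transitions = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    transitions[prev][nxt] += 1

def predict(word):
    """Return the most frequent next word under the bigram counts."""
    return transitions[word].most_common(1)[0][0]

print(predict("sat"))  # → "on", learned purely from co-occurrence
```

Scaled up to neural next-token prediction over web-scale corpora, this is the kind of signal the successes above can be acquired from.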

LLMs fail on regularities that require something beyond distributional pattern matching. These involve principles of why language works for communication — efficiency under communicative pressure, contextual interpretation that goes beyond local statistics, and integration across discourse.

The discriminating principle: statistical regularities that appear as consistent patterns in training data transfer. Regularities that emerge from communicative optimization — the pragmatic logic of why language has the forms it does — do not transfer, because they are not present in surface form as trainable signals.
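To make "efficiency under communicative pressure" concrete, here is a minimal sketch (my illustration, not from the source, with made-up frequencies): an optimal prefix code built over a toy lexicon assigns shorter forms to more frequent words. That property emerges from the optimization itself — it is a consequence of communicative efficiency, not something read off individual surface forms:

```python
import heapq

# Hypothetical word frequencies for a toy lexicon (illustrative numbers).
freqs = {"the": 50, "of": 30, "cat": 10, "aardvark": 1}

# Build a Huffman code: the optimal prefix code under these frequencies.
# Heap entries: (total frequency, tiebreak id, {word: code bits so far}).
heap = [(f, i, {w: ""}) for i, (w, f) in enumerate(freqs.items())]
heapq.heapify(heap)
tiebreak = len(heap)
while len(heap) > 1:
    f1, _, c1 = heapq.heappop(heap)
    f2, _, c2 = heapq.heappop(heap)
    merged = {w: "0" + code for w, code in c1.items()}
    merged.update({w: "1" + code for w, code in c2.items()})
    heapq.heappush(heap, (f1 + f2, tiebreak, merged))
    tiebreak += 1
codes = heap[0][2]

# Efficiency pressure alone yields short forms for frequent words:
assert len(codes["the"]) < len(codes["aardvark"])
print({w: len(c) for w, c in codes.items()})
```

A learner that only imitates surface forms can reproduce the short words it sees, but the frequency-length relationship itself is the signature of optimization — which is the kind of regularity the note argues does not transfer.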




LLMs replicate local statistical regularities in language but fail to acquire communicative optimization principles.