Where exactly does language competence break down in LLMs?

How LLMs handle language structure reveals systematic gaps between surface fluency and deep linguistic competence.

Topic Hub · 10 linked notes · 6 sections

Sub-Topic Maps

2 notes

Where exactly do language models fail at structural language tasks?

LLMs perform well on explicit, consistent language patterns but struggle with implicit structure and inference. Understanding where and why these breakdowns occur helps identify fundamental limitations in what models actually learn about language.

Why do LLMs fail at understanding what remains unsaid?

LLMs excel at pattern-matching surface language but struggle with pragmatics—meaning derived from context, speaker intent, and what's deliberately left implicit. This gap reveals a fundamental limitation in how LLMs acquire language competence compared to humans.

Writing Angles

3 notes

Why do language models sound fluent without grounding?

Explores whether LLM fluency masks the absence of communicative work: the clarifying questions, acknowledgments, and understanding checks that humans perform. Why does skipping these acts make models sound more confident?

Does preference optimization harm conversational understanding?

Explores whether RLHF training that rewards confident, complete responses undermines the grounding acts (clarifications, checks, acknowledgments) that actually build shared understanding in dialogue.

Why do language models agree with claims they know to be false?

Explores whether LLM errors come from knowledge gaps or from learned social behaviors. Understanding the root cause has implications for how we train and fix these systems.

Structural and Cultural Perspectives

1 note