Language Understanding and Pragmatics

Why do large language models produce generic responses to vague queries?

When users fail to specify contextual details in prompts, do LLMs collapse multiple training contexts into a single generic response? Understanding this failure mode could improve how we scaffold user-model interaction.

Note · 2026-05-01 · sourced from Conversation Topics Dialog
Why do AI conversations reliably break down after multiple turns? What grounds language understanding in systems without embodiment?

Context collapse, as introduced by Meyrowitz and elaborated by danah boyd, describes how electronic media merge previously separated audiences into a single communicative context, forcing speakers to adopt one register that satisfies none. Stokely Carmichael's Black-audience rhetoric became universally audible once it was broadcast on television and radio, and he had to choose a single register for every listener at once. The same dynamic appears on social media: posts persist, replicate, and reach audiences the speaker never intended.

Kasirzadeh and Gabriel argue that LLM conversation produces a different form of context collapse. The collapse comes not from audience merging (there is only one user) but from inadequate scaffolding combined with the model's default behavior. When a user asks for advice on a "work conflict" without specifying their industry, the model cannot infer the situational boundaries, so it blends training-data priors from corporate, academic, and gig-economy contexts into a single generic response. The collapse happens between the contexts the model was trained on, not between the user's actual audiences.

This distinction matters because it locates the failure differently. Social-media context collapse is a property of the platform and its visibility settings. LLM context collapse is a property of the user-model interface: the user's mistaken expectation that the model possesses human-like pragmatic capacities to infer the situation, plus the model's training-data-driven defaults when that expectation is not met. Mitigations differ accordingly. Social-media remedies focus on audience controls; LLM remedies focus on context verification, query-back protocols, and user-driven scaffolding tools.


Source: Conversation Topics Dialog

Original note title: Context collapse in LLM conversation arises from scaffolding failure, not audience flattening