Language Understanding and Pragmatics · Psychology and Social Cognition · Conversational AI Systems

Can LLMs truly update shared conversational common ground?

Explores whether large language models can participate symmetrically in Stalnaker's picture of communication, where speakers mutually revise shared assumptions. The question matters because it reveals whether human-LLM dialogue is genuinely interactive or structurally asymmetrical.

Note · 2026-05-01 · sourced from Conversation Topics Dialog
Why do AI conversations reliably break down after multiple turns? Where exactly do language models fail at structural language tasks?

On Stalnaker's picture, communication is a process of mutually proposing and accepting updates to shared assumptions. Each assertion is a candidate for incorporation into common ground; participants accept, query, or reject. The common ground evolves as conversation proceeds, and that evolution is itself the substance of communication.

LLMs cannot participate in this process symmetrically. The prompt establishes the model's working context, and the model interprets subsequent turns within that frame. Even when a user pivots — shifting from climate policy to historical precedent, or revealing they are not actually a five-year-old after asking for a five-year-old explanation — the LLM cannot smoothly absorb the revision into a jointly held common ground. It either ignores the pivot, fabricates continuity, or requires the user to re-scaffold from scratch. The asymmetry is structural: humans propose, the LLM either adopts or routes around, but the LLM cannot itself propose updates that change what counts as background.
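The structural shape of this asymmetry can be sketched in code. The sketch below is hypothetical (no real LLM API; `CommonGround`, `model_reply`, and all names are illustrative): the user-side client owns the entire message history and can revise it, while each model call receives that history read-only and can only append a reply, never rewrite what counts as background.

```python
from dataclasses import dataclass, field

@dataclass
class CommonGround:
    """User-maintained conversational scoreboard (hypothetical)."""
    assumptions: list[str] = field(default_factory=list)

    def revise(self, old: str, new: str) -> None:
        # Only the user-side client can rewrite shared background.
        self.assumptions = [new if a == old else a for a in self.assumptions]

def model_reply(history: list[str]) -> str:
    """Stand-in for an LLM call: reads a frozen history, returns text.
    Nothing here can mutate the common ground upstream of the call."""
    return f"(reply framed within {len(history)} prior turns)"

ground = CommonGround(["audience: five-year-old"])

# User pivots: the revision happens entirely on the user's side.
ground.revise("audience: five-year-old", "audience: adult")
history = list(ground.assumptions) + ["explain climate policy"]

reply = model_reply(history)
history.append(reply)  # the model's only move: append, never revise
```

The point of the sketch is that `revise` is a method of the user-side object; there is no code path by which `model_reply` could call it. Whatever the reply says, the scoreboard stays one-sidedly maintained.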

This is a deeper deficit than failures of memory or inference. It means that the conversational scoreboard — Lewis's mechanism for tracking what counts as a felicitous next move — is one-sidedly maintained by the user. The user is keeping score for both players. The model is producing moves that look responsive but cannot reciprocally update the score in the way the conversational practice requires. What looks like dialogue is structurally closer to oracle-consultation, where the questioner provides all context and the oracle returns a response framed within it.


Source: Conversation Topics Dialog

Original note title: Common ground in human-LLM conversation cannot be jointly updated because the LLM treats prompts as static frames