Conversational AI Systems · Language Understanding and Pragmatics

Why do dialogue systems lose context when topics return?

Stack-based dialogue management removes topics after they're resolved, making it hard for systems to reference them later. Does this structural rigidity explain why conversational AI struggles with topic revisitation?

Note · 2026-02-22 · sourced from Conversation Architecture Structure
Related: Why do AI conversations reliably break down after multiple turns? · What kind of thing is an LLM really? · How should researchers navigate LLM reasoning research?

Grosz and Sidner (1986) proposed representing a conversation's attentional state as a stack of focus spaces, one per discourse segment; because segments can interleave, the segment on top of the stack need not directly follow the previous one in the conversation. The idea was sound: conversations contain embedded sub-dialogues that need tracking. RavenClaw later implemented this as a dialogue stack for handling sub-dialogues.

But the strict structure of a stack is limiting. When a topic is popped from the stack, it is no longer available to provide context. Consider:

BOT: Your total is $15.50 — shall I charge the card you used last time?
USER: Do I still have credit from that refund?
BOT: Yes, your account is $10 in credit.
USER: Ok, great.
BOT: Shall I place the order?
USER: Yes.
BOT: Done.
USER: So that used up my credit, right?

The last question refers back to the refund-credit topic. If that topic has already been popped from the stack, the system has no representation left with which to interpret what the user is asking about. Humans freely revisit and interleave topics with no such structural constraint, so a stack is too rigid.
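To make the failure concrete, here is a minimal Python sketch of stack-based topic management, loosely in the spirit of RavenClaw's dialogue stack. The `Topic` and `DialogueStack` classes are hypothetical names for illustration, not any real framework's API: once the refund-credit topic is popped, nothing remains to ground the user's final question.

```python
class Topic:
    def __init__(self, name, facts=None):
        self.name = name
        self.facts = facts or {}   # context accumulated while the topic is active

class DialogueStack:
    def __init__(self):
        self._stack = []

    def push(self, topic):
        self._stack.append(topic)

    def pop(self):
        # Once popped, a topic's facts vanish from the manager's view.
        return self._stack.pop()

    def resolve_reference(self, mention):
        # Only topics still on the stack can ground a user's reference.
        for topic in reversed(self._stack):
            if mention in topic.facts:
                return topic.facts[mention]
        return None

stack = DialogueStack()
stack.push(Topic("order", {"total": "$15.50"}))
stack.push(Topic("refund_credit", {"credit": "$10"}))

print(stack.resolve_reference("credit"))  # "$10" while the topic is live

stack.pop()   # refund-credit sub-dialogue resolved ("Ok, great.")
stack.pop()   # order completed ("Done.")

# USER: "So that used up my credit, right?"
print(stack.resolve_reference("credit"))  # None: the context is gone
```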

The Dialogue Transformer architecture argues for using transformer self-attention as a more flexible alternative. Rather than explicit topic management with push/pop operations, the attention mechanism can attend to any previous turn in the conversation regardless of structural position. This naturally supports topic revisitation without the context loss that stacks impose.
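A toy sketch of the contrast, using dot-product attention over stand-in turn embeddings (random vectors seeded for reproducibility, not the learned representations an actual Dialogue Transformer would use): every past turn remains addressable by content, so the refund-credit turn can win the attention competition even though it was "resolved" several turns ago.

```python
import numpy as np

rng = np.random.default_rng(0)

turns = [
    "Your total is $15.50 -- shall I charge the card you used last time?",
    "Do I still have credit from that refund?",
    "Yes, your account is $10 in credit.",
    "Shall I place the order?",
    "Done.",
]

dim = 16
keys = rng.normal(size=(len(turns), dim))   # one vector per past turn
# Make the query overlap with the refund-credit turn (index 2) so the
# example is deterministic: "So that used up my credit, right?"
query = keys[2] + 0.1 * rng.normal(size=dim)

scores = keys @ query                       # similarity to every past turn
weights = np.exp(scores - scores.max())
weights /= weights.sum()                    # softmax over the whole history

best = int(weights.argmax())
print(f"most attended turn: {best} -> {turns[best]!r}")
# No turn was ever discarded: the refund-credit turn is reachable purely
# by content, regardless of where it sat in any topic structure.
```

The design difference is that relevance is computed at query time rather than pre-declared by push/pop bookkeeping, so no topic-management decision can discard context a later turn turns out to need.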

This connects to the multi-turn conversation failure mode. As the note "Why do language models fail in gradually revealed conversations?" argues, one mechanism of getting lost is losing access to earlier conversation context when topics shift and return. The stack metaphor makes this loss explicit and structural; transformer attention should prevent it in principle, though in practice attention patterns may still favor recent context.


Source: Conversation Architecture Structure

dialogue topic management requires flexible revisitation not rigid stack structures — popped topics lose context even when users return to them