Language Understanding and Pragmatics in Conversational AI Systems

How do readers track segments, purposes, and salience together?

Can discourse processing actually happen in parallel rather than sequentially? This matters because understanding how readers coordinate multiple layers of meaning at once reveals where AI systems break down in comprehension.

Note · 2026-02-21 · sourced from Discourses
Where exactly does language competence break down in LLMs? How should researchers navigate LLM reasoning research?

Discourse processing, according to Grosz & Sidner, requires three recognition tasks happening in parallel:

  1. How utterances aggregate into linguistic segments
  2. What intentions are expressed in each segment and how those intentions relate to each other
  3. What is currently salient (objects, properties, relations) as the discourse unfolds

The important point is that these tasks are not sequential. You cannot recognize segments first, then extract intentions, then update salience — they constrain each other during processing. An intention shift often marks a segment boundary; a reference resolves only against the current attentional state.
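The mutual constraint between layers can be made concrete in a minimal sketch. Everything here is hypothetical illustration, not an implementation from the literature: `CUE_WORDS`, `FocusSpace`, and the cue-word heuristic are stand-ins for whatever segmentation and intention-recognition signals a real system would use. The point the sketch shows is structural: an intention shift simultaneously opens a new linguistic segment and pushes a new attentional focus space, and pronoun resolution then searches that focus stack.

```python
from dataclasses import dataclass, field

# Hypothetical cue phrases signalling an intention shift (stand-in heuristic).
CUE_WORDS = {"anyway", "incidentally", "first", "finally"}

@dataclass
class FocusSpace:
    purpose: str                                   # the segment's discourse purpose
    entities: list = field(default_factory=list)   # salient entities, most recent last

class DiscourseState:
    def __init__(self):
        self.stack = [FocusSpace("root")]          # attentional state as a focus stack

    def process(self, utterance, entities, purpose=None):
        # An intention shift (signalled here by a cue word) is simultaneously
        # a segment boundary AND a push of a new focus space: the three layers
        # constrain each other rather than being computed in sequence.
        first = utterance.split()[0].lower().strip(",")
        if purpose or first in CUE_WORDS:
            self.stack.append(FocusSpace(purpose or f"segment:{first}"))
        self.stack[-1].entities.extend(entities)

    def resolve_pronoun(self):
        # Reference resolves against the attentional state: search the
        # focus stack from the innermost (most recent) space outward.
        for space in reversed(self.stack):
            if space.entities:
                return space.entities[-1]
        return None

state = DiscourseState()
state.process("The printer jammed again.", ["printer"])
state.process("Incidentally, call Dana about the budget.", ["Dana", "budget"])
print(state.resolve_pronoun())  # → "budget": resolves inside the interrupting segment
```

The design choice worth noting: because segmentation and focus pushing happen in one step, a wrong segment boundary immediately corrupts reference resolution, which is exactly the coupled failure the text describes.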

This creates a structural challenge for architectures that process language linearly. Even if each component is handled well in isolation, their coordination across a long context is what breaks down. When LLMs fail at tasks like understanding interrupted dialogues or resolving pronouns across far-apart segments, the failure is specifically in the joint tracking of all three layers.

The implication for AI evaluation is that tests of discourse understanding should probe all three layers together, not in isolation. A model that passes coreference tests (attentional) may still fail to detect intentional-structure shifts, and vice versa.
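A joint-scoring harness makes the difference from isolated tests visible. The probe format, field names, and example dialogue below are all hypothetical; the only claim carried over from the text is the scoring rule: a model earns credit on a dialogue only if it answers both the attentional and the intentional question correctly.

```python
# Hypothetical joint three-layer probe: credit is awarded per dialogue only when
# the attentional answer AND the intentional answer are both correct.

probes = [
    {
        "dialogue": ["A: Fix the printer.", "B: Anyway, lunch?", "A: Sure."],
        "attentional": ("what does 'Sure' accept?", "lunch"),
        "intentional": ("index of the segment boundary?", 1),
    },
]

def joint_score(answers, probes):
    # answers: one dict of model outputs per probe (hypothetical format)
    passed = 0
    for ans, probe in zip(answers, probes):
        ok_att = ans.get("attentional") == probe["attentional"][1]
        ok_int = ans.get("intentional") == probe["intentional"][1]
        passed += ok_att and ok_int      # joint credit only, no partial marks
    return passed / len(probes)

print(joint_score([{"attentional": "lunch", "intentional": 1}], probes))  # → 1.0
```

A model that resolves "Sure" correctly but misses the segment boundary scores 0 on this probe, whereas two separate single-layer benchmarks would report it as half-passing.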

Failure mode taxonomy via DEAM: The DEAM framework operationalizes discourse coherence failure through AMR (Abstract Meaning Representation) manipulation, identifying four distinct semantic-level failure modes: contradiction (conflicting propositions), coreference inconsistency (entity reference failures), irrelevancy (off-topic contributions), and decreased engagement (disengagement patterns). Each failure mode maps to a specific breakdown in the Grosz & Sidner layers: contradiction and coreference inconsistency affect the attentional state, irrelevancy disrupts intentional structure, and decreased engagement signals segment-level disengagement (What semantic failures break dialogue coherence most realistically?).
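The DEAM-to-layer mapping just described can be written down directly as a lookup table. The mapping content comes from the note itself; the table and helper names are a hypothetical convenience for tagging observed dialogue failures.

```python
# The DEAM failure-mode → Grosz & Sidner layer mapping described above.
# (Mapping from the note; the helper function is a hypothetical convenience.)
DEAM_TO_LAYER = {
    "contradiction": "attentional",
    "coreference_inconsistency": "attentional",
    "irrelevancy": "intentional",
    "decreased_engagement": "segmental",
}

def affected_layer(failure_mode: str) -> str:
    """Return which Grosz & Sidner layer a DEAM failure mode breaks."""
    return DEAM_TO_LAYER[failure_mode]

print(affected_layer("irrelevancy"))  # → "intentional"
```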

Operationalization via Conversational DNA: The Conversational DNA project provides a concrete visualization method for tracking this multi-dimensional coherence. Linguistic complexity (sentence length, syntactic depth, vocabulary diversity), emotional valence (VADER + RoBERTa), topic coherence (LDA with sliding window), and conversational relevance (semantic similarity + discourse markers + pronoun resolution) are processed as simultaneous parallel streams. This moves the Grosz & Sidner framework from theoretical claim to operational tool — emergent patterns in the interaction between these temporal streams reveal conversational dynamics invisible to traditional statistical analysis (Can tracking dialogue dimensions simultaneously reveal hidden conversation patterns?).
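The parallel-streams idea can be sketched with lightweight stand-ins: mean word length for linguistic complexity, type-token ratio for vocabulary diversity, and word overlap with the previous turn as a crude relevance proxy. To be clear, the Conversational DNA project itself uses VADER/RoBERTa, LDA, and embedding similarity; the substitutes below are assumptions chosen only so the sketch stays self-contained while still computing all streams per turn.

```python
# Per-turn parallel metric streams with stdlib-only stand-ins for the real
# analyzers (VADER/RoBERTa, LDA, embedding similarity are NOT used here).

def streams(turns):
    rows, prev_words = [], set()
    for turn in turns:
        words = turn.lower().split()
        complexity = sum(len(w) for w in words) / len(words)   # mean word length
        diversity = len(set(words)) / len(words)               # type-token ratio
        # crude relevance proxy: fraction of this turn's words seen last turn
        relevance = (len(set(words) & prev_words) / len(set(words))
                     if prev_words else 0.0)
        rows.append({"complexity": round(complexity, 2),
                     "diversity": round(diversity, 2),
                     "relevance": round(relevance, 2)})
        prev_words = set(words)
    return rows

dialogue = ["the printer is jammed",
            "try turning the printer off",
            "what about lunch"]
for row in streams(dialogue):
    print(row)
```

Even this toy version shows the payoff the note claims: the relevance stream drops to zero at the third turn while complexity barely moves, a pattern no single aggregate statistic over the whole dialogue would surface.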


Source: Discourses, Conversation Agents, Conversation Architecture Structure

Original note title: discourse coherence requires simultaneously tracking segments, purposes, and salient objects