Why do users drift away from their original information need?
When users know their knowledge is incomplete but cannot articulate what's missing, do they unintentionally shift topics? And can real-time systems detect this drift?
Information science identified a specific cognitive condition decades before conversational AI made it a practical design problem. Belkin & Vickery (1985) named it the "anomalous state of knowledge" (ASK): users who know their knowledge is incomplete but cannot articulate what is missing. They know they need something but cannot specify what.
This matters because it produces a specific observable behavior: unintentional topic drift. Users in an ASK state begin pursuing one information need, then gradually deviate into sub-topics without realizing it. They do not decide to change topic; they drift, because each intermediate result partially addresses their need while also exposing adjacent gaps, pulling their attention sideways.
The Topic Shift Detection paper demonstrates that this drift is detectable. Their model predicts with 84% precision which utterances belong to the major topic versus those deviating from it — without a predetermined topic set. This is significant because open-domain systems cannot predefine all possible topics. The detection must work from conversational dynamics alone.
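The paper's model and features are not reproduced here, but a minimal sketch can show what detection from conversational dynamics alone might look like: compare each utterance to a running embedding centroid of the major topic and flag turns whose similarity drops below a threshold. The encoder choice, the threshold, and the centroid-update rule below are illustrative assumptions, not the paper's method.

```python
# Minimal drift-detection sketch: flag utterances that fall away from a
# running centroid of the conversation's major topic. All constants are
# illustrative; the paper's actual model is not reproduced here.
import numpy as np
from sentence_transformers import SentenceTransformer

encoder = SentenceTransformer("all-MiniLM-L6-v2")  # assumed general-purpose encoder

def detect_drift(utterances, threshold=0.45):
    """Return (utterance, is_drift) pairs. An utterance is flagged as drifting
    when its cosine similarity to the major-topic centroid drops below `threshold`."""
    vectors = encoder.encode(utterances, normalize_embeddings=True)
    centroid = vectors[0].copy()  # seed the major topic with the opening turn
    flags = []
    for utt, vec in zip(utterances, vectors):
        similarity = float(np.dot(vec, centroid))  # cosine, since vectors are unit-normalized
        is_drift = similarity < threshold
        flags.append((utt, is_drift))
        if not is_drift:
            # fold on-topic turns back into the centroid so the major topic can evolve
            centroid = centroid + vec
            centroid /= np.linalg.norm(centroid)
    return flags
```

The centroid update is the interesting design choice: on-topic turns keep redefining the major topic, so gradual, legitimate topic evolution is tolerated while sideways jumps into sub-topics still get flagged.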
This complements the gulf of envisioning on the user side. As explored in "How do users actually form intent when prompting AI systems?", the gulf describes the intent-formation challenge. ASK describes a specific upstream cause: the user's knowledge state is anomalous in a way that prevents intent articulation. And it predicts a specific downstream effect: topic drift.
The two phenomena create a feedback loop: anomalous knowledge → vague query → partial results → exposed new gaps → drift into sub-topic → further from original need → more anomalous knowledge. Without active intervention, the user spirals away from their actual information need.
This also connects to the AI-side problem. As "Why do language models engage with conversational distractors?" explores, the drift is bilateral: the user drifts because of ASK, and the AI follows the drift because it lacks topic-following discipline. Neither party maintains the thread. The paper argues for "context-dependent user guidance without presupposing a strict hierarchy of plans and task goals": guidance that adapts to where the user actually is rather than where a predetermined dialogue tree expects them to be.
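One way to make "context-dependent guidance" concrete, as a sketch layered on the drift detector above rather than the paper's design: instead of silently following every drifting turn, track consecutive off-topic turns and, past a small patience threshold, surface the original need and let the user choose. The function name, the patience value, and the wording of the intervention are all assumptions.

```python
# Drift-aware guidance sketch (hypothetical; builds on detect_drift above).
# No plan hierarchy: the only state is the drift streak and the original need.
def guide_turn(history, new_utterance, original_need, drift_streak=0, patience=2):
    """Return (hint_or_None, updated_drift_streak). After `patience` consecutive
    drifting turns, surface the original need instead of silently following."""
    flags = detect_drift(history + [new_utterance])
    _, is_drift = flags[-1]
    drift_streak = drift_streak + 1 if is_drift else 0
    if drift_streak >= patience:
        hint = (f"You started out asking about {original_need!r}. "
                "Keep exploring this sub-topic, or come back to that?")
        return hint, 0  # reset the streak after intervening
    return None, drift_streak
```

The point of the sketch is that the user, not a dialogue tree, decides whether the sub-topic becomes the new topic; the system only makes the drift visible.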
Source: Question Answer Search
Related concepts in this collection
- How do users actually form intent when prompting AI systems?
  Users face a 'gulf of envisioning': they must simultaneously imagine possibilities and express them to language models. This cognitive gap creates breakdowns not from AI incapability but from users struggling to articulate what they truly need.
  Relation: ASK is the upstream cognitive cause; the gulf of envisioning is the interaction-level consequence.
- Why do language models engage with conversational distractors?
  Explores why state-of-the-art LLMs struggle to maintain topical focus when users introduce off-topic turns, despite having explicit scope instructions. This gap suggests models lack training signals for ignoring irrelevant directions.
  Relation: bilateral drift, where the user drifts (ASK) and the AI follows (topic-following gap).
- Why do AI assistants get worse at longer conversations?
  Explores why LLM performance drops 25 points when instructions span multiple turns instead of one message, and whether models can recover from early wrong assumptions.
  Relation: ASK-driven drift is an unintentional wrong turn that neither party notices.
- Why do language models fail in gradually revealed conversations?
  Explores why LLMs perform 39% worse when instructions arrive incrementally rather than upfront, and whether they can recover from early mistakes in multi-turn dialogue.
  Relation: ASK is the user-side cause of the underspecification that triggers premature assumptions.
- Does user satisfaction actually measure cognitive understanding?
  Users may report satisfaction while remaining internally confused about their needs. This explores whether traditional satisfaction metrics capture genuine clarity or merely social politeness.
  Relation: users in an ASK state will express satisfaction with partial answers that don't resolve their confusion, so satisfaction scores won't detect ASK-driven drift.
- Can models learn to abstain when uncertain about predictions?
  Explores whether language models can be trained to recognize when they lack sufficient information to forecast conversation outcomes, rather than forcing uncertain predictions into confident-sounding responses.
  Relation: ASK-driven topic drift (detectable at 84% precision) is a concrete forecasting target; calibrated models could predict when users are entering anomalous knowledge states and intervene before drift compounds.
- Why do dialogue systems lose context when topics return?
  Stack-based dialogue management removes topics after they're resolved, making it hard for systems to reference them later. Does this structural rigidity explain why conversational AI struggles with topic revisitation?
  Relation: ASK-driven drift creates the unintentional topic switches that flexible topic management must accommodate; rigid stack structures lose context when ASK causes users to drift, requiring attention-based revisitation.
- Does including all conversation history actually help retrieval?
  Conversational search systems typically use all previous context to understand current queries. But do topic switches in multi-turn conversations inject noise that degrades performance rather than helps it?
  Relation: ASK-driven drift injects the irrelevant context that selective history must filter; users drifting into sub-topics create the very topic-switch noise that entity-based selection mechanisms detect.
Original note title: anomalous state of knowledge is a distinct cognitive condition where users cannot articulate incomplete knowledge, leading to unintentional topic drift detectable in real-time