How do users actually form intent when prompting AI systems?
Users face a 'gulf of envisioning'—they must simultaneously imagine possibilities and express them to language models. This cognitive gap creates breakdowns not from AI incapability but from users struggling to articulate what they truly need.
The STORM framework names a fundamental gap in human-AI interaction: the "gulf of envisioning." Unlike conventional interfaces with predictable affordances, language models require users to simultaneously envision possibilities AND express them. This cognitive difficulty produces communication breakdowns — not because the AI is incapable, but because the user cannot articulate a prompt that captures what they actually need.
The deeper formalization is that human intent formation involves progressive constraint resolution with fluctuating stability intervals and distinct structural signaling patterns. Intent is not binary (present or absent, clear or ambiguous). It MATURES through interaction — starting vague, acquiring constraints, stabilizing, sometimes destabilizing when new information arrives, then reconsolidating. Current evaluation methods fail because they: (1) treat intent as binary, (2) lack frameworks for temporal coherence, and (3) overlook structural signals within expressions.
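A minimal sketch of that formalization, assuming a simple slot-and-stability representation; the names (`IntentState`, `add_constraint`) and thresholds are illustrative, not STORM's:

```python
# Hypothetical sketch: intent as a maturing constraint set rather than a binary flag.
from dataclasses import dataclass, field

@dataclass
class IntentState:
    constraints: dict[str, str] = field(default_factory=dict)  # resolved slots, e.g. {"budget": "<$500"}
    stability: float = 0.0  # 0 = vague, 1 = consolidated; fluctuates across turns

    def add_constraint(self, slot: str, value: str) -> None:
        """New information resolves a slot and nudges stability upward."""
        self.constraints[slot] = value
        self.stability = min(1.0, self.stability + 0.2)

    def destabilize(self, slot: str) -> None:
        """New information can also invalidate an earlier constraint, dropping stability."""
        self.constraints.pop(slot, None)
        self.stability = max(0.0, self.stability - 0.3)

    @property
    def mature(self) -> bool:
        # Intent counts as "mature" only once enough constraints have stabilized,
        # not at the moment the user first states a goal.
        return self.stability >= 0.8 and len(self.constraints) >= 3
```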
STORM models this through asymmetric information dynamics: UserLLM has full access to internal states (preferences, emotions, background) while AgentLLM has only observable dialogue history. This asymmetry mirrors real human-AI interaction — the AI cannot access the user's unstated context, unresolved preferences, or evolving understanding of their own needs.
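A hedged sketch of that asymmetry (the function and argument names are assumptions, not the published STORM harness): the user simulator conditions on private state plus history, while the agent sees only the observable transcript.

```python
# Assumed two-agent simulation loop: the user agent holds private state the assistant never sees.
def simulate_dialogue(user_llm, agent_llm, private_state: dict, turns: int = 5) -> list[dict]:
    history: list[dict] = []  # the only thing the assistant agent can condition on
    for _ in range(turns):
        # UserLLM sees both its hidden preferences/emotions and the public history.
        user_msg, inner_thought = user_llm(private_state, history)
        history.append({"role": "user", "text": user_msg, "inner_thought": inner_thought})
        # AgentLLM sees observable messages only; inner thoughts are stripped out.
        observable = [{"role": m["role"], "text": m["text"]} for m in history]
        agent_msg = agent_llm(observable)
        history.append({"role": "agent", "text": agent_msg})
    return history
```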
The novel Clarify metric measures whether agent responses genuinely improve users' understanding of their own needs — assessed through analysis of simulated user inner thoughts rather than external expressions. This captures an invisible cognitive process: a user may SAY "thanks, that's helpful" while internally remaining confused about what they actually want.
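One way such a metric could be computed, sketched under the assumption of an LLM judge comparing consecutive inner thoughts; the prompt wording and scoring rule are illustrative, not the paper's exact protocol:

```python
# Assumed Clarify-style check: score internal clarity gains, not outward politeness.
def clarify_score(inner_thoughts: list[str], judge_llm) -> float:
    """Fraction of agent turns after which the simulated user's *internal* clarity improved."""
    pairs = list(zip(inner_thoughts, inner_thoughts[1:]))
    improved = 0
    for before, after in pairs:
        verdict = judge_llm(
            f"Before the agent replied, the user privately thought: {before}\n"
            f"After the reply, the user privately thought: {after}\n"
            "Did the user's understanding of their own need improve? Answer yes or no."
        )
        improved += verdict.strip().lower().startswith("yes")
    return improved / len(pairs) if pairs else 0.0
```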
As explored in "Why do language models fail in gradually revealed conversations?", the STORM framing treats this not as a pure AI failure but as a joint user-AI failure. The user's expressions contain structural signals — stylistic choices, implicit assumptions, cultural markers — that reflect what Wittgenstein called contextual embeddedness within "forms of life." Current systems cannot access these embedded cues.
The practical implications: satisfaction derived from inner thoughts (internal contentment), clarification effectiveness (Clarify metric), and Satisfaction-Seeking Actions (SSA — composite of both) provide three complementary evaluation dimensions that together capture what single-metric evaluation misses.
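A toy illustration of reporting the three dimensions side by side; the equal weighting in the composite is an assumption, not STORM's published formula:

```python
# Illustrative composition only; the actual SSA aggregation may differ.
def evaluate_dialogue(satisfaction: float, clarify: float, w: float = 0.5) -> dict:
    """Report the three complementary dimensions together rather than a single score."""
    return {
        "satisfaction": satisfaction,  # internal contentment, read from inner thoughts
        "clarify": clarify,            # did responses sharpen the user's own intent?
        "ssa": w * satisfaction + (1 - w) * clarify,  # composite Satisfaction-Seeking Actions score
    }
```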
The original gulf of envisioning paper (Zamfirescu-Pereira et al., 2023) defines three specific misalignment gaps: (1) the capability gap — not knowing what the task should be (what can the LLM even do?); (2) the instruction gap — not knowing how best to instruct the LLM about goals (prompt engineering difficulty); (3) the intentionality gap — not knowing what to expect from the LLM's output in meeting the goal. The paper notes that traditional HCI inadvertently bypassed intention formation because conventional interfaces have fixed command vocabularies — clicking "Bold" doesn't require envisioning what boldness means. LLM interfaces require envisioning at every step. The iterative process resembles a "20-questions" or "Hot or Cold" guessing game that can be inefficient for longer outputs and lead to local minima within the solution space. Further, humans fixate on initial examples, which interferes with exploring alternative solutions.
UserBench (2025) quantifies the downstream consequences: models provide answers that fully align with ALL user intents only 20% of the time, and even the best models uncover fewer than 30% of user preferences through active interaction. The three core traits of user communication — underspecification, incrementality, indirectness — are not edge cases but the default condition.
A concrete domain-specific validation comes from LP (linear programming) dialogue research: individuals without specialized mathematical backgrounds "often struggle to formulate the appropriate linear models for their specific problem instances." The proposed solution — a two-agent synthetic dialogue system where one agent simulates the conversational assistant and the other emulates the user — is specifically designed to elicit information the user possesses but cannot organize into a formal structure. This is a clean instance of the gulf of envisioning: the user has the problem knowledge (constraints, objectives) but literally cannot state it as a model without conversational assistance. Mathematical problem formulation thus serves as a particularly transparent example of intent maturation — the user's "intent" (to solve their LP problem) is real but unformulable without guided dialogue.
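A schematic version of that elicitation loop, with all function names and the extraction convention assumed for illustration: the assistant agent converts the user's informal answers into the formal objective and constraints the user cannot write down unaided.

```python
# Toy sketch (not the cited system) of dialogue-driven LP formulation.
def elicit_lp_model(assistant_llm, user_llm, max_turns: int = 8) -> dict:
    model = {"objective": None, "constraints": []}  # the formal structure the user cannot state directly
    history: list[str] = []
    for _ in range(max_turns):
        question = assistant_llm(history, model)      # e.g. "What resource limits do you face?"
        answer = user_llm(history, question)          # informal domain knowledge, e.g. "at most 40 labor hours"
        history += [question, answer]
        # Assumed extraction call: turn the latest answer into formal model pieces.
        extracted = assistant_llm(history, model, extract=True)  # e.g. {"constraint": "x1 + 2*x2 <= 40"}
        if "objective" in extracted:
            model["objective"] = extracted["objective"]
        if "constraint" in extracted:
            model["constraints"].append(extracted["constraint"])
        if model["objective"] and len(model["constraints"]) >= 2:
            break
    return model
```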
Source: Conversation Architecture Structure, Design Frameworks
Related concepts in this collection
- Why do language models fail in gradually revealed conversations? Explores why LLMs perform 39% worse when instructions arrive incrementally rather than upfront, and whether they can recover from early mistakes in multi-turn dialogue. Connection: STORM reframes premature assumptions as failures to track intent maturation.
- Why can't advanced AI models take initiative in conversation? Despite extraordinary capability in answering and reasoning, LLMs fundamentally cannot initiate, redirect, or guide exchanges. Understanding this gap, and whether it's fixable, matters for building AI that truly collaborates rather than merely responds. Connection: the gulf of envisioning is the user-side complement to the AI-side passivity problem.
- Which clarifying questions actually improve user satisfaction? Not all clarification helps equally. This explores whether asking users to rephrase their needs works as well as asking targeted questions about specific information gaps. Connection: clarification is the bridge across the gulf of envisioning.
- Why do language models lose performance in longer conversations? Does multi-turn degradation stem from fundamental model limitations, or from misalignment between what users mean and what models assume? Understanding the root cause could guide better solutions. Connection: the intent alignment gap connects directly to intent maturation.
- Why do AI agents misalign with what users actually want? UserBench explores how often AI models fully understand user intent across multi-turn interactions. The study reveals that human communication is underspecified, incremental, and indirect — traits that challenge current models to actively clarify goals. Connection: quantifies the gulf of envisioning's consequences.
- Can models identify what information they actually need? When a reasoning task is missing a key piece of information, can language models recognize what's absent and ask the right clarifying question? QuestBench tests this capability directly. Connection: QuestBench reveals models cannot even identify what information is missing (40-50% accuracy), so they cannot help users mature underspecified intent.
- Why do reasoning models overthink ill-posed questions? Explores why models trained for extended reasoning produce drastically longer, less useful responses to unanswerable questions, and whether this represents a fixable training deficit or an inherent limitation. Connection: when users provide incomplete intent, reasoning models overthink rather than recognizing the gap and asking for clarification.
- Why do users drift away from their original information need? When users know their knowledge is incomplete but cannot articulate what's missing, do they unintentionally shift topics? And can real-time systems detect this drift? Connection: ASK is the upstream cognitive cause of the gulf: the user's knowledge state is anomalous in a way that prevents intent articulation, producing the topic drift that the gulf predicts.
Original note title: intent formation is a continuous maturation process, not a binary state — the gulf of envisioning means users cannot formulate what they want while AI cannot help them evolve their intent.