Psychology and Social Cognition · Language Understanding and Pragmatics · Conversational AI Systems

How do users actually form intent when prompting AI systems?

Users face a 'gulf of envisioning'—they must simultaneously imagine possibilities and express them to language models. This cognitive gap creates breakdowns not from AI incapability but from users struggling to articulate what they truly need.

Note · 2026-02-22 · sourced from Conversation Architecture Structure
Why do AI conversations reliably break down after multiple turns? · What kind of thing is an LLM really? · How should researchers navigate LLM reasoning research?

The STORM framework names a fundamental gap in human-AI interaction: the "gulf of envisioning." Unlike conventional interfaces with predictable affordances, language models require users to simultaneously envision possibilities AND express them. This cognitive difficulty produces communication breakdowns — not because the AI is incapable, but because the user cannot articulate a prompt that captures what they actually need.

The deeper formalization is that human intent formation involves progressive constraint resolution with fluctuating stability intervals and distinct structural signaling patterns. Intent is not binary (present or absent, clear or ambiguous). It MATURES through interaction — starting vague, acquiring constraints, stabilizing, sometimes destabilizing when new information arrives, then reconsolidating. Current evaluation methods fail because they: (1) treat intent as binary, (2) lack frameworks for temporal coherence, and (3) overlook structural signals within expressions.
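
As a rough illustration (not a formalism from the STORM paper), intent maturation can be sketched as a constraint set plus a stability score that rises as constraints accumulate and drops when new information invalidates one. The phase labels, increments, and threshold below are assumptions made for the sketch.

```python
# Hypothetical sketch of intent-as-maturation rather than intent-as-binary.
# Phase names, stability increments, and the 0.8 threshold are illustrative assumptions.
from dataclasses import dataclass, field
from enum import Enum


class Phase(Enum):
    VAGUE = "vague"
    ACQUIRING = "acquiring constraints"
    STABLE = "stable"
    DESTABILIZED = "destabilized"


@dataclass
class IntentState:
    constraints: dict = field(default_factory=dict)   # e.g. {"budget": "under $500"}
    stability: float = 0.0                             # 0 = fully open, 1 = settled
    phase: Phase = Phase.VAGUE

    def add_constraint(self, name: str, value: str) -> None:
        # Each resolved constraint nudges the intent toward stability.
        self.constraints[name] = value
        self.stability = min(1.0, self.stability + 0.2)
        self.phase = Phase.STABLE if self.stability >= 0.8 else Phase.ACQUIRING

    def destabilize(self, name: str) -> None:
        # New information can invalidate a settled constraint, forcing reconsolidation.
        self.constraints.pop(name, None)
        self.stability = max(0.0, self.stability - 0.4)
        self.phase = Phase.DESTABILIZED
```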

STORM models this through asymmetric information dynamics: UserLLM has full access to internal states (preferences, emotions, background) while AgentLLM has only observable dialogue history. This asymmetry mirrors real human-AI interaction — the AI cannot access the user's unstated context, unresolved preferences, or evolving understanding of their own needs.
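
A minimal sketch of this asymmetry, assuming nothing about STORM's actual API: the class names, fields, and canned replies below are invented for illustration. The simulated user carries private state the agent never sees, while the agent conditions only on the observable transcript.

```python
# Illustrative only: the information asymmetry, not STORM's implementation.
from dataclasses import dataclass, field


@dataclass
class UserLLM:
    preferences: dict                     # private: never exposed to the agent
    emotions: str                         # private
    background: str                       # private
    inner_thoughts: list = field(default_factory=list)

    def respond(self, agent_utterance: str) -> str:
        # A real simulator would prompt an LLM with the private state plus the history;
        # here we only record an inner thought to show what stays hidden.
        self.inner_thoughts.append("I still haven't said what I actually care about.")
        return "Sort of, but that's not quite it."


@dataclass
class AgentLLM:
    history: list = field(default_factory=list)   # (speaker, text) pairs: all it can see

    def respond(self, user_utterance: str) -> str:
        self.history.append(("user", user_utterance))
        reply = "Could you tell me which of those matters most to you?"
        self.history.append(("agent", reply))
        return reply
```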

The novel Clarify metric measures whether agent responses genuinely improve users' understanding of their own needs — assessed through analysis of simulated user inner thoughts rather than external expressions. This captures an invisible cognitive process: a user may SAY "thanks, that's helpful" while internally remaining confused about what they actually want.
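
One way such a metric could be operationalized (an assumed scheme, not the paper's exact rubric): score the change in the simulated user's inner thoughts across an agent turn with an LLM judge, ignoring the outward reply entirely.

```python
# Assumed operationalization of a Clarify-style score; the judge prompt is invented.
def clarify_score(inner_before: str, inner_after: str, judge) -> float:
    """Return 1.0 if the user's self-understanding improved across the turn, else 0.0.

    `judge` is any callable that wraps an LLM and returns its text completion.
    """
    prompt = (
        "Inner thought BEFORE the assistant's reply:\n"
        f"{inner_before}\n\n"
        "Inner thought AFTER the assistant's reply:\n"
        f"{inner_after}\n\n"
        "Does the user now understand their own need better than before? Answer YES or NO."
    )
    return 1.0 if judge(prompt).strip().upper().startswith("YES") else 0.0
```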

Building on the question of why language models fail in gradually revealed conversations, the STORM framing reframes these breakdowns not as pure AI failure but as joint user-AI failure. The user's expressions contain structural signals — stylistic choices, implicit assumptions, cultural markers — that reflect what Wittgenstein called contextual embeddedness within "forms of life." Current systems cannot access these embedded cues.

The practical upshot is three complementary evaluation dimensions that together capture what single-metric evaluation misses: satisfaction derived from inner thoughts (internal contentment), clarification effectiveness (the Clarify metric), and Satisfaction-Seeking Actions (SSA, a composite of both).
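
Put together, a per-turn evaluation record might look like the sketch below. Treating SSA as a weighted combination of the other two signals is an assumption, since the note states only that SSA is a composite of both.

```python
# Sketch of aggregating the three dimensions per turn; the SSA weighting is assumed.
from dataclasses import dataclass


@dataclass
class TurnEvaluation:
    inner_satisfaction: float   # scored from simulated inner thoughts, in [0, 1]
    clarify: float              # did the turn improve the user's self-understanding, in [0, 1]

    def ssa(self, weight: float = 0.5) -> float:
        # Satisfaction-Seeking Actions as a simple weighted composite (assumed form).
        return weight * self.inner_satisfaction + (1 - weight) * self.clarify
```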

The original gulf of envisioning paper (Zamfirescu-Pereira et al., 2023) defines three specific misalignment gaps: (1) the capability gap — not knowing what the task should be (what can the LLM even do?); (2) the instruction gap — not knowing how best to instruct the LLM about goals (prompt engineering difficulty); (3) the intentionality gap — not knowing what to expect of the LLM's output in meeting the goal. The paper notes that traditional HCI inadvertently bypassed intention formation because conventional interfaces have fixed command vocabularies — clicking "Bold" doesn't require envisioning what boldness means. LLM interfaces require envisioning at every step. The iterative process resembles a "20-questions" or "Hot or Cold" guessing game that can be inefficient for longer outputs and can lead to local minima within the solution space. Further, humans show a fixation on initial examples that interferes with exploring alternative solutions.

UserBench (2025) quantifies the downstream consequences: models provide answers that fully align with ALL user intents only 20% of the time, and even the best models uncover fewer than 30% of user preferences through active interaction. The three core traits of user communication — underspecification, incrementality, indirectness — are not edge cases but the default condition.

A concrete domain-specific validation comes from LP (linear programming) dialogue research: individuals without specialized mathematical backgrounds "often struggle to formulate the appropriate linear models for their specific problem instances." The proposed solution — a two-agent synthetic dialogue system where one agent simulates the conversational assistant and the other emulates the user — is specifically designed to elicit information the user possesses but cannot organize into a formal structure. This is a clean instance of the gulf of envisioning: the user has the problem knowledge (constraints, objectives) but literally cannot state it as a model without conversational assistance. Mathematical problem formulation thus serves as a particularly transparent example of intent maturation — the user's "intent" (to solve their LP problem) is real but unformulable without guided dialogue.
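
A schematic of the two-agent elicitation loop, with an assumed spec schema and hypothetical callables standing in for the two LLM roles; the closing linprog call only demonstrates that the elicited structure is immediately solvable.

```python
# Schematic two-agent elicitation loop; `assistant` and `user` are hypothetical
# callables wrapping LLMs, and the spec schema is an assumption for illustration.
from scipy.optimize import linprog


def elicit_lp(assistant, user, max_turns: int = 10) -> dict:
    spec = {"objective": None, "constraints": []}       # knowledge the user holds informally
    question = "What quantity are you trying to maximize or minimize?"
    for _ in range(max_turns):
        answer = user(question)                         # user answers in plain language
        spec, question, done = assistant(spec, answer)  # assistant updates the formal spec
        if done:
            return spec
    return spec


# Once elicited, e.g. "minimize 2x + 3y subject to x + y >= 10, x >= 0, y >= 0",
# the formal model is solvable directly (x + y >= 10 rewritten as -x - y <= -10):
result = linprog(c=[2, 3], A_ub=[[-1, -1]], b_ub=[-10], bounds=[(0, None), (0, None)])
print(result.x, result.fun)   # the plan the user could not have written down unaided
```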


Source: Conversation Architecture Structure, Design Frameworks


intent formation is a continuous maturation process, not a binary state — the gulf of envisioning means users cannot formulate what they want, and the AI cannot help them evolve their intent