How should users control systems with unpredictable outputs?
When generative AI produces different outputs from identical inputs, how do interaction design principles help users maintain control and develop effective mental models for stochastic systems?
Generative AI introduces what Nielsen calls "intent-based outcome specification" — users specify what they want, often in natural language, but not how it should be produced. The distinguishing characteristic: the system generates artifacts as outputs, and those outputs may vary in character or quality even when the user's input does not change. Weisz et al. describe this as "generative variability."
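A minimal sketch of what generative variability means in practice: the same prompt, sampled repeatedly from a model decoding at non-zero temperature, yields different artifacts. The `generate` function below is a hypothetical stand-in for whatever model API is in use; it simulates stochastic decoding rather than calling a real model.

```python
import random
from collections import Counter

def generate(prompt: str, temperature: float = 0.8) -> str:
    """Hypothetical stand-in for an LLM call.

    A real implementation would pass the temperature to a model API;
    here stochastic decoding is simulated by sampling from a fixed
    set of plausible completions.
    """
    candidates = [
        "A minimalist logo built from a single brushstroke.",
        "A geometric logo of overlapping circles.",
        "A hand-drawn logo with rough, organic edges.",
    ]
    if temperature == 0:
        return candidates[0]  # near-deterministic decoding
    return random.choice(candidates)

prompt = "Propose a logo concept for a coffee shop."
outputs = [generate(prompt) for _ in range(5)]

# Identical input, variable output: the property the principles below
# ask designers to treat as a feature rather than a defect.
print(Counter(outputs))
```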
This creates an "algorithmic experience" (Alvarado & Waern) that raises fundamental design questions:
- How should users control a system whose outputs they can't predict?
- What counts as a "mistake" when the same input produces different results?
- Does the system violate consistency heuristics when results are difficult to replicate?
- How do users develop effective mental models for a stochastic system?
Six design principles address the challenge:
- Design Responsibly — new or amplified ethical issues from generative nature
- Design for Mental Models — users need to understand what the system can and cannot do
- Design for Appropriate Trust & Reliance — calibrated trust despite variability
- Design for Generative Variability — the distinguishing principle; embrace variation as feature
- Design for Co-Creation — users and AI as collaborative partners
- Design for Imperfection — outputs will be imperfect; design for refinement not perfection
These principles serve two distinct user goals: (1) optimization, producing output that satisfies task-specific criteria, and (2) exploration, using the generative process to discover possibilities, seek inspiration, and support ideation. The same system needs to support both modes.
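A rough sketch of how these two goals translate into different sampling strategies. The `generate` helper is again a hypothetical stand-in for a model call, and the scoring function is an arbitrary example: exploration surfaces many diverse samples for the user to browse, while optimization scores candidates against task-specific criteria and keeps the best (a best-of-N pattern).

```python
import random

def generate(prompt: str, temperature: float) -> str:
    """Hypothetical stand-in for an LLM call (see the earlier sketch)."""
    pool = [
        "Tagline: Brewed for slow mornings.",
        "Tagline: Coffee, but make it ritual.",
        "Tagline: Your third place, first thing.",
        "Tagline: Small batch. Big mornings.",
    ]
    return random.choice(pool)

def explore(prompt: str, n: int = 8) -> list:
    # Exploration: surface many diverse, higher-temperature samples
    # so the user can browse the possibility space for inspiration.
    return [generate(prompt, temperature=1.0) for _ in range(n)]

def optimize(prompt: str, score, n: int = 8) -> str:
    # Optimization: best-of-N selection against task-specific criteria
    # supplied by the caller as a scoring function.
    candidates = [generate(prompt, temperature=0.7) for _ in range(n)]
    return max(candidates, key=score)

prompt = "Write a tagline for a coffee shop."
ideas = explore(prompt)                            # the user browses these
best = optimize(prompt, score=lambda s: -len(s))   # e.g. prefer the shortest
print("\n".join(ideas))
print("best:", best)
```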
Users must develop new skills to work with generative variability rather than against it, including prompt engineering, which is "typically informal and relies on trial-and-error." As "Why can't users articulate what they want from AI?" explores, generative variability compounds the design challenge: users must envision both their intent and how the stochastic system might interpret it.
Existing human-AI design guidelines fall short for generative AI: they don't cover generative use cases, don't account for generative variability, and don't address the ethical issues that the generative nature amplifies.
Source: Design Frameworks
Related concepts in this collection
- Why can't users articulate what they want from AI? Explores the cognitive gap between imagining possibilities and expressing them as prompts, and why language interfaces create a harder envisioning task than traditional UI affordances. Connection: generative variability compounds the envisioning challenge.
- Can prompt optimization teach models knowledge they lack? Explores whether sophisticated prompting techniques can inject new domain knowledge into language models, or whether they are limited to activating existing training knowledge. Connection: prompt engineering as a user skill has hard limits.
- Can we detect when language models confabulate? Current uncertainty metrics fail to catch inconsistent outputs that look confident; could measuring semantic divergence across samples reveal confabulation signals that token-level metrics miss? Connection: variability at the semantic level is measurable.
- Why do AI agents misalign with what users actually want? UserBench explores how often AI models fully understand user intent across multi-turn interactions. The study reveals that human communication is underspecified, incremental, and indirect, traits that challenge current models to actively clarify goals. Connection: the intent-specification problem, quantified.
- Why can't advanced AI models take initiative in conversation? Despite extraordinary capability in answering and reasoning, LLMs fundamentally cannot initiate, redirect, or guide exchanges; understanding this gap, and whether it is fixable, matters for building AI that truly collaborates rather than merely responds. Connection: generative variability intensifies the passivity problem. When outputs vary unpredictably, users need more guidance in navigating the output space, but passive models that only respond cannot help users develop the intent-refinement strategies that variability demands.
- Do generated interfaces outperform text-based chat for most tasks? Explores whether LLMs should create interactive UIs instead of text responses, and under what conditions users prefer dynamic interfaces to traditional conversational chat. Connection: generative interfaces are a structural response to generative variability; by creating task-specific UIs rather than text blocks, they reduce the cognitive burden of navigating variable outputs.
Original note title: generative variability creates a new interaction paradigm where users specify intent not method and outputs vary unpredictably