How does AI context differ from conventional software context?
Explores whether the ephemeral, session-by-session nature of AI context requires fundamentally different design approaches than the stable interfaces users internalize in traditional software.
A spreadsheet's context is its rows, columns, formulas, and toolbar. A user learns this context once and operates within it for years. The context is fixed across sessions, identical across users, persistent across uses. Software UX practice evolved within this assumption: design a stable context users can internalize, then design interactions within that context. Information architecture, navigation, mental models — all presuppose a fixed substrate.
AI changes this substrate. The context of an AI interaction is what is in the model's working window at the moment of generation: prompt, system instructions, retrieved documents, conversation history, persistent memory if any, tool outputs. Each of these can change between turns. The context for turn N is not the context for turn N+1. The user cannot internalize the context the way they internalize a UI, because the context is being constructed and reconstructed in real time, often invisibly.
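The per-turn assembly can be made concrete with a minimal sketch. Everything below is hypothetical: `ContextWindow`, `build_context`, and the stand-in retriever are illustrative names, not any real API. The point is only that the window is rebuilt from mutable parts on every turn, so turn N and turn N+1 never see quite the same context.

```python
from dataclasses import dataclass, field

@dataclass
class ContextWindow:
    """One turn's working window, rebuilt at generation time."""
    system_prompt: str
    history: list[str] = field(default_factory=list)
    retrieved_docs: list[str] = field(default_factory=list)
    tool_outputs: list[str] = field(default_factory=list)

def build_context(system_prompt, history, retrieve, call_tools):
    """Assemble the window for the current turn from mutable parts."""
    latest = history[-1] if history else ""
    return ContextWindow(
        system_prompt=system_prompt,      # relatively stable
        history=list(history),            # grows every turn
        retrieved_docs=retrieve(latest),  # changes with the query
        tool_outputs=call_tools(latest),  # changes with the request
    )

retrieve = lambda q: [f"doc for: {q}"]   # stand-in retriever
no_tools = lambda q: []                  # stand-in tool runner

history = ["How do I sort a list?"]
ctx_n = build_context("You are helpful.", history, retrieve, no_tools)
history.append("Now reverse it.")
ctx_n1 = build_context("You are helpful.", history, retrieve, no_tools)

# Same system prompt, but the rest of the window has shifted.
assert ctx_n.retrieved_docs != ctx_n1.retrieved_docs
assert len(ctx_n1.history) == len(ctx_n.history) + 1
```

The sketch also shows why the user cannot internalize the context: two of its four components changed between turns without any visible interface change.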
This has three design consequences. First, mental models built on a stable substrate fail: users who expect "the AI" to remember things consistently are operating on a software-era assumption that no longer holds. Second, the unit of design shifts from "the interface" to "the context as it evolves"; context engineering, not navigation or layout, becomes the design substrate. Third, the design surface includes things users cannot see (system prompts, retrieved chunks, hidden state), so making the context legible to users becomes a design problem in its own right.
Context-engineering tools are emerging as the practitioner response: prompt structure, memory management, retrieval orchestration, tool integration. These are not extensions of UI; they are a different design discipline whose object is the model's evolving working window rather than the user's screen. The discipline has no analog in conventional UX, which means existing UX competencies do not transpose without translation. Designers entering AI work need to learn what they are designing in addition to learning new patterns.
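One recurring concern of this discipline, fitting an ever-growing history into a fixed token budget, can be sketched as follows. `fit_to_budget` and the whitespace token counter are hypothetical stand-ins; real systems use model-specific tokenizers and often summarize evicted turns rather than drop them. The sketch shows why memory is a design decision, not a given: something must be chosen for eviction every turn.

```python
def fit_to_budget(system_prompt, history, budget_tokens, count_tokens):
    """Keep the system prompt plus as many recent turns as fit.

    Walks the history newest-first, accumulating token cost, and
    stops at the first turn that would exceed the budget. Older
    turns are silently dropped: the 'memory' the user experiences.
    """
    used = count_tokens(system_prompt)
    kept = []
    for turn in reversed(history):       # newest first
        cost = count_tokens(turn)
        if used + cost > budget_tokens:
            break
        kept.append(turn)
        used += cost
    return list(reversed(kept))          # restore chronological order

# Crude token counter (whitespace words), for illustration only.
toks = lambda s: len(s.split())

hist = ["turn one is quite long indeed", "turn two", "turn three"]
kept = fit_to_budget("sys", hist, budget_tokens=6, count_tokens=toks)
# Budget 6 minus 1 for "sys" leaves room for the two newest turns;
# the six-token oldest turn is evicted.
assert kept == ["turn two", "turn three"]
```

Whether to evict, summarize, or pin a turn is exactly the kind of choice that has no analog in conventional UX, where nothing in the interface disappears on its own.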
The strongest counterargument: a sufficiently good agent will hide the context and present the user with a stable interface. This is possible at the margin, but stability requires either constraining the AI's capability (defeating its flexibility) or solving every memory and consistency problem that has so far resisted solution. The mutable context is not a temporary state of the technology; it is a structural property of generative interaction.
Source: AI Design Topics
Related concepts in this collection
- Is the LLM a tool or a new form of intelligence itself? Does framing AI as merely delivering pre-existing intelligence miss what's actually happening? This explores whether the model itself constitutes a fundamentally new intelligence-medium with distinct cultural effects. (The medium-theoretic claim from which context-as-substrate follows.)
- Why does AI output change with every prompt and context? Explores whether the variability of AI-generated intelligence across contexts and audiences is a fundamental feature or a flaw to be fixed. Examines what this mutability means for how we should evaluate and understand AI systems. (The companion mutability claim at the output level.)
- Why don't conversational AI systems mirror their users' word choices? Explores whether current dialogue models exhibit lexical entrainment (the human tendency to align vocabulary with conversation partners) and what's needed to bridge this gap in AI communication. (A specific consequence of mutable context for dialogue.)
Original note title
Context in AI is mutable, dynamic, and ephemeral, unlike the fixed, stable context conventional software provides.