Why do users fail with AI interfaces designed like conversations?
Explores whether AI interface design that mimics human conversation misleads users into deploying communication skills that don't match how AI actually works, creating predictable failures.
Design practice has a clear concept of user competencies: the skills and habits users bring to an interaction, accumulated from prior interactions of a similar kind. Good design accommodates these competencies: it does not force the user to learn unfamiliar patterns, it leverages habits the user already has, and it anticipates what the user will assume.
For AI interfaces, the user's relevant competencies are language competencies, and those competencies were built entirely through communicative use. The user has spent a lifetime addressing other people, anticipating their understanding, and calibrating language to relational stakes. These are competencies in a communicative operation, not in a string-production operation. When the user encounters an AI interface, the interface presents itself as language (chat, conversation, dialogue), and the user deploys their communicative competencies, because that is what they have.
The design implicitly assumes the user's competencies will work. They will not, at least not in the way a competent human interlocutor would meet them. The user addresses the AI with anticipation, with relational calibration, with audience-modeling. The AI generates a response that has the surface form of an addressed reply but is not actually one. The user's competencies, deployed appropriately, return outcomes calibrated to an addressing partner who does not exist. The interaction breaks in ways the user cannot easily diagnose, because the breakage is at a level the user does not know they are operating on.
This is the design version of "Are language models and human speakers doing the same thing?" The ML community's failure to distinguish the operations becomes a design failure when the interface invites communicative competencies into a non-communicative system. The user is not at fault; the design imported a frame that does not apply. The "user error" is the design error one level up.
The diagnostic implication for AI design is that interfaces should either (a) make the non-communicative nature of the system visible enough that user competencies do not auto-deploy, or (b) actually deliver the communicative behaviors user competencies are calibrated to. Most current designs do neither — they use communicative interface conventions (chat windows, message bubbles, conversational tone) without delivering communicative behavior, which is the worst combination from a competency-fit perspective.
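As a deliberately minimal illustration of option (a), the sketch below contrasts chat-style framing with explicit generation framing. Everything in it is hypothetical: the names (PresentationMode, frameOutput) and the framing copy are invented for this sketch, not drawn from any real interface library.

```typescript
// Hypothetical sketch: framing the same model output either as a
// conversational reply (the convention the essay criticizes) or as
// explicitly generated text (design option (a) above). All names invented.

type PresentationMode = "chat" | "generator";

interface ModelOutput {
  text: string;
}

function frameOutput(output: ModelOutput, mode: PresentationMode): string {
  if (mode === "chat") {
    // Conventional framing: message-bubble presentation offers the text as
    // an addressed reply, which invites the user's communicative competencies.
    return output.text;
  }
  // Option (a): label the text as generated rather than addressed, so the
  // user's conversational habits are less likely to auto-deploy.
  return `Generated text (not an addressed reply):\n${output.text}`;
}

console.log(frameOutput({ text: "Here is a draft summary of your notes." }, "generator"));
```

The point of the sketch is not the string prefix itself but the design decision it stands in for: presentation choices, not model changes, determine which of the user's competencies the interface recruits.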
The strongest counterargument: users adapt; they will learn what AI is and adjust their competencies accordingly. This is possible at the limit, but adaptation is slow, partial, and never reaches full recalibration. In the meantime, design has to handle the mismatch, not wait for users to dissolve it.
Source: Communication vs Language
Related concepts in this collection
- "Are language models and human speakers doing the same thing?" Does treating LLM output and human communication as equivalent operations mask fundamental differences in how they work? This distinction shapes how we assess AI capabilities and risks. (Relation: the operational claim this note is the design implication of.)
- "Why do improvements in AI conversation not increase user satisfaction?" If conversational AI gets better, shouldn't users be happier? This explores why gains in fidelity paradoxically raise expectations faster than satisfaction, keeping the satisfaction gap constant. (Relation: a companion design paradox produced by the same competency mismatch.)
- "How does AI context differ from conventional software context?" Explores whether the ephemeral, session-by-session nature of AI context requires fundamentally different design approaches than the stable interfaces users internalize in traditional software. (Relation: an adjacent design challenge that compounds with this one.)
- "Do AI-assisted outputs fool users about their own skills?" When people use AI tools to produce high-quality work, do they mistakenly believe they personally possess the skills that generated it? This matters because such misattribution could mask genuine skill loss and prevent corrective action. (Relation: the competency mismatch produces the LLM Fallacy, in which users deploy communicative competencies, get communicative-looking output back, and attribute the output quality to the competencies they deployed.)
- "How do AI tools trick users into overestimating their own skills?" When people use language models to help with work, what system-level properties create false confidence in their own competence? Understanding this matters for recognizing hidden skill gaps. (Relation: attribution ambiguity is amplified when the interface invites communicative competencies; the user's genuine communicative skill mixes with the system's generation, making the boundary especially hard to trace.)
Original note title: user competencies in language come from communicative use — design that ignores this misframes the user