
Why do users fail with AI interfaces designed like conversations?

Explores whether AI interface design that mimics human conversation misleads users into deploying communication skills that don't match how AI actually works, creating predictable failures.

Note · 2026-04-14

Design practice has a clear concept of user competencies: the skills and habits users bring to an interaction, accumulated from prior interactions of a similar kind. Good design accommodates these competencies — it does not force the user to learn unfamiliar patterns, it leverages habits the user already has, and it anticipates what the user will assume.

For AI interfaces, the user's relevant competencies are language competencies — and the user's language competencies were all built through communicative use. The user has spent a lifetime addressing other people, anticipating their understanding, calibrating language to relational stakes. These are competencies in a communicative operation, not in a string-production operation. When the user encounters an AI interface, the interface presents itself as language — chat, conversation, dialogue — and the user deploys their communicative competencies because that is what they have.

The design implicitly assumes the user's competencies will work. They will not — at least not in the way a competent human interlocutor would meet them. The user addresses the AI with anticipation, with relational calibration, with audience-modeling. The AI generates a response that has the surface form of an addressed reply but is not actually one. The user's competencies, deployed appropriately, return outcomes calibrated to a non-existent addressing partner. The interaction breaks in ways the user cannot easily diagnose, because the breakage is at a level the user does not know they are operating on.

This is the design version of "Are language models and human speakers doing the same thing?". The ML community's failure to distinguish the operations becomes a design failure when the interface invites communicative competencies into a non-communicative system. The user is not at fault; the design imported a frame that does not apply. The "user error" is the design error one level up.

The diagnostic implication for AI design is that interfaces should either (a) make the non-communicative nature of the system visible enough that user competencies do not auto-deploy, or (b) actually deliver the communicative behaviors user competencies are calibrated to. Most current designs do neither — they use communicative interface conventions (chat windows, message bubbles, conversational tone) without delivering communicative behavior, which is the worst combination from a competency-fit perspective.
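The two design options and their failure modes form a 2x2: interface conventions on one axis, delivered behavior on the other. A minimal sketch of that space (the function and labels are illustrative, not from the source):

```python
# Illustrative 2x2 of competency fit (names are hypothetical, not the source's).
# Axis 1: does the interface use communicative conventions (chat bubbles,
#         conversational tone)?
# Axis 2: does the system actually deliver communicative behavior?

def competency_fit(communicative_conventions: bool,
                   communicative_behavior: bool) -> str:
    """Classify an AI interface by how user competencies will meet it."""
    if communicative_conventions and communicative_behavior:
        return "fit: conventions invite competencies the system can meet"
    if not communicative_conventions and not communicative_behavior:
        return "fit: competencies never auto-deploy, so no mismatch arises"
    if communicative_conventions and not communicative_behavior:
        return "worst case: conventions invite competencies the system cannot meet"
    return "safe but odd: system delivers more than the interface invites"

# Most current chat interfaces occupy the worst-case cell:
print(competency_fit(True, False))
```

The argument's claim is that only the diagonal cells are stable; current designs sit in the off-diagonal cell where conventions and behavior disagree.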

The strongest counterargument: users adapt; they will learn what AI is and adjust their competencies accordingly. Possible at the limit, but adaptation is slow, partial, and never reaches full re-calibration. In the meantime, design has to handle the mismatch, not wait for users to dissolve it.


Source: Communication vs Language

Original note title: user competencies in language come from communicative use — design that ignores this misframes the user