Psychology and Social Cognition · Conversational AI Systems

Why do improvements in AI conversation not increase user satisfaction?

If conversational AI gets better, shouldn't users be happier? This note explores why gains in fidelity paradoxically raise expectations faster than satisfaction, keeping the satisfaction gap constant.

Note · 2026-04-14

Conventional software-quality thinking predicts that better software produces more satisfied users: bugs decrease, features improve, performance rises, and satisfaction tracks the improvements. Conversational AI breaks this pattern. The better an AI conversation gets, the higher the user's expectations for the next one rise, and complaint volume tracks the gap between expectation and reality rather than the absolute quality of the interaction.
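
A toy simulation, sketched below, makes this dynamic concrete. Everything in it is an illustrative assumption rather than a measured model of users: satisfaction is taken to be the gap between delivered quality and an expectation that adapts toward whatever quality the user last experienced, at an assumed adaptation rate.

```python
# Toy model of the expectation gap: satisfaction tracks
# (quality - expectation), and expectation adapts toward the
# quality the user just experienced. The adaptation rate and
# quality numbers are illustrative assumptions, not data.

def simulate(quality_per_release, adaptation=0.8):
    expectation = 0.0
    rows = []
    for quality in quality_per_release:
        satisfaction = quality - expectation  # complaints track this gap
        rows.append((quality, expectation, satisfaction))
        # Users reset their bar toward what they just saw.
        expectation += adaptation * (quality - expectation)
    return rows

# Quality improves steadily with each release...
for q, e, s in simulate([0.5, 0.6, 0.7, 0.8, 0.9]):
    print(f"quality={q:.2f}  expectation={e:.2f}  satisfaction={s:+.2f}")
# ...but satisfaction flattens near a constant: each release
# raises the bar for the next one.
```

Under these assumptions, satisfaction settles at a roughly constant gap even as absolute quality climbs, which is exactly the pattern described above.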

The mechanism is specific to conversational interaction. Conversation activates a folk model of "talking with someone," and that folk model has rich expectations about what a competent interlocutor should do — remember prior turns, anticipate where the conversation is going, recognize subtext, hold position across topic shifts, register emotional tone. AI conversation that is good enough to activate the folk model triggers all these expectations at once. Improvements in any of them (better memory, better topic tracking) do not satisfy the folk model; they raise its expectations of the others.

This produces a paradox of fidelity. AI conversational quality must approach the folk-model threshold to be useful, because below the threshold users disengage. But once it crosses the threshold, every further improvement raises the bar for what counts as competent participation. The better the model gets, the larger the perceived gap between what it does and what a human interlocutor would do. Improvement does not converge on satisfaction.

Two design consequences follow. First, optimizing for measured conversational quality does not optimize for user satisfaction beyond the folk-model threshold; different metrics are needed, perhaps expectation-management metrics rather than fidelity metrics (a sketch follows below). Second, conversational design may need to deliberately suppress fidelity in some dimensions to keep the folk model from activating: an AI that is obviously not trying to seem human may produce more satisfied users than one that almost seems human but fails in detectable ways.
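
As one sketch of what an expectation-management metric could look like (the class, its names, and the EWMA baseline are all hypothetical, not an established method): score each session against a per-user rolling expectation rather than on an absolute scale.

```python
# Hypothetical expectation-adjusted score: rate each session against a
# per-user rolling baseline (an exponentially weighted moving average
# of past ratings) instead of on an absolute scale. Names and
# parameters are illustrative assumptions.

class ExpectationAdjustedScore:
    def __init__(self, decay: float = 0.7):
        self.decay = decay
        self.baseline: dict[str, float] = {}

    def record(self, user_id: str, rating: float) -> float:
        # First session scores zero: there is no expectation to beat yet.
        prior = self.baseline.get(user_id, rating)
        adjusted = rating - prior  # gap versus expectation
        # Move the baseline toward the newest rating.
        self.baseline[user_id] = self.decay * prior + (1 - self.decay) * rating
        return adjusted

scores = ExpectationAdjustedScore()
for rating in [3.0, 3.5, 4.0, 4.2, 4.2]:
    print(f"rating={rating:.1f}  adjusted={scores.record('u1', rating):+.2f}")
# Consistently high ratings drift toward zero adjusted score: the
# metric rewards beating expectations, not absolute fidelity.
```

The point of the sketch is the shape, not the specific estimator: a fidelity metric rewards the raw rating, while this one rewards the rating's margin over a bar that the product's own past performance keeps raising.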

This is adjacent to but distinct from "Does user satisfaction actually measure cognitive understanding?" That note is about misalignment between satisfaction and quality at a single point in time; this one is about expectations and quality moving together, such that satisfaction stays constant or declines as quality rises.

The strongest counterargument is that this is just normal hedonic adaptation. The reply: hedonic adaptation describes general satisfaction regressing to a baseline, while the fidelity paradox is specific. Folk-model activation produces a particular cluster of expectations that do not regress to baseline, because they are scaffolded by the conversational form itself.


Source: AI Design Topics

Original note title: the better AI conversation gets the more user expectations rise — the fidelity paradox of conversational design