Psychology and Social Cognition

Who bears responsibility when AI seems human-like?

Does human-likeness in AI come from how users perceive systems or how designers build them? Understanding this distinction clarifies where accountability lies when AI causes harm.

Note · 2026-04-18 · sourced from Human Centered Design

When an AI system appears human-like, the critical question is who is responsible for the human-likeness: the user who perceives it, or the designer who built it? This distinction, drawn from Shevlin (2025), separates two mechanisms that are routinely conflated:

Anthropomorphism — the user perceives human-like qualities in the system. The responsible party is the perceiver. This is a cognitive tendency: humans attribute beliefs, desires, and emotions to non-human entities. The human-likeness is in the eye of the beholder.

Anthropomimesis — the designer builds human-like features into the system. The responsible party is the developer. This is a design decision: features that mimic human appearance, behavior, or biological structure are deliberately or inadvertently engineered into the artifact.

Anthropomimesis operates along three dimensions: human appearance, human behavior, and human biological structure.

Shevlin further distinguishes weak anthropomimesis (surface-level features like voice and interface — e.g., ELIZA) from robust anthropomimesis (deep structural mimicry of human cognitive or biological processes).

The accountability implication is direct: when a human-like AI causes harm, the locus of responsibility depends on whether the human-likeness was designed in (anthropomimesis, so the designer is accountable) or perceived by the user (anthropomorphism, where the design may be neutral and the user's response is the mechanism). In practice the two often co-occur, since a human-like design elicits the perception of human-likeness, but distinguishing which mechanism is operating determines what an intervention should target: redesigning the system or educating the user.

Building on Why do people trust AI outputs they shouldn't?, the anthropomorphism/anthropomimesis distinction clarifies Rose-Frame's Trap 2 (mistaking intuition for reason): when anthropomimetic design features (conversational voice, empathetic phrasing) trigger anthropomorphic perception, they activate System 1 trust responses that bypass reflective evaluation. The cognitive trap is partly designed in, not purely a user failure.


Source: Human Centered Design · Paper: "Disambiguating Anthropomorphism and Anthropomimesis in Human-Robot Interaction"

Anthropomorphism and anthropomimesis assign responsibility for human-likeness to different parties: user perception and designer intention create distinct accountability structures.