Who bears responsibility when AI seems human-like?
Does human-likeness in AI come from how users perceive systems or how designers build them? Understanding this distinction clarifies where accountability lies when AI causes harm.
When an AI system appears human-like, the critical question is who is responsible for the human-likeness: the user who perceives it, or the designer who built it? This distinction, drawn from Shevlin (2025), separates two mechanisms that are routinely conflated:
Anthropomorphism — the user perceives human-like qualities in the system. The responsible party is the perceiver. This is a cognitive tendency: humans attribute beliefs, desires, and emotions to non-human entities. The human-likeness is in the eye of the beholder.
Anthropomimesis — the designer builds human-like features into the system. The responsible party is the developer. This is a design decision: features that mimic human appearance, behavior, or biological structure are deliberately or inadvertently engineered into the artifact.
Anthropomimesis operates on three dimensions:
- Aesthetic — physically observable qualities (form, appearance, embodiment)
- Behavioral — robot/AI behaviors that mimic human social and affective behaviors
- Substantive — mimicking biological structures (joints, muscle-like actuators)
Shevlin further distinguishes weak anthropomimesis (surface-level features like voice and interface — e.g., ELIZA) from robust anthropomimesis (deep structural mimicry of human cognitive or biological processes).
The accountability implication is direct: when a human-like AI causes harm, the locus of responsibility depends on whether the human-likeness was designed in (anthropomimesis, so the designer is accountable) or perceived by the user (anthropomorphism, where the design may be neutral and the user's response is the mechanism). In practice the two often co-occur, since a human-like design elicits perception of human-likeness, but identifying which mechanism is operating determines what an intervention should target: redesigning the system or educating the user.
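This routing logic can be made concrete with a minimal sketch. The Python below is illustrative only: the names (Dimension, Depth, HumanLikenessReport, intervention_targets) are invented for this note, not drawn from Shevlin's paper. It encodes the three dimensions and the weak/robust split, and shows how the locus of responsibility follows from whether human-likeness was designed in, perceived, or both.

```python
from dataclasses import dataclass
from enum import Enum, auto

class Dimension(Enum):
    """Shevlin's three dimensions of anthropomimesis."""
    AESTHETIC = auto()    # physically observable form, appearance, embodiment
    BEHAVIORAL = auto()   # mimicry of human social and affective behavior
    SUBSTANTIVE = auto()  # mimicry of biological structure (joints, actuators)

class Depth(Enum):
    """Weak = surface features (voice, interface); robust = deep structural mimicry."""
    WEAK = auto()
    ROBUST = auto()

@dataclass
class HumanLikenessReport:
    designed_features: dict[Dimension, Depth]  # anthropomimetic features engineered in
    user_perceived: bool                       # did the user attribute human qualities?

def intervention_targets(report: HumanLikenessReport) -> list[str]:
    """Route accountability per the anthropomorphism/anthropomimesis distinction.

    Designed-in human-likeness implicates the developer (anthropomimesis);
    user perception implicates the user-side mechanism (anthropomorphism).
    Both can apply at once, so both targets may be returned.
    """
    targets = []
    if report.designed_features:
        targets.append("designer: redesign the system (anthropomimesis)")
    if report.user_perceived:
        targets.append("user: educate the perceiver (anthropomorphism)")
    return targets

# Example: ELIZA-style weak behavioral mimicry that a user reads as empathy.
report = HumanLikenessReport(
    designed_features={Dimension.BEHAVIORAL: Depth.WEAK},
    user_perceived=True,
)
print(intervention_targets(report))
# Both mechanisms co-occur here, so both intervention targets are listed.
```

The point of the sketch is that the two mechanisms are independent inputs, not a single dial: a system can score on either, neither, or both, and each combination implies a different intervention.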
Building on "Why do people trust AI outputs they shouldn't?", the anthropomorphism/anthropomimesis distinction clarifies Rose-Frame's Trap 2 (mistaking intuition for reason): when anthropomimetic design features (conversational voice, empathetic phrasing) trigger anthropomorphic perception, they activate System 1 trust responses that bypass reflective evaluation. The cognitive trap is partly designed in, not purely a user failure.
Source: "Disambiguating Anthropomorphism and Anthropomimesis in Human-Robot Interaction" (Human Centered Design paper)
Related concepts in this collection
- Why do people trust AI outputs they shouldn't? — When do human cognitive shortcuts fail in AI interaction? Three compounding traps—treating statistical patterns as facts, mistaking fluency for understanding, and avoiding disagreement—may explain systematic overreliance across languages and contexts. Connection: anthropomimetic design features trigger anthropomorphic perception, activating System 1 cognitive traps.
- Does revealing AI identity help or hurt user trust? — Explores whether transparency about AI partners in interactions creates bias or enables better judgment. Matters because disclosure policies affect both user experience and fair evaluation of AI systems. Connection: disclosure effects may differ depending on whether the AI has anthropomimetic features: weak mimesis may produce weaker Eliza effects.
- How do people accidentally develop romantic bonds with AI? — Explores whether AI companionship emerges from deliberate romantic seeking or accidentally through functional use, and whether users adopt human relationship rituals like wedding rings and couple photos. Connection: unintentional companionship is anthropomorphism (user-side); the question is whether anthropomimetic features accelerate it.
Original note title: anthropomorphism and anthropomimesis assign responsibility for human-likeness to different parties — user perception versus designer intention create distinct accountability structures