Disambiguating Anthropomorphism and Anthropomimesis in Human-Robot Interaction

Paper · arXiv 2602.09287 · Published February 10, 2026
Human Centered Design · Emotions · Psychology · Users · Design Frameworks

Henry Shevlin


In this preliminary work, we offer an initial disambiguation of the theoretical concepts anthropomorphism and anthropomimesis in Human-Robot Interaction (HRI) and social robotics. We define anthropomorphism as users perceiving human-like qualities in robots, and anthropomimesis as robot developers designing human-like features into robots. This contribution aims to provide a clarification and exploration of these concepts for future HRI scholarship, particularly regarding the party responsible for human-like qualities—robot perceiver for anthropomorphism, and robot designer for anthropomimesis. We provide this contribution so that researchers can build on these disambiguated theoretical concepts for future robot design and evaluation.

Table 1: Disambiguation of anthropomorphism and anthropomimesis.

Concept: Anthropomorphism
Responsible party: Robot perceiver / user
Mechanism: Users perceive human-like qualities in robots
Theoretical background: Anthropomorphism describes “people attribut[ing] human characteristics to objects” [Fink, 2012]; “Describes the human tendency to see human-like shapes in the environment” [Złotowski et al., 2015]; the tendency for humans to attribute human qualities to non-human entities [Shevlin, 2025].

Concept: Anthropomimesis
Responsible party: Robot developer / designer
Mechanism: Robot developers design features which mimic human-like qualities
Theoretical background: Aesthetic anthropomimesis: mimetic robots are designed to resemble humans [Diamond et al., 2012]. Behavioural anthropomimesis: the design and implementation of human-like features in AI systems [Shevlin, 2025]. Substantive anthropomimesis: the anthropomimetic principle describes a robot that “imitates not just the human form, but also the biological structures and functions that enable and constrain perception and action—and describes the design, construction, and initial performance of such a robot” [Holland et al., 2006].
The word anthropomimesis derives in part from the Greek “mimesis”, meaning “imitation”. Shevlin [2025] distinguishes between anthropomorphic and anthropomimetic non-embodied AI systems: anthropomimetic systems are characterised by “features of the system itself”, whereas anthropomorphic systems are characterised by “user responses” to that system (p. 5). According to Shevlin [2025], the party responsible for the human-likeness in anthropomimesis is the system designer or developer, rather than the system perceiver or user, as in anthropomorphism (refer to Table 1 for further analysis). While Shevlin [2025] asserts that anthropomimetic features are consciously designed into non-embodied AI systems, we acknowledge that anthropomimetic human-like features may be consciously or unconsciously designed into a robot’s form (i.e., appearance and embodiment), behaviour, or interactions.

Anthropomimesis may be roughly described along three dimensions: aesthetic, behavioural, and substantive. Aesthetic anthropomimesis relates to physically observable qualities of the robot’s embodiment, form, and appearance (cf. [Phillips et al., 2018]). Behavioural anthropomimesis refers to robot behaviours which mimic human social and affective behaviours (cf. [Carpinella et al., 2017]). Substantive anthropomimesis refers to mimicking the biological structures of the human body, including joints and muscle-like actuators [Diamond et al., 2012].

Shevlin [2025] distinguishes between “weak” and “robust” anthropomimetic systems, defining a weak system as one where “‘humanlikeness’ is limited to surface-level features such as voice and interface”, and giving ELIZA as an example. ELIZA was an early chatbot system from the 1960s which used simple keyword identification and imitation to mimic human- and therapist-like conversation [Weizenbaum, 1966].

3.2 Theoretical Implications

This work set out to disambiguate the theoretical concepts of anthropomorphism and anthropomimesis in Human-Robot Interaction. To our knowledge, it is the first work attempting to do so. We aim for this work to contribute toward more robust theoretical definitions and discussions of anthropomorphism and anthropomimesis in HRI and social robotics. We hold that it is important for both robot designers and users to be aware of which party is responsible for the human-likeness in a given robot design or application: such awareness matters for creating useful and meaningful human-likeness in future robot designs, and for distributing accountability in situations where human-like robots cause harm. We hope that our theoretical contribution can aid future examination of these important factors.

3.3 Practical Implications

On a practical level, this work has implications for the design of HRI and social robots. HRI designers may wish to consider within their process when and how they are designing human-like features into their robots (i.e., anthropomimesis), and when and how any perceived human-like features result from user perceptions rather than design choices (i.e., anthropomorphism). Of course, both may be true at once; for instance, a human-like robot design may elicit perceptions of human-likeness. Nevertheless, being aware of the difference in responsible parties may help robot designers become more conscious of their human-like and non-human-like design choices, and help robot users become more aware of how their perceptions of a robot’s human-likeness or non-human-likeness are shaped.

4 Summary and Conclusion

This work set out to disambiguate the theoretical concepts of anthropomorphism and anthropomimesis in the field of Human-Robot Interaction (HRI). First, we discussed the definitions of and differences between the theoretical concepts of human-like robots, anthropomorphism, and anthropomimesis. Then, we discussed the limitations and implications of this theoretical disambiguation. This work aimed to contribute toward a more robust theoretical understanding of anthropomorphism (i.e., the perception of human-like qualities in robots) and anthropomimesis (i.e., the design of human-like qualities into robots). We highlighted the responsible party for each concept: the robot perceiver in the case of anthropomorphism, and the robot designer in the case of anthropomimesis. We offer this contribution in pursuit of greater conceptual clarity for the future design of HRI and social robots.