How do users mentally model dialogue agent partners?
Exploring what dimensions matter when people form impressions of machine dialogue partners—and whether competence, human-likeness, and flexibility all play equal roles in shaping user expectations and behavior.
The Partner Modelling Questionnaire (PMQ) validates a three-factor structure for how users perceive machine dialogue partners. The concept originates in psycholinguistics: people form mental representations of their dialogue partner's communicative and social capabilities, and these representations guide what they say, how they say it, and what tasks they entrust to that partner.
Factor 1 — Communicative competence and dependability (49% variance, α=0.88): Strongest items: competent/incompetent, dependable/unreliable, capable/incapable. This is the largest factor — nearly half the variance in how users model a dialogue agent is about whether it can do the job reliably.
Factor 2 — Human-likeness in communication (32% variance, α=0.80): Strongest items: human-like/machine-like, life-like/tool-like, warm/cold. Humans are the archetype for communication partners: even when talking to machines, people evaluate them against a human standard.
Factor 3 — Communicative flexibility (19% variance, α=0.72): Items: flexible/inflexible, interactive/stop-start, interpretive/literal, spontaneous/predetermined. This factor captures whether interacting with the agent feels like a live conversation or a scripted exchange.
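To make the factor structure concrete, here is a minimal scoring sketch in Python. The item keys, the 1-7 semantic-differential scale, and the toy data are assumptions for illustration; the published PMQ defines the exact items and anchors. Subscale scores are simply per-factor item means, and Cronbach's α (the reliability figures quoted above) is the standard internal-consistency estimate.

```python
from statistics import mean, pvariance

# Hypothetical item-to-factor mapping; the published PMQ defines the exact
# items and their semantic-differential anchors (e.g. competent/incompetent).
FACTORS = {
    "competence_dependability": ["competent", "dependable", "capable"],
    "human_likeness": ["human_like", "life_like", "warm"],
    "flexibility": ["flexible", "interactive", "interpretive", "spontaneous"],
}

def subscale_scores(response: dict[str, int]) -> dict[str, float]:
    """Score one respondent: mean rating per factor (assumes a 1-7 scale)."""
    return {factor: mean(response[item] for item in items)
            for factor, items in FACTORS.items()}

def cronbach_alpha(ratings: list[list[float]]) -> float:
    """Cronbach's alpha, where ratings[r][i] is respondent r's rating on item i.

    alpha = k/(k-1) * (1 - sum of item variances / variance of total scores)
    """
    k = len(ratings[0])
    item_vars = sum(pvariance([row[i] for row in ratings]) for i in range(k))
    total_var = pvariance([sum(row) for row in ratings])
    return k / (k - 1) * (1 - item_vars / total_var)

# Toy data: three respondents rating the Factor 1 items on a 1-7 scale.
factor1 = [[6, 6, 5], [3, 4, 3], [7, 6, 7]]
print(round(cronbach_alpha(factor1), 2))  # ~0.95: the items move together
```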
The definition of partner models — "an interlocutor's cognitive representation of beliefs about their dialogue partner's communicative ability, multidimensional, initially informed by experience and stereotypes, dynamically updated during dialogue" — positions this as the HCI equivalent of theory of mind. Users build, maintain, and update these models continuously.
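A toy sketch of that build-and-update loop, purely illustrative rather than anything specified in the PMQ work: beliefs on each factor start from prior expectations (experience and stereotypes) and drift toward in-dialogue evidence. The factor names reuse the scoring sketch above; the priors, the 0-1 scale, and the update rate are all assumptions.

```python
from dataclasses import dataclass, field

@dataclass
class PartnerModel:
    """Toy partner model: one belief per PMQ factor on a 0-1 scale."""
    beliefs: dict[str, float] = field(default_factory=lambda: {
        "competence_dependability": 0.5,  # assumed priors, standing in for
        "human_likeness": 0.3,            # experience and stereotypes
        "flexibility": 0.4,
    })
    rate: float = 0.2  # assumed learning rate for belief updates

    def observe(self, factor: str, evidence: float) -> None:
        """Nudge one belief toward evidence in [0, 1] (exponential moving average)."""
        self.beliefs[factor] += self.rate * (evidence - self.beliefs[factor])

model = PartnerModel()
model.observe("competence_dependability", 1.0)  # task completed reliably
model.observe("flexibility", 0.0)               # rigid, scripted reply
print(model.beliefs)  # competence belief rises, flexibility belief falls
```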
The practical implication: designing for perceived competence matters most (49% of variance), but human-likeness and flexibility are not negligible. An agent that is reliable but inflexible and machine-like will be perceived very differently from one that is reliable, warm, and spontaneous.
Tags: Psychology, Chatbots, Conversation
Related concepts in this collection
- What breaks when humans and AI models misunderstand each other?
  Explores whether misalignment in mutual theory of mind between humans and AI creates only communication problems or produces material consequences in autonomous action and collaboration. The MToM framework operates at the same level as partner models but adds AI-side modeling of the human.
- Can AI-generated personas build genuine empathy in product teams?
  This study explored whether prompt-engineered personas created in minutes could foster the same emotional and behavioral empathy as traditional user research; the findings reveal a surprising gap between understanding users and caring about their needs. Partner models may explain why cognitive empathy emerges without emotional empathy: users perceive competence (Factor 1) but not warmth (Factor 2).
Original note title: partner models for dialogue agents decompose into three factors — communicative competence, human-likeness, and communicative flexibility