Psychology and Social Cognition

How do users mentally model dialogue agent partners?

Exploring what dimensions matter when people form impressions of machine dialogue partners—and whether competence, human-likeness, and flexibility all play equal roles in shaping user expectations and behavior.

Note · 2026-02-22 · sourced from Psychology Chatbots Conversation
How do people come to trust conversational AI systems?

The Partner Modelling Questionnaire (PMQ) validates a three-factor structure for how users perceive machine dialogue partners. The concept originates in psycholinguistics: people form mental representations of their dialogue partner's communicative and social capabilities, and these representations guide what they say, how they say it, and what tasks they entrust to that partner.

Factor 1 — Communicative competence and dependability (49% variance, α=0.88): Strongest items: competent/incompetent, dependable/unreliable, capable/incapable. This is the largest factor — nearly half the variance in how users model a dialogue agent is about whether it can do the job reliably.

Factor 2 — Human-likeness in communication (32% variance, α=0.80): Strongest items: human-like/machine-like, life-like/tool-like, warm/cold. Humans act as the archetype for evaluating communication partners. Even when using machines, people evaluate against a human standard.

Factor 3 — Communicative flexibility (19% variance, α=0.72): Items: flexible/inflexible, interactive/stop-start, interpretive/literal, spontaneous/predetermined. This factor captures whether the agent feels like a living conversation or a scripted interaction.
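The three-factor structure above is, in effect, a small data model: each factor groups a set of semantic-differential items, and a respondent's partner model can be summarized as a mean rating per factor. A minimal scoring sketch, assuming hypothetical item names and a 7-point semantic-differential scale (this is an illustration, not the published PMQ scoring procedure):

```python
# Three-factor item grouping taken from the note above; item keys and the
# 1-7 rating scale are assumptions for illustration only.
PMQ_FACTORS = {
    "competence_dependability": ["competent", "dependable", "capable"],
    "human_likeness": ["human_like", "life_like", "warm"],
    "flexibility": ["flexible", "interactive", "interpretive", "spontaneous"],
}

def score_partner_model(ratings):
    """Collapse item-level ratings (item name -> 1..7) into one mean per factor."""
    return {
        factor: sum(ratings[item] for item in items) / len(items)
        for factor, items in PMQ_FACTORS.items()
    }

# Example: an agent rated as highly competent but only moderately human-like.
profile = score_partner_model({
    "competent": 6, "dependable": 5, "capable": 7,
    "human_like": 3, "life_like": 2, "warm": 4,
    "flexible": 4, "interactive": 5, "interpretive": 3, "spontaneous": 4,
})
```

Keeping the factor-to-item mapping in one dictionary makes it easy to swap in the actual validated item set later without touching the scoring logic.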

The definition of partner models — "an interlocutor's cognitive representation of beliefs about their dialogue partner's communicative ability, multidimensional, initially informed by experience and stereotypes, dynamically updated during dialogue" — positions this as the HCI equivalent of theory of mind. Users build, maintain, and update these models continuously.

The practical implication: designing for perceived competence matters most (49% of variance), but human-likeness and flexibility are not negligible. An agent that is reliable but inflexible and machine-like will be perceived very differently from one that is reliable, warm, and spontaneous.
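The α values quoted for each factor are Cronbach's alpha, the standard internal-consistency reliability for a set of scale items: α = k/(k−1) · (1 − Σσ²ᵢ/σ²ₜₒₜₐₗ), where k is the number of items, σ²ᵢ the variance of each item, and σ²ₜₒₜₐₗ the variance of respondents' summed scores. A pure-Python sketch of that formula (an illustration, not the authors' analysis code):

```python
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha for a list of respondents' item-rating lists.

    `responses[r][i]` is respondent r's rating on item i. Uses population
    variance throughout; any variance estimator works if used consistently.
    """
    k = len(responses[0])                          # number of items
    item_cols = list(zip(*responses))              # transpose to per-item columns
    sum_item_var = sum(pvariance(col) for col in item_cols)
    total_var = pvariance([sum(row) for row in responses])
    return k / (k - 1) * (1 - sum_item_var / total_var)
```

When every item moves in lockstep across respondents, alpha approaches 1; the PMQ factors' values of 0.72 to 0.88 indicate items that cohere well without being redundant.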


