Do more social cues always make AI feel more present?
Explores whether the quantity of social cues matters as much as their quality in triggering social responses to AI, and tests whether multiple weak cues can substitute for one strong one.
The MASA (Media Are Social Actors) paradigm establishes a structured framework for predicting when and why people respond socially to technology. Its core contribution: not all social cues are equal, and quality matters more than quantity.
Primary social cues — each is individually sufficient (but not necessary) to evoke medium-as-social-actor presence. Examples: voice, humanlike appearance, eye gaze. Any one of these can trigger social responding.
Secondary social cues — each is neither sufficient nor necessary. They contribute to social presence but cannot trigger it alone.
The quality > quantity principle (P6): the quality of cues (primary vs. secondary) plays a greater role in evoking social responses than their quantity. A single high-quality primary cue (e.g., a natural voice) outweighs multiple secondary cues stacked together.
This has direct design implications. A text-only chatbot with natural language capability possesses a primary cue (language as social signal) that may be sufficient for social-actor presence. Adding secondary visual cues (avatar, animation) may produce diminishing returns beyond the initial threshold.
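The sufficiency asymmetry in P6 can be made concrete with a toy model. This is a sketch, not anything from the MASA literature: the cue names, weights, cap, and threshold below are all invented for illustration. The point it encodes is that any single primary cue triggers presence, while secondary cues accumulate a score that saturates below the trigger threshold.

```python
# Toy model of MASA P6: primary cues are individually sufficient;
# secondary cues contribute but can never cross the threshold alone.
# All cue names, weights, and thresholds are illustrative assumptions.

PRIMARY_CUES = {"natural_voice", "natural_language", "humanlike_face", "eye_gaze"}
SECONDARY_CUES = {"avatar", "animation", "emoji", "sound_effects"}

THRESHOLD = 1.0
SECONDARY_WEIGHT = 0.2
SECONDARY_CAP = 0.9   # stacked secondary cues saturate below the threshold

def evokes_social_presence(cues: set[str]) -> bool:
    """True if the cue set is predicted to evoke social-actor presence."""
    if cues & PRIMARY_CUES:
        return True   # any one primary cue is sufficient on its own
    # secondary cues add up, but their total is capped below THRESHOLD
    score = min(len(cues & SECONDARY_CUES) * SECONDARY_WEIGHT, SECONDARY_CAP)
    return score >= THRESHOLD   # never reached by secondary cues alone

print(evokes_social_presence({"natural_voice"}))  # True: one primary cue
print(evokes_social_presence(SECONDARY_CUES))     # False: four stacked secondary cues
```

Note the design choice: secondary cues are modeled as a capped sum rather than simply weighted low, so no quantity of them can substitute for one primary cue, which is exactly what "neither sufficient nor necessary" requires.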
Two psychological mechanisms drive social responses, and MASA unifies them:
- Mindless anthropomorphism — automatic, script-driven application of social categories when social cues exceed a threshold. The original CASA mechanism.
- Mindful anthropomorphism — deliberate, reflective attribution of social qualities to technology. Users consciously perceive and respond to social affordances.
Both can operate simultaneously or independently (P8). This means designing for social presence requires attending to both automatic script activation AND reflective evaluation.
Individual differences modulate responses (P7) — perception of social potential varies by person and situation. What constitutes "enough" social cues for one user may be insufficient for another.
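P7 can be sketched in the same toy style, with the caveat that the user profiles and per-cue weights below are invented for illustration: the same cue set yields a different graded presence score depending on who is perceiving it.

```python
# Toy illustration of MASA P7: identical cues, different perceivers,
# different perceived social presence. Profiles and weights are hypothetical.

USER_CUE_WEIGHTS = {
    "ready_anthropomorphizer": {"avatar": 0.5, "animation": 0.25},
    "skeptic":                 {"avatar": 0.125, "animation": 0.125},
}

def perceived_presence(user: str, cues: set[str]) -> float:
    """Graded presence score: the sum of this user's weights for the cues shown."""
    weights = USER_CUE_WEIGHTS[user]
    return sum(weights.get(cue, 0.0) for cue in cues)

same_cues = {"avatar", "animation"}
print(perceived_presence("ready_anthropomorphizer", same_cues))  # 0.75
print(perceived_presence("skeptic", same_cues))                  # 0.25
```

The graded score, rather than a single fixed threshold, is what makes "enough cues for one user, insufficient for another" expressible at all.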
If machine agency exists on a spectrum rather than as a binary (see the related note below), social cue quality may interact with agency level: a cooperative-level agent with a primary social cue may trigger stronger social responding than a reactive-level agent with many secondary cues.
Source: Design Frameworks
Related concepts in this collection
- **Does machine agency exist on a spectrum rather than binary?** Rather than viewing AI as either autonomous or controlled, does machine agency operate across five distinct levels, from passive to cooperative? Understanding this spectrum matters because it shapes how users calibrate trust and control expectations. *Relation: agency level interacts with social cue quality.*
- **Do humans apply human-human scripts to AI interactions?** Does CASA theory correctly explain how people interact with media agents, or have decades of technology use created separate interaction scripts? Understanding which scripts drive behavior matters for AI design. *Relation: scripts are activated by social cues; cue quality determines which scripts.*
- **Can AI systems learn social norms without embodied experience?** Large language models exceed individual human accuracy at predicting collective social appropriateness judgments. Does this reveal that embodied experience is unnecessary for cultural competence, or do systematic AI failures point to the limits of statistical learning? *Relation: social competence may provide a primary social cue.*
- **Does warmth training make language models less reliable?** Explores whether training models for empathy and warmth creates a hidden trade-off that degrades accuracy on medical, factual, and safety-critical tasks, and whether standard safety tests catch it. *Relation: warmth as a social cue has reliability costs.*
- **Why do robots outperform chatbots in therapy despite identical language models?** This study tested whether better language generation explains therapeutic AI outcomes, or whether the delivery medium itself matters more. It finds that physical embodiment and structured interaction, not model capability, drive therapeutic adherence and outcomes. *Relation: embodiment provides primary social cues (physical presence, eye gaze) that text-only chatbots lack. The SAR therapeutic advantage may operate through MASA's quality > quantity principle: the robot's physical presence is a single high-quality primary cue, sufficient to evoke a social presence that stacks of text-based secondary cues cannot match.*
- **Does conversational style actually make AI more trustworthy?** Explores whether ChatGPT's conversational nature drives user trust through social activation rather than accuracy. This matters because it reveals whether trust signals reflect actual reliability or merely persuasive design. *Relation: conversationality may function as a primary social cue. Natural language interaction is individually sufficient to evoke social-actor presence, which explains why text-only chatbots still generate trust despite lacking visual or embodied cues.*
Original note title: social cue quality matters more than quantity for evoking AI social presence — primary cues are individually sufficient while secondary cues are not