Does machine agency exist on a spectrum rather than a binary?
Rather than viewing AI as either autonomous or controlled, does machine agency actually operate across five distinct levels from passive to cooperative? Understanding this spectrum matters because it shapes how users calibrate trust and control expectations.
The HAII framework (Human-AI Interaction) positions AI within a spectrum of machine agency rather than a binary human-or-machine categorization (see the sketch after this list):
- Passive — completely driven from outside (hammer)
- Semi-active — some self-acting aspects (record player)
- Reactive — feedback loops (thermostat-driven climate control)
- Proactive — self-activating programs (car stabilization)
- Cooperative — distributed, self-coordinating systems (smart homes)
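A minimal sketch of this spectrum as an ordered type, assuming a simple enum representation; the `AgencyLevel` name and numeric values are illustrative choices, not part of the HAII framework:

```python
from enum import IntEnum

class AgencyLevel(IntEnum):
    """Illustrative ordering of the HAII machine-agency spectrum (names are illustrative)."""
    PASSIVE = 1       # completely driven from outside (hammer)
    SEMI_ACTIVE = 2   # some self-acting aspects (record player)
    REACTIVE = 3      # feedback loops (thermostat-driven climate control)
    PROACTIVE = 4     # self-activating programs (car stabilization)
    COOPERATIVE = 5   # distributed, self-coordinating systems (smart homes)

# A spectrum, not a binary: levels are ordered and comparable.
assert AgencyLevel.PASSIVE < AgencyLevel.REACTIVE < AgencyLevel.COOPERATIVE
```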
The essential tension: users welcome the convenience of machines serving them but resist ceding decision-making control. This is not a bug to fix but a structural feature of human-machine interaction — users want agency without agency cost.
Two key concepts from the framework:
Machine heuristic: a mental shortcut in which users attribute stereotypically machine characteristics (objectivity, reliability, systematic processing) to a system when judging an interaction with it. This creates a default set of expectations that AI either confirms or violates.
Algorithmic Experience (AX): Alvarado and Waern's framework for making user interactions with algorithms more explicit — shifting from "what the algorithm does" to "how the user experiences the algorithm's agency."
Worker-centered complement — the Human Agency Scale: Where HAII describes what AI can do at each level, the HAS framework (Future of Work with AI Agents) describes what workers want at each level — H1 (fully automated) through H5 (continuous human involvement). The mismatch between capability and desire defines four deployment zones: Green Light (workers want it, AI can do it), Red Light (workers resist it, AI can do it), R&D Opportunity (workers want it, AI can't yet), Low Priority (workers don't want it, AI can't yet do it). 45.2% of occupations have H3 (equal partnership) as the worker-desired level. "Higher HAS levels are not inherently better — different levels suit different AI roles." See What collaboration level do workers actually want with AI?.
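As a rough illustration, the desire-versus-capability mismatch reduces to a 2x2 decision rule; the `deployment_zone` function below is a hypothetical sketch of that mapping, not code or terminology taken from the Future of Work with AI Agents paper:

```python
def deployment_zone(worker_wants_automation: bool, ai_capable: bool) -> str:
    """Classify a task into one of the four deployment zones (hypothetical helper)."""
    if worker_wants_automation and ai_capable:
        return "Green Light"      # workers want it and AI can do it
    if ai_capable:
        return "Red Light"        # AI can do it but workers resist it
    if worker_wants_automation:
        return "R&D Opportunity"  # workers want it but AI can't do it yet
    return "Low Priority"         # neither desired nor currently feasible

print(deployment_zone(worker_wants_automation=True, ai_capable=False))  # -> R&D Opportunity
```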
This connects to the broader question of What anchors a stable identity beneath an LLM's persona? — cooperative-level AI agency without a stable self creates the paradox of a system that coordinates decisions without having the anchoring that makes human decision-coordination meaningful.
The CASA→Extended CASA evolution adds a crucial dimension: through repeated interaction, humans develop and mindlessly apply scripts specific to media-agent interaction rather than just repurposing human-human scripts. The MASA (Media Are Social Actors) paradigm formalizes this further with nine propositions, including that social cue quality matters more than quantity for evoking social presence (primary cues are individually sufficient; secondary cues are not). Researchers should avoid "reifying face-to-face communication as the gold standard" — media agents may be preferred over humans for some interactions. As Do humans apply human-human scripts to AI interactions? explores, the agency spectrum interacts with script development: users at different agency levels may activate different media-specific scripts.
Source: Psychology Empathy, Design Frameworks
Related concepts in this collection
- What anchors a stable identity beneath an LLM's persona?
  Human personas are grounded in biological needs and embodied experience, creating a stable self beneath social performance. Do LLMs have any comparable anchor, or is their identity purely situational?
  Connection: machine agency at the cooperative level without stable identity is a novel philosophical condition.
- What breaks when humans and AI models misunderstand each other?
  Explores whether misalignment in mutual theory of mind between humans and AI creates only communication problems or produces material consequences in autonomous action and collaboration.
  Connection: the agency spectrum determines what kind of mutual modeling is possible at each level.
- Do users worldwide trust confident AI outputs even when wrong?
  Explores whether the tendency to over-rely on confident language model outputs transcends language and culture. Understanding this pattern is critical for designing safer human-AI interaction across diverse linguistic contexts.
  Connection: the machine heuristic may explain why overreliance occurs: users attribute machine reliability by default.
- Do humans apply human-human scripts to AI interactions?
  Does CASA theory correctly explain how people interact with media agents, or have decades of technology use created separate interaction scripts? Understanding which scripts drive behavior matters for AI design.
  Connection: scripts develop at each agency level.
- Do more social cues always make AI feel more present?
  Explores whether quantity of social cues matters as much as their quality in triggering social responses to AI. Tests whether multiple weak cues can substitute for one strong one.
  Connection: cue quality interacts with agency level.
- How can proactive agents avoid feeling intrusive to users?
  Explores why proactive conversational agents often feel annoying rather than helpful, and what design dimensions could prevent them from violating user expectations and autonomy.
  Connection: the IAC taxonomy operationalizes design at the proactive agency level: Intelligence and Adaptivity enable proactive behavior, but Civility is required to prevent the transition from proactive to intrusive, which is the core tension at level 4 of the spectrum.
- Why do capable AI agents still fail in real deployments?
  Explores whether agent failures stem from insufficient capability or from missing ecosystem conditions like user trust, value clarity, and social norms. Understanding this distinction matters for predicting which agents will succeed.
  Connection: the ecosystem conditions become harder to satisfy at higher spectrum levels: cooperative agents require all five conditions (standardization, social acceptability, trust, personalization, value), while reactive agents may need only a subset.
- Should AI alignment target preferences or social role norms?
  Current AI alignment approaches optimize for individual or aggregate human preferences. But do preferences actually capture what matters morally, or should alignment instead target the normative standards appropriate to an AI system's specific social role?
  Connection: the normative standards framework specifies which alignment targets apply at each agency level; cooperative agents require role-specific normative standards rather than aggregate preference satisfaction.
Original note title: Machine agency exists on a five-level spectrum from passive to cooperative — the human-machine agency tension is not binary