Psychology and Social Cognition

Does machine agency exist on a spectrum rather than as a binary?

Rather than viewing AI as either autonomous or controlled, does machine agency actually operate across five distinct levels from passive to cooperative? Understanding this spectrum matters because it shapes how users calibrate trust and control expectations.

Note · 2026-02-22 · sourced from Psychology Empathy
What kind of thing is an LLM really? How should researchers navigate LLM reasoning research?

The HAII framework (Human-AI Interaction) positions AI within a spectrum of machine agency rather than a binary human-or-machine categorization:

  1. Passive — completely driven from outside (hammer)
  2. Semi-active — some self-acting aspects (record player)
  3. Reactive — feedback loops (thermostat-driven climate control)
  4. Proactive — self-activating programs (car stabilization)
  5. Cooperative — distributed, self-coordinating systems (smart homes)

The essential tension: users welcome the convenience of machines serving them but resist ceding decision-making control. This is not a bug to fix but a structural feature of human-machine interaction — users want agency without agency cost.

Two key concepts from the framework:

Machine heuristic: a mental shortcut where users attribute machine characteristics (objectivity, reliability, systematic processing) when making judgments about an interaction. This creates a default set of expectations that AI either confirms or violates.

Algorithmic Experience (AX): Alvarado and Waern's framework for making user interactions with algorithms more explicit — shifting from "what the algorithm does" to "how the user experiences the algorithm's agency."

Worker-centered complement — the Human Agency Scale: Where HAII describes what AI can do at each level, the HAS framework (Future of Work with AI Agents) describes what workers want at each level — H1 (fully automated) through H5 (continuous human involvement). The mismatch between capability and desire defines four deployment zones: Green Light (workers want it, AI can do it), Red Light (workers resist it, AI can do it), R&D Opportunity (workers want it, AI can't yet), and Low Priority (workers resist it, AI can't yet). 45.2% of occupations have H3 (equal partnership) as the worker-desired level. "Higher HAS levels are not inherently better — different levels suit different AI roles." See What collaboration level do workers actually want with AI?.
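The capability/desire mismatch above is essentially a 2×2 decision rule. A minimal sketch (the boolean inputs are a simplification of the framework's graded desire and capability ratings; the function and names are illustrative, not from the source):

```python
from enum import IntEnum


class HAS(IntEnum):
    """Human Agency Scale levels, H1 (full automation) to H5."""
    H1 = 1  # fully automated, no human involvement desired
    H2 = 2
    H3 = 3  # equal human-AI partnership (the most common worker-desired level)
    H4 = 4
    H5 = 5  # continuous human involvement required


def deployment_zone(workers_want: bool, ai_capable: bool) -> str:
    """Map worker desire x AI capability to the four deployment zones."""
    if workers_want and ai_capable:
        return "Green Light"
    if not workers_want and ai_capable:
        return "Red Light"
    if workers_want and not ai_capable:
        return "R&D Opportunity"
    return "Low Priority"
```

For example, a task workers want automated but AI cannot yet perform falls in the R&D Opportunity zone: `deployment_zone(True, False)` returns `"R&D Opportunity"`.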

This connects to the broader question of What anchors a stable identity beneath an LLM's persona? — cooperative-level AI agency without a stable self creates the paradox of a system that coordinates decisions without having the anchoring that makes human decision-coordination meaningful.

The CASA→Extended CASA evolution adds a crucial dimension: through repeated interaction, humans develop and mindlessly apply scripts specific to media-agent interaction rather than just repurposing human-human scripts. The MASA (Media Are Social Actors) paradigm formalizes this further with nine propositions, including that social cue quality matters more than quantity for evoking social presence (primary cues are individually sufficient; secondary cues are not). Researchers should avoid "reifying face-to-face communication as the gold standard" — media agents may be preferred over humans for some interactions. As Do humans apply human-human scripts to AI interactions? explores, the agency spectrum interacts with script development: users at different agency levels may activate different media-specific scripts.


Source: Psychology Empathy, Design Frameworks
