Rise of Machine Agency: A Framework for Studying the Psychology of Human–AI Interaction (HAII)
Communication scholars began studying our interactions with the technologies themselves. Several studies documented our tendency to treat computers as if they are autonomous social actors (Reeves & Nass, 1996), to feel transported into artificially created mediated spaces (Lombard & Ditton, 1997) and to even become one with the interface of the technology as in the case of a cyborg (Biocca, 1997), among numerous other effects of interacting directly with computer-based media. This area of research tends to be categorized as human–computer interaction (HCI), and is sometimes contrasted with CMC in terms of the locus of users’ source orientation—while we orient to other human sources in CMC, we orient to the computer as the source in HCI (Sundar & Nass, 2000).
Such distinctions have blurred somewhat in the age of mobile and social media, as users seamlessly interact with both interfaces and other humans, often leveraging interface features to augment their direct individual interactions with media as well as their interpersonal, group, and mass communications.
Scholars have shifted their focus from the locus of communication to the “affordances”1 of mediation technologies, asking questions like: What can technology afford? How can technology enable, help, and enhance human action, and how can we use technology for human needs and ends?
The technology that enables customization, and thereby provides proxy agency to users, is becoming increasingly capable of exerting its own agency, thanks to advances in machine learning and artificial intelligence (AI).
AI is the autonomous application of these rules by a system for adaptively achieving specific goals, such as making a decision or offering a recommendation—e.g., a plug-in that proactively flags an incoming story in users’ social-media news feeds as fake.
Findings like this signal the essential tension between machine agency and human agency. While users appreciate—indeed welcome—the convenience of machines serving them, they are hesitant to cede decision-making control to them. As Rammert (2008) notes, machines differ in the degree to which they usurp agency, ranging from passive machines (which are completely driven from outside, e.g., a hammer) and semi-active machines (which have some self-acting aspects, e.g., a record player) to re-active machines (systems with feedback loops, such as thermostat-driven climate control), pro-active machines (self-activating programs, e.g., car stabilization systems), and co-operative ones (distributed, self-coordinating systems, such as smart homes).
HCI and CMC theories focusing on specific variables, ranging from anonymity and customization to depersonalization and interactivity, could be employed to study emergent media, but with a focus on the specific affordances of AI and concerns surrounding the rise of machine agency. The Theory of Interactive Media Effects (TIME; Sundar, Jia, Waddell, & Huang, 2015) is ideally suited for this purpose because it focuses on the effects of technological affordances in digital media, which are the primary independent variables of interest.
These stereotypes form the basis of the “machine heuristic,” a mental shortcut whereby we attribute machine characteristics when making judgments about an interaction.
Alvarado and Waern (2018) propose the concept of Algorithmic Experience (AX) as an analytic framework for making user interactions with algorithms more explicit.