Trust in Human-AI Interaction: Scoping Out Models, Measures, and Methods
Trust has emerged as a key factor in people’s interactions with AI-infused systems. Yet, little is known about which models of trust have been used and for which systems: robots, virtual characters, smart vehicles, decision aids, or others. Moreover, there is as yet no standard approach to measuring trust in AI. This scoping review maps out the state of affairs on trust in human-AI interaction (HAII) from the perspectives of models, measures, and methods. Findings suggest that trust is an important and multi-faceted topic of study within HAII contexts. However, most work is under-theorized and under-reported, generally not using established trust models and missing details about methods.
AI-infused systems. These are “systems that have features harnessing AI capabilities that are directly exposed to the end user” [1:1]. These AI capabilities make predictions, recommendations, or decisions influencing real or virtual environments, through learning, reasoning, and self-correcting [15,57]. AI-infused systems form a broad category that spans a range of technologies, including robots of all kinds, virtual agents, voice-based or embodied agents, algorithms that provide even a small amount of interaction with people through an interface, and so on. We recognize that the degree to which people are aware that they are interacting with AI may vary. So, when we think of trust in HAII, we may be considering not only trust in the AI itself, but also trust in the entire AI-infused system. Factors shaping such trust include representation, sociability, reputation, and so on [49].
"the willingness of a party to be vulnerable to the actions of another party based on the expectation that the other will perform a particular action important to the trustor, irrespective of the ability to monitor or control that other party."
We do not yet know of any standard approach to measuring trust when people interact with these AI-infused systems.