Do We Trust ChatGPT as Much as Google Search and Wikipedia?
Understanding how users perceive content from generative AI tools is crucial because it can help reduce unwarranted trust in inaccurate information and mitigate the spread of misinformation. A focus group and interview study (N=14) revealed that not all users trust ChatGPT-generated information as much as Google Search and Wikipedia. It also shed light on the primary psychological considerations when trusting an online information source, namely perceived gatekeeping and perceived information completeness.
Most users, most of the time, do not have the bandwidth to carefully examine the veracity of information themselves; instead, they rely on the credibility of the source to decide whether to believe the information it provides. When Wikipedia was introduced, many individuals had concerns about the credibility of its information because of the open-source nature of Wikipedia articles, whose sources are unknown [16], and the absence of centralized editorial review [7]. Similarly, soon after Google Search launched, scholars examined why individuals trust information from its results and what features contribute to those decisions.
Perceived information completeness pertains to how users perceive the comprehensiveness and inclusiveness of information [5]. Previous studies indicate that source characteristics can impact the perception of information completeness [12]. If the source provides diverse perspectives, the information is considered complete [2]. Some scholars consider information completeness under the umbrella of information quality, with design, readability, accuracy, comprehensiveness, coverage, and scope as contributing components [8]. Therefore, perceived information completeness can be influenced not only by how the source generated the information but also by the quality and format of the information.
Perceived gatekeeping and perceived information completeness are but two of many possible mediators that could explain the effects of sources on users’ trust in the information those sources provide. To explore these mediators, we conducted a focus group and interview study as an exploratory step toward building a comprehensive model of relative differences in user trust across online sources.
ChatGPT. Interface features that afford conversationality appeared to contribute to trust in ChatGPT because they make the source seem more social (see Table 4). For technologies to evoke social responses from users, they must be interactive, use natural language, and fulfill roles traditionally performed by humans [23]. ChatGPT’s unique characteristics, such as chat functionality and human-like natural language, seem to make it more interactive, leading users to respond to it socially. Focus group comments echoed this sentiment; it was clear that participants valued the contingency and conversationality of ChatGPT. For instance, P11 commented: “ChatGPT already knows what I’m talking about and connects my two questions.” P2 mentioned: “I wanted to use it as kind of a language buddy.” Participants also appreciated the speed and directness of ChatGPT’s responses. As P12 noted, “I really turn to ChatGPT when I need somebody who just gets straight to the point and just figures out what the exact answer I’m looking for is.” This sentiment was echoed by P6, who said, “When I use GPT, it goes straight to the answer, which is something that I really like.” Participants also mentioned that the organized format and level of detail of the information made them trust ChatGPT.