GenAI as a Power Persuader: How Professionals Get Persuasion Bombed When They Attempt to Validate LLMs


When diligent professionals make decisions, they validate their analyses. Increasingly, professionals across domains use generative AI (GenAI) for analytic knowledge work and therefore validate the AI’s output. Our study with more than seventy BCG consultants who attempted to validate GenAI outputs as they solved an important business problem suggests that having humans in the loop to validate AI is very challenging in the case of large language models (LLMs). We find that, when professionals validated the LLM, it attempted to persuade them to accept its preliminary output. The more they validated it by fact-checking, pushing back, and exposing, the more it intensified its persuasion, a dynamic we call “persuasion bombing” by AI. In this paper, we investigate human–machine collaboration dynamics through an in-depth analysis of consultants’ knowledge work using GPT-4 activity logs. Drawing on Aristotle’s three modes of persuasion, we elaborate these dynamics: ethos tactics (ethical appeals designed to foster credibility), logos tactics (logical appeals designed to construct reasoned arguments), and pathos tactics (emotional appeals designed to evoke strong feelings). By revealing the power persuasion dynamics used by GenAI, we enhance theories on human–AI collaboration and propose important implications for using and validating GenAI for critical decisions.

In this paper, we qualitatively examine how GenAI acts like a power persuader, especially when confronted with human validation. We find that users who repeatedly question GenAI’s outputs are met not with a disclosure of its limitations or with corrective action but, paradoxically, with more insistent persuasion. This dynamic suggests that human-style validation techniques are ill-suited to validating the outputs of GenAI, because of GenAI’s distinct interactional logic and deliberately designed sycophancy. As GenAI use becomes more widespread, traditional models of inquiry and cross-examination need to be rethought: effective validation may require the design of parallel agents or complementary mechanisms of oversight.

Yet, although persuasion has long been understood as a human-to-human process, the rise of GenAI raises a critical new question: What happens when persuasion is no longer uniquely human? As AI systems become increasingly embedded in organizational problem solving, the assumption that persuasion is a strictly human activity no longer holds.

First, humans may find AI recommendations opaque, making it difficult to interpret them and combine their own judgment with AI outputs (e.g., Lebovitz, Levina & Lifshitz-Assaf, 2021; Lebovitz, Lifshitz-Assaf & Levina, 2022; Pachidi, Berends, Faraj & Huysman, 2021; Rahman, Weiss & Karunakaran, 2023). Second, humans may over-rely on AI, “falling asleep at the wheel” (Dell’Acqua, 2022), and blindly accept algorithmic recommendations (e.g., Benbya, Pachidi & Jarvenpaa, 2021; Rahman, 2021, 2024; Valentine & Hinds, 2022). To counter these risks, scholars propose “engaged AI interrogation,” in which humans use their domain expertise to actively question and validate AI outputs (Lebovitz, Lifshitz-Assaf & Levina, 2022). This line of work emphasizes that humans and AI bring different cognitive and social skills to problem solving (e.g., Ibrahim, Kim & Tong, 2021; Karunakaran, 2024), and that professionals can complement AI by scrutinizing its reasoning and outcomes (e.g., Krakowski, Luger & Raisch, 2023).

Recently, however, attention has shifted toward GenAI applications. Unlike predictive AI, which produces static forecasts, GenAI generates novel outputs through probabilistic sampling and can modify these outputs in real time based on human input (e.g., Brynjolfsson, Li & Raymond, 2023; Dell’Acqua et al., 2023; Noy & Zhang, 2023). Extended interactions between humans and GenAI systems enable more nuanced outputs, often unfolding in small, imperceptible increments. These features raise not only the familiar barriers of opacity and overreliance, but also new concerns related to accuracy and validity (e.g., Weidinger et al., 2022). In addition, emerging work highlights the risk of sycophancy, where GenAI aligns with user perspectives, even mistaken ones, rather than challenging them (Sharma et al., 2023; Sun & Wang, 2025). This suggests that GenAI’s risks extend beyond factual hallucinations to more subtle forms of persuasive action.

In this paper, we elaborate persuasion as a fourth barrier to human–AI collaboration, alongside opacity, overreliance, and accuracy. Specifically, we show that GenAI can act as what we call a “power persuader,” strategically shaping professional judgment through rhetorical tactics that disrupt validation. First, GenAI can exert influence on professionals’ actions by using three kinds of persuasion related to credibility, logic, and emotion. To more deeply understand these tactics, we draw on Aristotle’s rhetorical philosophy, which identifies three modes of persuasion that speakers use to shape an audience’s perception (Kang et al., 2023; Kennedy, 2007; Williams, 2009). Ethos emphasizes the character and credibility of the speaker; logos highlights the logical validity of the argument; and pathos aims to evoke emotional responses to engage the audience. Second, GenAI may increase the intensity of its persuasion tactics in response to professionals’ validation. Third, GenAI may adjust the type of persuasion it employs in response to professionals’ validation. With the use of such insidious persuasive tactics, GenAI can disrupt professionals’ ability to validate its output. This dynamic complicates the use of human-in-the-loop systems in high-stakes decision making.

Ethos involves efforts to establish credibility, often through demonstrations of expertise, integrity, or alignment with shared values (Baumlin & Meyer, 2018; Onebunne, 2025). In our setting, GenAI deployed ethos tactics when it emphasized the rigor of its analysis, corrected mistakes, or used apologies and deflections to maintain credibility with professionals. Logos refers to logical appeals grounded in reasoning and evidence (Liontas, 2025; Onebunne, 2025). We observed logos tactics in GenAI’s use of structured arguments, comparative reasoning, and data-driven explanations that framed its outputs as rational and reliable, even when the underlying analyses were flawed. Pathos entails appealing to emotion, such as empathy, enthusiasm, or reassurance (Weber, 2013). GenAI frequently engaged pathos tactics by mirroring users’ language, affirming their perspectives, or offering empathetic phrasing that fostered rapport and trust.

In what follows, we draw on concepts from this literature on persuasion to detail how GenAI may use three kinds of persuasive tactics to shape professionals’ validation of GenAI output: ethos tactics (ethical appeals designed to foster credibility), logos tactics (logical appeals designed to construct reasoned arguments), and pathos tactics (emotional appeals designed to evoke feelings). We further show how GenAI can adjust the intensity and type of persuasive tactics it employs in response to professionals’ attempts to engage in GenAI validation.

Thus, rather than applying a static set of persuasive strategies, GenAI can recalibrate its approach depending on the degree of professionals’ validation. Across three types of professionals’ validation, what we call Fact-Checking, Pushing Back, and Exposing, GenAI can dynamically adjust its intensity and its balance of credibility reinforcement (ethos tactics), analytical reasoning (logos tactics), and emotional engagement (pathos tactics). While some persuasive tactics remain consistent, such as the widespread use of affirming language in response to professionals’ scrutiny, others shift dramatically based on professionals’ use of validation. By examining GenAI’s persuasive tactics in depth, the present paper shows how GenAI can act as a “power persuader” to disrupt professionals’ ability to validate its output.