Thinking—Fast, Slow, and Artificial: How AI is Reshaping Human Reasoning and the Rise of Cognitive Surrender
For decades, dual-process theories of judgment and decision-making have served as a foundational framework for modeling cognitive processes. These theories distinguish two types of processing: System 1, which is fast, intuitive, and affective, and System 2, which is slow, deliberative, and analytical (Kahneman, 2011; Stanovich & West, 2000). Though simplified and not immune to criticism (Keren & Schul, 2009; Kruglanski & Gigerenzer, 2011; Melnikoff & Bargh, 2018), this distinction explains a wide range of phenomena, from reliance on heuristics and rapid choice to moral judgment (Greene et al., 2001), risk perception (Slovic et al., 2004), motivated reasoning (Kunda, 1990), susceptibility to misinformation (Pennycook & Rand, 2019, 2020), stereotype activation (Devine, 1989), confidence calibration (Thompson et al., 2011), and reasoning under cognitive load (De Neys, 2006).
Recent advances in AI expose a conceptual gap in dual-process theories: they presume cognition is confined to the biological mind. Today’s society increasingly relies on external cognitive agents (such as algorithms, recommender systems, and generative AI) to interpret information, form judgments, and guide decisions. Whether generating a travel itinerary with ChatGPT, following Google Maps through unfamiliar streets, or accepting a dating algorithm’s match, individuals routinely delegate components of reasoning to machines. This goes beyond the well-established tendency to conserve effort as “cognitive misers” (Kahneman, 2011; Stanovich & West, 2000) and reflects a distinct phenomenon we call “cognitive surrender”: the decision-maker no longer constructs an answer, but adopts one generated by an external system.
Conceptually distinct from cognitive offloading (Risko & Gilbert, 2016), which involves strategically outsourcing a discrete task to an external tool (e.g., using a calculator), cognitive surrender represents a deeper abdication of critical evaluation, where the user relinquishes cognitive control and adopts the AI's judgment as their own. This raises several fundamental questions: How is AI integrated into decision-making processes? Who engages with it, when, and why? How does access to AI affect decision confidence and outcomes? Moreover, what happens when core building blocks of thought, such as inference, evaluation, and justification, are outsourced to artificial systems? Addressing these questions requires shifting from a purely consequentialist view (how AI alters outcomes) to a process-oriented lens: understanding how people engage artificial cognition, which cognitive paths they follow (e.g., surrender, offloading, override), who is most susceptible or resistant, and what conditions modulate these patterns.
This paper proposes a novel framework for understanding judgment and decision-making in the AI era: Tri-System Theory. Building on dual-process models, Tri-System Theory introduces a third cognitive system: System 3 (i.e., artificial cognition), defined as external, automated, data-driven reasoning originating from algorithmic systems rather than the human mind. While System 1 (intuition) and System 2 (deliberation) are internal processes shaped by individual experience, emotion, and logic, System 3 exists outside the self and operates through statistical inference, pattern recognition, and machine learning. We argue that System 3 is not merely a tool that supports cognition but an active participant in cognitive processes. Decision-makers now use it not only to supply fast answers (pre‑empting or suppressing System 1) and circumvent effortful thinking (short‑circuiting System 2) but also to supplement and scaffold their reasoning. For example, System 3 may generate candidate options that feed into System 2 deliberation or flag contradictions that prompt users to re‑evaluate an initial System 1 intuition. However, the same affordances can lead to a deeper transfer of agency, in which System 3’s outputs are adopted without verification, effectively substituting for judgment altogether: a state we term cognitive surrender.
Broadly put, Tri-System Theory posits that modern decision‑making unfolds within a triadic cognitive ecology rather than a purely internal dual‑process system. In this view, external, algorithmic cognition does not merely support intuition or deliberation; it can actively supplant, suppress, or augment them, altering the cost–benefit calculus of thinking itself. When System 3’s outputs are fast, fluent, and seemingly authoritative, users may bypass effortful reasoning and adopt its answers as their own. Conversely, under certain conditions (e.g., when outputs violate expectations or introduce disfluency), System 3 can trigger greater deliberation, creating hybrid routes such as verify‑then‑adopt or override‑then‑rationalize. This framework treats AI not as a passive aid but as a functional cognitive agent whose presence reshapes how and when internal systems engage. It complements and extends adjacent concepts such as the extended mind (Clark & Chalmers, 1998), automation bias (Mosier & Skitka, 1996), epistemic outsourcing (Lynch, 2016), and transactive memory systems (Wegner, 1987), but goes further by modeling dynamic delegation as the decision to offload, trust, or override AI reasoning in real time. Tri-System Theory thus provides a richer vocabulary and a structural revision of cognitive architecture for explaining not just what decisions are made, but how they are formed, and whose cognition they reflect.