What the F*ck Is Artificial General Intelligence?

Paper · arXiv 2503.23923 · Published March 31, 2025
Philosophy · Subjectivity · Social Theory · Society

I’ll begin by defining intelligence and AGI. There are a number of positions [6, 2, 7–12]. Some peg AGI to human-level performance across a broad range of tasks [13, 1]. This is intuitive, but anthropocentric and hard to quantify. Chollet argues that intelligence is a measure of the ability to generalise and acquire new skills, and that an AGI can do this at least as well as a human [11]. He attempts to quantify the ability to acquire new skills, which can encompass the aforementioned anthropocentric definition. His formalism resembles Legg-Hutter intelligence: Legg and Hutter argued intelligence is the ability to satisfy goals in a wide range of environments [10]. Chollet’s definition descends from Legg-Hutter’s. Both are based on Ockham’s Razor, both use Kolmogorov complexity, and both equate simplicity with generality. Both seek to quantify intelligence, and both are highly subjective, because they treat intelligence as a property of software interacting with the world through an interpreter [15–18].
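To make the resemblance concrete, the Legg-Hutter measure scores an agent π across every computable environment μ, weighted by simplicity. The following is a sketch in my notation; see [10] for the formal statement.

```latex
% Legg-Hutter universal intelligence (a sketch; see [10]):
% an agent \pi is scored across the set E of computable environments.
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)} \, V^{\pi}_{\mu}
% K(\mu): Kolmogorov complexity of \mu, the Ockham weight favouring
%         simple environments (it depends on a reference machine);
% V^{\pi}_{\mu}: expected total reward \pi obtains in \mu.
```

The 2^{-K(μ)} weight is where Ockham’s Razor and Kolmogorov complexity enter, and its dependence on a choice of reference machine is where the subjectivity creeps in.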

That is a problem. Why? Because if I develop an AI for some purpose, then I decide whether it has fulfilled that purpose, and I am part of the agent’s environment. The environment is where objective success or failure is decided. Assume C is a space of software programs and Γ is a space of behaviours. Imagine f1 ∈ C is AI software, f2 : C → Γ is the hardware on which it runs, and f3 : Γ → {0, 1} is the environment (including me) where success is decided. Success is a matter of f3(f2(f1)), and the behaviour of f3(f2(f1)) can be changed by changing f2 or f3 [12]. It is therefore pointless to make claims about f3(f2(f1)) based on f1 alone. f1 and f2 are like mind and body. Every choice of embodiment biases the system in some way, and each movement it makes constrains the space of possibilities, much like a constraint expressed in a formal language. Complexity is a property of how a body interprets information [18]. Indeed, the choice of Universal Turing Machine can make any software agent optimal according to Legg-Hutter intelligence [17].
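A toy illustration of the point in Python (a sketch; the program, the two hypothetical interpreters and the success criterion are all mine, not from [12]): the same f1 succeeds under one choice of f2 and fails under another, so nothing about f1 alone settles f3(f2(f1)).

```python
# Toy sketch of f3(f2(f1)): success depends on hardware and environment,
# not on the software alone. All names here are hypothetical.

f1 = "1 + 1"  # the "software": a program drawn from the space C

def f2_hardware_a(program: str) -> int:
    """One embodiment: interpret the program as ordinary arithmetic."""
    return eval(program)  # behaviour in Γ: the integer 2

def f2_hardware_b(program: str) -> int:
    """Another embodiment: interpret '+' as digit concatenation."""
    left, right = program.split("+")
    return int(left.strip() + right.strip())  # behaviour in Γ: 11

def f3_environment(behaviour: int) -> bool:
    """The environment (including me) decides success: I wanted 2."""
    return behaviour == 2

print(f3_environment(f2_hardware_a(f1)))  # True: success
print(f3_environment(f2_hardware_b(f1)))  # False: failure
# f1 never changed; only the interpreter f2 did.
```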

The idea of AI as a software mind is called computational dualism [12]. It is a reference to the work of Descartes, who in 1637 argued the pineal gland mediates between mind and body. AI researchers have exchanged the pineal gland for a Turing machine. So what is the alternative? Wang defines intelligence as adaptation with limited resources [6]. This leaves room for us to avoid dualism, and it implies the ability to satisfy goals in a wide range of environments anyway [12].

An attempt was made to resolve computational dualism and formalise intelligence as objective adaptability by formalising software, hardware and environment together [12]. Intelligence becomes a measure of the ability to complete a wide range of tasks [21]. This dispenses with the separation of goals and intelligence in favour of a whole-of-system model that treats the purpose of a system as what it does. One’s body implies a set of goals and subgoals. Body, environment and goals together form a task, by which I mean a purpose and a means of fulfilling it. If A completes a superset of the tasks that B completes, then A is more adaptable than B. This encompasses both sample and energy efficiency: how fast a system can adapt, and how much energy it needs to do so. This is the definition I will use for this survey. I’ll consider an AGI to be a system that adapts at least as generally as a human scientist [22]. An artificial scientist can prioritise, plan and perform useful experiments. This requires autonomy, agency, motives, an ability to learn cause and effect, and the ability to balance exploring to acquire knowledge with acting to profit from it [23, 9, 8, 24, 25].
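As a minimal sketch of that comparison (representing tasks as opaque labels in Python sets is my assumption for illustration, not the formalism of [12, 21]):

```python
# Minimal sketch: adaptability as task-set inclusion. Task labels are
# hypothetical; the formalism in [12, 21] is far richer.

def more_adaptable(tasks_a: set, tasks_b: set) -> bool:
    """A is more adaptable than B if A completes a proper superset
    of the tasks that B completes."""
    return tasks_a > tasks_b  # strict superset comparison

human_scientist = {"prioritise", "plan", "experiment", "learn causality"}
candidate_agi = human_scientist | {"fold proteins"}
narrow_system = {"fold proteins"}

print(more_adaptable(candidate_agi, human_scientist))  # True
print(more_adaptable(narrow_system, human_scientist))  # False
```

Note the ordering is partial: the narrow system and the human scientist each complete tasks the other does not, so neither is more adaptable than the other.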

Artificial intelligence (AI) and machine learning (ML) are typically divided into buckets like supervised learning, reinforcement learning, regression, classification, planning and so on. These are not useful categories for AGI, because an artificial scientist must be able to do all of these things. Instead, I will take my cue from Sutton’s Bitter Lesson, which acknowledges that generally applicable tools can be used to learn any behaviour if we scale up resources (compute, memory, data, etc.) [26].