Does software intelligence exist independent of hardware and environment?
Most AGI formalisms (Legg-Hutter, Chollet) treat intelligence as a software property measurable in isolation. But can we really evaluate intelligence without considering the physical system and the evaluator making the judgment?
The most influential AGI formalisms treat intelligence as a property of software, evaluated in terms of that software's capacity to generalize and acquire new skills. Legg-Hutter: intelligence is an agent's ability to achieve goals in a wide range of environments. Chollet: intelligence is skill-acquisition efficiency, quantified using Kolmogorov complexity to reward simple, general solutions.
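For reference, Legg and Hutter's measure is usually written as a complexity-weighted sum of rewards; the formulation below is the standard statement, reproduced as background rather than quoted from this note's source:

```latex
% Legg-Hutter universal intelligence of a policy \pi (standard formulation,
% given as background; not quoted from the paper discussed in this note).
\Upsilon(\pi) \;=\; \sum_{\mu \in E} 2^{-K(\mu)}\, V^{\pi}_{\mu}
% E             : the class of computable environments
% K(\mu)        : Kolmogorov complexity of environment \mu, relative to a
%                 chosen reference Universal Turing Machine
% V^{\pi}_{\mu} : expected cumulative reward the agent \pi earns in \mu
```

The 2^{-K(μ)} weighting is where the implementation relativity noted below enters: K is defined only relative to a reference Universal Turing Machine, so changing that choice reshuffles which environments, and hence which agents, count as simple and general.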
The What the F*ck Is AGI paper identifies the common error: these formalisms measure f1 (software) independently of f2 (hardware/embodiment) and f3 (environment, including evaluators). But success is determined by f3(f2(f1)) — the behavior of the whole system. This means:
- The choice of hardware (f2) biases what behaviors are possible and efficient. "Every choice of embodiment biases the system in some way." An LLM on a CPU, on a GPU cluster, in a mobile device, in a robotic body — these are different intelligences, not the same software in different containers.
- The choice of environment (f3) determines what counts as success. Since f3 includes the humans who evaluate whether the AI has "succeeded," evaluation is not external to intelligence — it is constitutive of it.
- The choice of Universal Turing Machine representation can make any software agent optimal according to Legg-Hutter intelligence, showing the measure is relative to implementation choices.
This is computational dualism: the AI equivalent of Cartesian substance dualism, whose mind-body interaction problem Descartes tried to settle with the pineal gland. AI researchers have exchanged the pineal gland for a Turing machine: a magic interface between mind (software) and body (hardware). The exchange doesn't resolve the problem; it relocates it.
Wang's alternative — intelligence as "adaptation with limited resources" — avoids dualism by making intelligence a relational property of the whole system: adaptation requires resources that are constrained by embodiment, and success is determined by the environment. This is formally consistent with the whole-system account f3(f2(f1)).
The implication for AGI claims: a demonstration of intelligence by f1 alone (software on a benchmark) is not a demonstration of AGI. AGI would require generalization across embodied, resource-constrained, environmentally embedded settings. Current evaluations are, at best, measuring a component.
The formal argument elaborated: "Assume C is a space of software programs, and Γ is a space of behaviours. Imagine f1 ∈ C is AI software, f2 : C → Γ is the hardware on which it runs, and f3 : Γ → {0, 1} is the environment (including me) where success is decided. Success is a matter of f3(f2(f1)). The behaviour of f3(f2(f1)) can be changed by changing f2 or f3. It is pointless to make claims about f3(f2(f1)) based on f1 alone. f1 and f2 are like mind and body." Both Legg-Hutter and Chollet definitions share the same vulnerability: they use Kolmogorov complexity, equating simplicity with generality, and they are "highly subjective because they treat intelligence as a property of software interacting with the world through an interpreter." The proposed alternative treats an AGI as "a system that adapts at least as generally as a human scientist" — requiring autonomy, agency, motives, causal learning, and exploration-exploitation balance, all of which are inherently whole-system properties.
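A minimal executable sketch of the argument's structure, with toy stand-ins (the specific hardware and environment functions below are invented for illustration and are not from the paper): holding the software f1 fixed while swapping f2 or f3 flips the verdict f3(f2(f1)).

```python
# Toy illustration of f3(f2(f1)): success is a property of the composed
# system, not of the software f1 alone. All names here are illustrative.

Software = str          # C: "programs", here just strings of instructions
Behaviour = list[str]   # Γ: behaviours, here lists of actions actually emitted

# Two different "hardware" mappings f2 : C -> Γ.
def full_hardware(f1: Software) -> Behaviour:
    """Embodiment that executes every instruction."""
    return f1.split()

def constrained_hardware(f1: Software) -> Behaviour:
    """Resource-limited embodiment: only the first two instructions ever run."""
    return f1.split()[:2]

# Two different "environments" f3 : Γ -> {0, 1}, each encoding an evaluator's
# criterion for success.
def environment_a(behaviour: Behaviour) -> int:
    return int("grasp" in behaviour)   # success = the object was grasped

def environment_b(behaviour: Behaviour) -> int:
    return int(len(behaviour) <= 2)    # success = acting within a tight budget

f1 = "locate approach grasp lift"      # the same software throughout

for f2 in (full_hardware, constrained_hardware):
    for f3 in (environment_a, environment_b):
        print(f"{f2.__name__:22s} + {f3.__name__} -> {f3(f2(f1))}")
# f3(f2(f1)) flips between 0 and 1 while f1 is held fixed, which is the paper's
# point: claims about the whole system cannot rest on f1 alone.
```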
Source: Philosophy Subjectivity
Related concepts in this collection
- What makes linguistic agency impossible for language models? From an enactive perspective, does linguistic agency require embodied participation and real stakes that LLMs fundamentally lack? This matters because it challenges whether LLMs can truly engage in language or only generate text. Connection: the enactive view makes the same point for linguistic agency; the body and environment are not incidental, they constitute the possibility space.
- Can disembodied language models ever qualify as conscious? Explores whether current LLMs lack the conditions needed for consciousness discourse to even apply, not because they're definitely not conscious but because they lack the shared embodied world that grounds consciousness language. Connection: consciousness and AGI both require embodied whole-system accounts, not software-only evaluations.
- What capabilities do AI systems need for autonomous science? Explores whether current AI benchmarks actually measure what's required for independent scientific research (hypothesis generation, experimental design, data analysis, and self-correction) or if they test only adjacent skills. Connection: the four-capability checklist is also a whole-system demand; benchmarks measure components.
Original note title: agi definitions that treat intelligence as a software property commit computational dualism — intelligence requires a whole-system embodied account