
Does software intelligence exist independent of hardware and environment?

Most AGI formalisms (Legg-Hutter, Chollet) treat intelligence as a software property measurable in isolation. But can we really evaluate intelligence without considering the physical system and the evaluator making the judgment?

Note · 2026-02-21 · sourced from Philosophy Subjectivity
What kind of thing is an LLM really? How should researchers navigate LLM reasoning research?

The most influential AGI formalisms treat intelligence as a property of software, evaluated in terms of that software's capacity to generalize and acquire new skills. Legg-Hutter: intelligence is the ability to satisfy goals in a wide range of environments. Chollet: intelligence measures skill-acquisition efficiency, quantified using Kolmogorov complexity to reward simple, general solutions.

The What the F*ck Is AGI paper identifies the common error: these formalisms measure f1 (software) independently of f2 (hardware/embodiment) and f3 (environment, including evaluators). But success is determined by f3(f2(f1)), the behavior of the whole system. This means that claims about the whole system cannot be grounded in measurements of f1 alone: changing f2 or f3 changes the outcome even when f1 is held fixed.

This is computational dualism, the AI equivalent of Cartesian substance dualism, which Descartes attempted to resolve via the pineal gland. AI researchers have exchanged the pineal gland for a Turing machine: a magic interface between mind (software) and body (hardware). The exchange doesn't resolve the problem; it relocates it.

Wang's alternative — intelligence as "adaptation with limited resources" — avoids dualism by making intelligence a relational property of the whole system: adaptation requires resources that are constrained by embodiment, and success is determined by the environment. This is formally consistent with the whole-system account f3(f2(f1)).

The implication for AGI claims: a demonstration of intelligence by f1 alone (software on a benchmark) is not a demonstration of AGI. AGI would require generalization across embodied, resource-constrained, environmentally embedded settings. Current evaluations are, at best, measuring a component.

The formal argument elaborated: "Assume C is a space of software programs, and Γ is a space of behaviours. Imagine f1 ∈ C is AI software, f2 : C → Γ is the hardware on which it runs, and f3 : Γ → {0, 1} is the environment (including me) where success is decided. Success is a matter of f3(f2(f1)). The behaviour of f3(f2(f1)) can be changed by changing f2 or f3. It is pointless to make claims about f3(f2(f1)) based on f1 alone. f1 and f2 are like mind and body." Both Legg-Hutter and Chollet definitions share the same vulnerability: they use Kolmogorov complexity, equating simplicity with generality, and they are "highly subjective because they treat intelligence as a property of software interacting with the world through an interpreter." The proposed alternative treats an AGI as "a system that adapts at least as generally as a human scientist" — requiring autonomy, agency, motives, causal learning, and exploration-exploitation balance, all of which are inherently whole-system properties.
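The composition f3(f2(f1)) can be made concrete with a minimal sketch. All function bodies below are illustrative assumptions, not from the paper; the point is only that the same f1, run under a different f2 or judged by a different f3, yields a different success verdict:

```python
# Sketch of the whole-system composition f3(f2(f1)).
# f1: software proposing a plan; f2: hardware executing it under resource
# constraints; f3: environment (including the evaluator) deciding success.
# All specifics here are hypothetical, for illustration only.

def f1(task):
    """AI software: proposes a plan with one step per task element."""
    return {"steps": len(task)}

def f2_ample(program, task):
    """Hardware with ample resources: executes every planned step."""
    plan = program(task)
    return {"completed": plan["steps"]}

def f2_limited(program, task):
    """Hardware with a hard resource cap: executes at most 3 steps."""
    plan = program(task)
    return {"completed": min(plan["steps"], 3)}

def f3(behaviour, task):
    """Environment/evaluator: success (1) iff every step was completed."""
    return int(behaviour["completed"] == len(task))

task = "abcde"  # a 5-step task
assert f3(f2_ample(f1, task), task) == 1    # same f1: success
assert f3(f2_limited(f1, task), task) == 0  # same f1: failure
```

The two assertions hold simultaneously for the identical f1, which is the paper's point: no measurement of f1 in isolation can license a claim about f3(f2(f1)).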



Original note title

agi definitions that treat intelligence as a software property commit computational dualism — intelligence requires a whole-system embodied account