General Intelligence Is Not an Illusion
It Only Disappears Under a Broken Reference Class
1. The Provocation
Yann LeCun has argued that general intelligence is an illusion: humans appear general only because we cannot imagine the problems to which we are blind. Human cognition, on this view, is merely a bundle of highly specialized competencies optimized for a narrow ecological niche. The implication is that “general intelligence” names no real property, only anthropomorphic overreach.
Taken literally, this position appears to dissolve not only AGI ambitions but the very coherence of intelligence as a graded property. The question is whether this dissolution reflects a deep insight or a category mistake.
2. The Hidden Quantifier
LeCun’s conclusion goes through only if “general intelligence” is implicitly quantified over:
the set of all possible problems across all possible physical universes
Relative to that reference class, no finite agent can be general. Every learner is parochial; every representation is contingent; every inductive bias fails somewhere. But this result is trivial. It follows directly from the overbreadth of the quantifier, not from any empirical fact about minds.
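The triviality can be made concrete. The following is an illustrative sketch, not part of LeCun’s argument, in the spirit of the No Free Lunch theorems: enumerate every possible Boolean target function, and any learner’s average accuracy on unseen inputs collapses to chance, whatever its inductive bias.

```python
# Sketch: averaged over ALL possible target functions, every learner's
# off-training-set accuracy is exactly chance. Standard library only.
from itertools import product

INPUTS = list(product([0, 1], repeat=3))  # the 8 possible 3-bit inputs
TRAIN, TEST = INPUTS[:4], INPUTS[4:]      # disjoint train/test split

def majority_learner(train_pairs):
    """One inductive bias: predict the majority training label everywhere."""
    ones = sum(label for _, label in train_pairs)
    guess = 1 if 2 * ones >= len(train_pairs) else 0
    return lambda x: guess

def constant_zero_learner(train_pairs):
    """A different inductive bias: always predict 0."""
    return lambda x: 0

def mean_test_accuracy(learner):
    """Average off-training-set accuracy over all 256 target functions."""
    accuracies = []
    for labels in product([0, 1], repeat=8):  # every f: {0,1}^3 -> {0,1}
        f = dict(zip(INPUTS, labels))
        predict = learner([(x, f[x]) for x in TRAIN])
        accuracies.append(sum(predict(x) == f[x] for x in TEST) / len(TEST))
    return sum(accuracies) / len(accuracies)

print(mean_test_accuracy(majority_learner))       # 0.5
print(mean_test_accuracy(constant_zero_learner))  # 0.5
```

Any learner substituted here yields the same 0.5: once the quantifier ranges over all tasks, “better” and “worse” learners cease to be distinguishable. That is the entire content of the maximally quantified claim.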
This move mirrors familiar pseudo‑refutations:
“There is no general‑purpose computer, because it would not compute functions defined in non‑computable physics.”
“There is no general language, because some concepts are inexpressible under alien semantics.”
All are true under maximal quantification. All are conceptually sterile.
A narrower objection deserves separate treatment. A charitable skeptic need not invoke all possible universes. One can instead argue that even within our physics, the space of possible tasks is so vast that humans occupy only a tiny manifold within it. On this view, apparent generality reflects overlapping biological priors—spatial reasoning, social cognition, tool use—that happen to span many human‑relevant problems, while remaining fundamentally parochial. The claim is not that general intelligence is logically impossible, but that human intelligence is merely broadly specialized rather than genuinely general.
This objection shifts the debate from impossible universality to empirical scope. It therefore cannot be dismissed by quantifier analysis alone. However, it fails for a different reason: it treats the width of a task manifold as decisive, while ignoring the mechanism by which that manifold is extended.
3. Why the Reference Class Matters
Functional concepts retain meaning only when scoped to constraints:
A fixed physics
A computability regime
An interaction channel
A resource bound
Once those are stripped away, every notion of capability collapses. Intelligence is not unique in this respect. The correct question is never whether an agent is general simpliciter, but whether it is general relative to:
open‑ended, previously unencountered problem distributions within a given universe
That is the reference class under which intelligence evolved, is exercised, and is evaluated.
4. Universality vs. Generality
The core conceptual error is a conflation of two distinct ideas:
Universality: competence across all possible tasks or domains
Generality: the capacity to acquire competence in new domains through learning, abstraction, and transfer
Universality is incoherent for bounded agents. Generality is not. Humans lack the former and demonstrably possess the latter.
The evidence is not subtle:
Mathematics formalizes structures evolution never selected for
Science constructs models far outside perceptual scales
Technology externalizes and amplifies cognition
Philosophy revises its own conceptual foundations
These are not pre‑wired specializations. They are manifestations of interpretive flexibility.
The relevant distinction is not between wide and narrow manifolds, but between static manifolds and self‑extending ones. A system confined to a fixed representational basis may interpolate impressively within its manifold while remaining specialized in the only sense that matters. By contrast, a system capable of revising its own representational basis is not confined to a manifold in the same way. Its apparent “niche” is not a region of task space but a method for constructing new regions.
Under this distinction, the claim that humans occupy a narrow manifold becomes largely irrelevant. What matters is that humans can reinterpret failures as evidence that the model itself is wrong, triggering ontological revision rather than local adjustment. That capacity breaks the inference from “biological priors” to “mere specialization.”
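The distinction can be made concrete with a deliberately toy sketch (the target function, basis choices, and stopping threshold below are assumptions made for illustration): a learner confined to a fixed representational basis can only tune coefficients within its manifold, while a learner permitted to extend the basis constructs new regions of representable functions whenever its error persists.

```python
# Toy contrast between a static manifold and a self-extending one.
# Illustrative assumptions throughout; requires numpy.
import numpy as np

def fit_error(basis, xs, ys):
    """Least-squares fit over the given feature basis; return RMS residual."""
    X = np.column_stack([f(xs) for f in basis])
    coef, *_ = np.linalg.lstsq(X, ys, rcond=None)
    return float(np.sqrt(np.mean((X @ coef - ys) ** 2)))

xs = np.linspace(0.0, 4.0, 200)
ys = np.exp(xs)  # a target outside any low-degree polynomial manifold

# Static learner: the representational basis {1, x} is fixed forever.
static_basis = [lambda x: np.ones_like(x), lambda x: x]
print("static basis, error:", round(fit_error(static_basis, xs, ys), 3))

# Self-extending learner: persistent error triggers revision of the
# basis itself, not just re-tuning of the existing coefficients.
basis, degree = list(static_basis), 1
while fit_error(basis, xs, ys) > 0.1 and degree < 12:
    degree += 1
    basis.append(lambda x, d=degree: x ** d)  # extend the representation
print("extended to degree", degree, "error:",
      round(fit_error(basis, xs, ys), 4))
```

The self-extending learner remains bounded and biased. What distinguishes it is where failure propagates: into the representation itself, not merely into the parameters.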
5. General Intelligence as Interpretive Capacity
Once stripped of universality assumptions, general intelligence reduces to a small set of structural properties:
Ability to construct and revise world models under error
Ability to transfer structure across domains
Ability to reinterpret goals and representations under ontological shift
Ability to preserve coherence while changing internal semantics
This is not task coverage. It is interpretation‑preserving adaptability.
The difference between specialization and generality is visible in failure modes. A system that interpolates within a fixed ontology responds to error by adjusting parameters while preserving meaning. A system with interpretive capacity responds by questioning whether the task, representation, or goal has been correctly understood at all.
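The contrast can even be written down as an explicit policy. The sketch below is hypothetical (the function names, noise floor, and factor of two are assumptions for the example): a specialized system treats every error as a parameter problem, whereas an interpretive system first asks whether the error is reducible at all under the current representation.

```python
# Two failure modes as an explicit policy: local adjustment versus
# revision of the representation. Illustrative only; requires numpy.
import numpy as np

def best_fit_residual(basis, xs, ys):
    """Lowest achievable RMS error under the current representation."""
    X = np.column_stack([f(xs) for f in basis])
    coef, *_ = np.linalg.lstsq(X, ys, rcond=None)
    return float(np.sqrt(np.mean((X @ coef - ys) ** 2)))

def diagnose(basis, xs, ys, noise_floor):
    """Decide whether an error is local or semantic."""
    if best_fit_residual(basis, xs, ys) <= 2 * noise_floor:
        return "adjust parameters"      # error is local: keep the model
    return "revise representation"      # error is semantic: model is wrong

rng = np.random.default_rng(0)
xs = np.linspace(-1.0, 1.0, 100)
noise = 0.05
ys = np.sin(3 * xs) + rng.normal(0.0, noise, xs.shape)

linear = [lambda x: np.ones_like(x), lambda x: x]
print(diagnose(linear, xs, ys, noise))    # -> "revise representation"

sinusoid = [lambda x: np.sin(3 * x)]
print(diagnose(sinusoid, xs, ys, noise))  # -> "adjust parameters"
```

Deciding that the residual cannot be driven down to the noise floor is precisely the judgment that the model, not the fit, is at fault.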
A calculator executes arithmetic flawlessly, yet when exposed to Gödel numbering or diagonalization, it does not reinterpret what a “number” is; it fails. A human encountering the same construction revises the ontology of number itself. The error is not local; it propagates upward, forcing a change in representation. This is not extrapolation from primate priors. It is model reconstruction under semantic error.
Claims that mathematics, science, or philosophy merely stretch pre‑existing cognitive machinery miss the crucial point. Stretching presupposes a fixed basis. Human cognition routinely abandons the basis altogether—externalizing reasoning into formal systems, symbols, proofs, and tools that operate independently of the cognitive intuitions that inspired them. The training distribution is not escaped by brute extrapolation but by offloading cognition into structures whose correctness no longer depends on human intuition at all.
An agent that can do this—constructing external models that outstrip its native priors while preserving semantic coherence—is not merely encountering unusual inputs. It is altering the space in which problems are posed.
Under this framing, human intelligence is plainly general: bounded, embodied, uneven, yet capable of operating far outside its original training distribution.
6. The Irony
The very act of denying general intelligence in this way presupposes it. Abstracting over definitions, questioning hidden assumptions, and reasoning about counterfactual universes are not narrow skills selected for in the Pleistocene. They are meta‑cognitive operations that exemplify generality.
If such operations do not qualify, the term “intelligence” loses all discriminative power.
7. Conclusion
LeCun’s claim succeeds only by expanding the reference class until every finite concept fails. Under that expansion, the non‑existence of general intelligence is guaranteed and uninformative.
Once the quantifier is restored to any actual universe, general intelligence reappears immediately: not as universality, but as graded, conditional, interpretive agency.
Requiring intelligence to be invariant under arbitrary changes in physics misconstrues what generality can coherently mean. All functional notions are conditional: general‑purpose computers are general relative to computability; general learning algorithms are general relative to data‑generating processes. Intelligence is no exception. Conditioning on a fixed physics is not overfitting—it is the minimal requirement for meaning. Demanding robustness across arbitrary physical laws is equivalent to demanding language that survives arbitrary changes in meaning.
The illusion, if there is one, lies not in human generality but in mistaking a tautology for a theory.