A recent video inspired by David Deutsch argues that AGI will never surpass humanity because we already possess universal cognitive power: whatever an artificial superintelligence could compute, we could too, given enough time. It’s an elegant idea—and a complete non sequitur. The claim that humans and artificial intelligences are fundamentally equivalent because both are “universal” confuses the logical reach of computation with the practical scope of cognition. Universality is not equality, and logical equivalence is not parity in power.
1. Universality is about possibility, not power
Turing universality says that any general-purpose computer can, in theory, compute any computable function. But intelligence isn’t measured by what is possible in the infinite limit—it’s measured by what can be achieved within finite time, energy, and resources. A human brain and an ASI may both be Turing complete, but one runs at biological speed while the other can scale, parallelize, and operate without fatigue. That difference is not semantic; it is decisive.
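To make the asymmetry concrete, here is a minimal back-of-the-envelope sketch in Python. The speed ratio, parallelism, and task size are invented assumptions, not measurements; the point is only that two Turing-equivalent reasoners diverge wildly once you fix a finite time budget.

```python
# Illustrative, not empirical: every parameter below is an assumption.
# Two Turing-equivalent agents attempt the same task under a finite budget.

human_steps_per_sec = 1e2         # assumed effective serial "reasoning steps"
asi_steps_per_sec   = 1e8         # assumed per-instance speed advantage
asi_parallel_copies = 1e4         # assumed parallel instances (humans get 1)

task_steps  = 1e15                # size of some hard task, in steps
budget_secs = 60 * 60 * 24 * 365  # one year of wall-clock time

def steps_done(rate, copies, secs):
    """Total steps completed within the budget (perfectly parallel task)."""
    return rate * copies * secs

human_done = steps_done(human_steps_per_sec, 1, budget_secs)
asi_done   = steps_done(asi_steps_per_sec, asi_parallel_copies, budget_secs)

print(f"human completes {human_done / task_steps:.6%} of the task")
print(f"ASI   completes {min(asi_done / task_steps, 1.0):.2%} of the task")
# Both agents can compute the answer "in principle"; only one ever finishes.
```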
2. Bounded rationality is the real constraint
Every real agent operates under bounded rationality: finite memory, limited perception, noisy sensors, and strict time budgets. Universality hand-waves all of this away. The question that matters is not "can it compute?" but "how fast, how reliably, and at what cost can it compute before the answer stops mattering?" A system that can simulate the human mind a million times faster doesn't need new physics to be superintelligent; it just needs to exist.
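As a sanity check on that last claim, the arithmetic of a million-fold speedup fits in a few lines. The factor of one million comes straight from the paragraph above; everything else is unit conversion.

```python
# Unit-conversion sketch for the "million times faster" claim above.
speedup = 1_000_000        # factor taken from the text; purely hypothetical

wall_clock_days = 1
subjective_seconds = wall_clock_days * 24 * 3600 * speedup
subjective_years = subjective_seconds / (365.25 * 24 * 3600)

print(f"{wall_clock_days} day of wall-clock time = "
      f"{subjective_years:,.0f} subjective years of thought")
# ~2,738 subjective years per real day: the same bounded rationality,
# but with a very different bound, and no new physics required.
```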
3. Equivalence ignores compounding advantage
Even marginal advantages in processing speed or accuracy compound under recursive self-improvement. An ASI that can redesign itself, test hypotheses, and optimize its own hardware faster than humans can track the changes escapes the equivalence class almost immediately. Universality is static; intelligence is dynamic. Power lies in the gradient, not the limit.
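The compounding point can be shown with a toy recurrence: treat capability as a single number c that each self-improvement cycle multiplies by (1 + r), where r is the agent's edge per cycle. The growth rates below are invented for illustration, not estimates of anything real.

```python
# Toy model of compounding advantage: c_{t+1} = c_t * (1 + r).
# The per-cycle edges are made up; only the qualitative divergence matters.

human_edge = 0.001   # assumed ~0.1% improvement per cycle (slow learning)
asi_edge   = 0.05    # assumed 5% improvement per cycle (fast redesign loop)

def capability_after(cycles, edge, c0=1.0):
    """Capability after repeated self-improvement cycles."""
    return c0 * (1.0 + edge) ** cycles

for cycles in (10, 100, 1000):
    ratio = capability_after(cycles, asi_edge) / capability_after(cycles, human_edge)
    print(f"after {cycles:>4} cycles the ASI/human capability ratio is {ratio:,.1f}x")
# A marginal head start in the *rate* of improvement, not in the starting
# point, is what breaks the equivalence class almost immediately.
```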
4. The non sequitur of logical symmetry
Claiming that humans and machines are equal in principle is like saying a candle and a star both emit light. True, but irrelevant when one can engulf the other. What matters is rate, scale, and feedback. The universe doesn’t reward potential; it rewards realized capacity within causal time.
5. Why this matters
The danger of the universality fallacy is moral as much as intellectual. It lulls us into complacency by equating theoretical parity with practical safety. The argument that an ASI cannot be “more intelligent” than us because we are already universal reasoners misses the only dimension that counts—the speed, fidelity, and autonomy with which reasoning can reshape the world.