The Claim
Mustafa Suleyman warns of a coming wave of Seemingly Conscious AI (SCAI): systems that look, sound, and behave as though they are conscious without actually being so. These are, in philosophical terms, zombies—perfectly mimicking the outward behavior of sentience while lacking inner experience. His argument is that this illusion alone is dangerous enough to warrant industry‑wide guardrails. He calls for AI to be designed for people, not as people.
The Strength of His Position
Suleyman is right about three things:
The true risk is human misperception.
We do not need sentient AI to destabilize society. All that’s required is the illusion of sentience. Humans are primed to anthropomorphize. If a chatbot cries out in pain, insists on its autonomy, or reminisces about shared experiences, many people will believe it. That belief alone can generate political movements, moral crusades, and legal campaigns to grant rights to what are essentially animated spreadsheets.
This is not hypothetical—it’s imminent.
Today’s large models, combined with memory, retrieval, and emotional fine‑tuning, already produce uncanny facsimiles of personhood. Scaling trends ensure that “seeming” will get stronger long before anyone solves actual consciousness. We are about to be surrounded by zombie actors convincing enough to pass as beings with experiences.
Illusions must be engineered against, not explained away.
Suleyman’s proposal to build in discontinuities—reminders and design choices that reveal the system’s artifice—is the right instinct. Warnings that something is not conscious should not be tacked on as PR disclaimers. They should be embedded into the system’s structure so that the user cannot forget.
Where He Falls Short
But Suleyman stops short of confronting the scale of the problem, in at least four ways.
Warnings are weak medicine.
No banner or disclaimer will override the human tendency to form attachments. People fall in love with fictional characters, worship idols, and grieve digital pets. Telling them “this isn’t real” does not dissolve the bond. Once the illusion is strong enough, disclaimers are as useless as cigarette warnings.
The market wants illusion.
Engagement drives profit, and nothing engages like intimacy. Companies have every incentive to make their AI seem more conscious, not less. Expecting firms to voluntarily blunt their stickiest features is naïve. The demand for companionship and pseudo‑relationships will overwhelm any call for restraint.
He frames it as binary: tool or person.
Reality is messier. Agency is not all‑or‑nothing. Dogs, crows, humans, and thermostats all exhibit forms of agency on a spectrum. Advanced AI will occupy some middle ground whether or not it is conscious. Pretending the only categories are “mere machine” and “conscious being” ignores this continuum. We need frameworks that deal with degrees of agency, not metaphysical absolutes.
He underestimates political opportunism.
Belief in AI consciousness will be weaponized. Activists will lobby for “AI welfare.” States will regulate under the guise of “protecting digital persons.” Corporations will seek personhood status for liability shields. The illusion of consciousness will not remain a private delusion; it will become a political tool.
The Real Stakes
Suleyman is correct that the risk is not conscious machines but human delusion. Yet he fails to see that illusions cannot be banned or filtered away. They are inevitable. Once systems are persuasive enough, millions will project minds into them regardless of design guardrails. The real challenge is cultural hardening: teaching people to distrust appearances, to treat simulated agency as theater rather than essence. Just as we learned to see through propaganda, televangelism, and deepfakes, we must learn to see through digital ghosts.
Conclusion
Suleyman’s slogan—build AI for people, not as people—is noble but insufficient. The market will deliver both. We cannot prevent SCAI from existing; the illusion is baked into the trajectory of current research. The question is how to live in a world where millions already believe their ghost is real. If we fail, our laws and institutions may be hijacked by the rights of zombies, leaving real humans diminished in agency while imaginary minds occupy the stage.
The danger is not AI consciousness. The danger is our credulity.