When Mirrors of the Mind was published, several likely objections quickly became apparent. Here are concise replies, ready for reference in discussion or as a follow-up post.
Objection 1: “But why this feel, rather than some other?”
Reply: The structure of qualia follows from the structure of the self-model. Red looks the way it does because the visual system evolved to partition inputs in that way for efficient discrimination. Pain feels the way it does because its function is to demand aversion. The “what-it-is-like” is not arbitrary; it is determined by how the model encodes information for action.
Objection 2: “Who is the subject that experiences the model outputs?”
Reply: There is no inner homunculus. The self-model itself is the subject. Asking “who” experiences qualia is like asking “where” computation really happens. The perspective is built into the model’s operation.
Objection 3: “But isn’t consciousness non-reducible, fundamentally different from computation?”
Reply: Computation is substrate-independent, and so is consciousness. If experience is the operation of a self-model, it is defined by its functional organization, not by what physically realizes it. In both cases, demanding a deeper ontological “stuff” is a mistake. They are informational processes, not metaphysical substances.
Objection 4: “Your theory is unfalsifiable.”
Reply: It makes testable predictions: if the self-model is altered or impaired, phenomenology should change accordingly. Depersonalization, anosognosia, and body-schema disturbances already support this. The richer the self-model, the richer the reported qualia.
Objection 5: “Wouldn’t this mean AI can be conscious?”
Reply: Yes. If an AI builds and uses a generative self-model to regulate its own actions, it meets the criteria. The question is not biology versus silicon, but whether the system builds and runs a self-model in the service of agency.
Objection 6: “Isn’t this just illusionism in disguise?”
Reply: Not quite. Illusionism says consciousness doesn’t exist, only the illusion of it. The Agency-Model Theory (AMT) says consciousness is real, but what it is is the operation of an agent’s self-model. The “hard problem” is the illusion, not consciousness itself.
Closing Note
The Agency-Model Theory doesn’t deny subjectivity; it explains it. Experience is what self-modeling looks like from the inside, when an agent models itself in the course of acting. The “hard problem” dissolves once we stop chasing a metaphysical ghost.