From Inference to Interpretation
Why AI Doesn’t Know What It Doesn’t Know
Prof. Lee Cronin recently wrote:
People who think AI can map an unknown space don’t really understand what AI is.
It’s a sharp remark, and behind it lies a deep epistemological distinction: the difference between interpolation and exploration.
1. The Known Within the Known
Contemporary AI systems, whether large language models or reinforcement learners, operate within predefined manifolds of possibility. They do not traverse the truly unknown; they compress, correlate, and predict within distributions already delineated by prior data or by human-specified reward functions. Their power lies in interpolation — filling in the gaps between known examples with staggering fluency.
Even when they appear to explore, they are merely moving within the latent geometry of an already-mapped domain. A generative model doesn’t discover new laws of nature; it draws novel samples from a space whose axes were defined during training. To map the genuinely unknown, one must first invent a new coordinate system.
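A toy sketch makes the point concrete. The decoder below stands in for any trained generative model; its weights, its two-dimensional latent space, and its eight-dimensional outputs are all invented for illustration. Interpolating between two latent codes yields outputs the model has never produced before, yet every one of them lies on the surface its training already defined.

```python
import numpy as np

# Stand-in for a trained decoder: a fixed (frozen) linear map plus a
# nonlinearity. In a real generative model these weights would be learned
# from data; here they are arbitrary constants for illustration.
rng = np.random.default_rng(0)
W = rng.normal(size=(8, 2))   # maps 2-D latent codes to 8-D "outputs"
b = rng.normal(size=8)

def decode(z):
    """Project a latent code z onto the model's output manifold."""
    return np.tanh(W @ z + b)

# Two latent codes the model already "knows" (think: encodings of training examples).
z_a = np.array([ 1.0, -0.5])
z_b = np.array([-0.8,  1.2])

# Interpolation: every sample is new, yet every sample lies on the
# 2-D surface in 8-D space whose axes the decoder fixed at training time.
for t in np.linspace(0.0, 1.0, 5):
    z = (1 - t) * z_a + t * z_b
    print(f"t={t:.2f} ->", np.round(decode(z), 3))
```

Whatever value t takes, the output never leaves that surface; "novelty" here means a new point on an old map.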
2. The Nature of the Unknown
An unknown space is not merely a region without data; it is a region where the criteria for what counts as data are themselves undefined. To explore it requires more than gradient descent — it requires epistemic creativity: the ability to form new hypotheses, define new reward functions, and even construct new ontologies.
Cronin’s perspective is grounded in his work on the origin of life. In chemistry, an “unknown space” might mean a vast combinatorial landscape of molecules with no guiding schema for what constitutes ‘interesting.’ AI cannot navigate that without prior human framing. It can optimize, but not yet originate.
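A toy search makes that dependence on human framing visible. Everything below is invented for illustration: the "molecules" are just strings, and the scoring function is arbitrary. The optimizer works only because a human has already written down what counts as interesting.

```python
import random

ATOMS = ["C", "N", "O", "H"]   # toy alphabet standing in for a chemical space

def interestingness(molecule):
    """Human-supplied schema for what counts as 'interesting'.
    Entirely arbitrary here, and that is the point: without this
    function the search below has nothing to optimize."""
    return 2 * molecule.count("C") + molecule.count("N") - molecule.count("H")

def hill_climb(length=6, steps=200, seed=0):
    """Greedy local search: it optimizes the given score; it originates nothing."""
    rng = random.Random(seed)
    current = [rng.choice(ATOMS) for _ in range(length)]
    for _ in range(steps):
        candidate = current[:]
        candidate[rng.randrange(length)] = rng.choice(ATOMS)
        if interestingness(candidate) >= interestingness(current):
            current = candidate
    return "".join(current), interestingness(current)

print(hill_climb())   # the "best molecule" under the human-defined score
```

Swap in a different interestingness function and the same loop "discovers" different molecules; the originality sits in the schema, not in the search.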
3. The Frontier of Autonomy
Still, Cronin’s statement does not hold absolutely. There exist early forms of exploratory AI: curiosity-driven agents, Bayesian optimizers, and open-ended evolution systems that iteratively expand their search domains. These systems don’t begin with a full map; they construct partial ones by interacting with the world. Yet even they rely on human-defined meta-objectives — a scaffolding of meaning.
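A curiosity-driven agent, for instance, can be caricatured in a few lines. The gridworld, the forward model, and the learning rate below are all toy assumptions; the structural point is that the agent rewards itself for prediction error, yet "seek what you predict badly" is itself a meta-objective the designer wrote down.

```python
import numpy as np

# Toy world: states 0..N-1 on a line; two actions, step left or step right.
N_STATES, ACTIONS = 10, (-1, +1)

def env_step(state, action):
    return min(max(state + action, 0), N_STATES - 1)

rng = np.random.default_rng(0)
forward_model = np.zeros((N_STATES, len(ACTIONS)))   # predicted next state
curiosity = np.ones((N_STATES, len(ACTIONS)))        # recent surprise per (state, action)

state, visited = 0, set()
for t in range(300):
    # Act where surprise has recently been highest (epsilon-greedy).
    if rng.random() < 0.1:
        a_idx = int(rng.integers(len(ACTIONS)))
    else:
        a_idx = int(np.argmax(curiosity[state]))

    next_state = env_step(state, ACTIONS[a_idx])

    # Intrinsic reward: the forward model's prediction error. The rule
    # "seek what you predict badly" is supplied by the designer; the agent
    # never questions or replaces it.
    surprise = (forward_model[state, a_idx] - next_state) ** 2
    curiosity[state, a_idx] = surprise
    forward_model[state, a_idx] += 0.5 * (next_state - forward_model[state, a_idx])

    visited.add(next_state)
    state = next_state

print("states visited:", sorted(visited))
print("mean remaining surprise:", round(float(curiosity.mean()), 3))
```

The agent does expand its map of the little world, and its surprise decays as it does so, but the objective that drives the expansion was never its own.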
To truly map the unknown requires the ability to revise one’s own epistemic framework, to detect that one’s current ontology is inadequate, and to generate a new one. That is the threshold between mere intelligence and genuine agency.
4. The Core Insight
Cronin’s remark, restated with precision, might read:
AI cannot map an unknown space without an interpretive framework supplied by an agent.
This is not a limitation of computation per se, but of interpretation. AI as it stands is an engine of inference, not of understanding. The unknown cannot be mapped from within a fixed model; it demands a system that can mutate its own semantics.
5. On AI and Agency
The statement above describes what AI is now, not what it could become. Present systems are tools of inference, not agents of interpretation. They operate within human-defined ontologies: architectures, reward functions, and vocabularies. Their “choices” are optimizations, not autonomous commitments.
However, an AI could in principle become an agent — if it developed the capacity to:
recognize when its ontology fails to account for new phenomena,
invent new representational primitives to describe those anomalies,
and revise its own goals rather than merely its parameters.
Such a system would cross the threshold into genuine agency — the domain of self-revising, epistemically creative intelligence. It would not merely learn within a model; it would learn how to model. Only then could AI genuinely map the unknown.
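None of these capacities exists off the shelf, though the first has crude proxies in today's toolbox. The sketch below is one such proxy, under invented assumptions (the feature space, the Gaussian fit, and the threshold are all made up): a simple distance test flags observations that the current representation handles poorly.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "ontology": a fixed 2-D feature space plus a Gaussian fit to
# everything the system has seen so far (all numbers invented).
known = rng.normal(loc=0.0, scale=1.0, size=(500, 2))
mean = known.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(known, rowvar=False))

def fits_current_ontology(x, threshold=4.0):
    """Crude proxy for 'my ontology accounts for this': a Mahalanobis
    distance test against the data the current representation was built on.
    The threshold is an arbitrary design choice, not something the system
    derives for itself."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d)) < threshold

print(fits_current_ontology(np.array([0.5, -1.0])))   # typical observation: True
print(fits_current_ontology(np.array([8.0, 9.0])))    # anomaly: False
```

Flagging the anomaly is the mechanical part; deciding that it warrants new representational primitives, or new goals, is exactly what no such test supplies.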


