Understanding Requires Models
Representation, credence, and the architecture of scientific explanation
A recurring theme in both scientific practice and philosophy is the recognition that certainty is not achievable in empirical inquiry. The aspiration for foundational, irrefutable knowledge—familiar from classical rationalism—does not align with how scientific understanding actually progresses. A recent conversation between Sean Carroll and Andrew Jaffe articulates the point in a particularly clear way: all empirical knowledge is mediated by models.
The point can be stated more precisely: cognition and scientific reasoning proceed through representational structures that mediate all contact with the world. Our access is never immediate; it is filtered through models that organize sensory input, impose explanatory structure, and support prediction. Scientific models formalize this process explicitly, yet the underlying cognitive architecture is continuous with ordinary reasoning.
Understanding as model-based cognition
Jaffe emphasizes that without models, no form of cognition—scientific or otherwise—could function. Human beings, from infancy onward, rely on structured expectations about their environment. Infants form rudimentary causal expectations; children revise internal representations through interaction; adults interpret social behavior through predictive heuristics. In each case, the agent interacts not with an unmediated external reality, but with a structured representation of it.
Scientific models extend this process. Newtonian mechanics provides a mathematical account of motion in a particular regime. General relativity offers a deeper geometric model that supersedes Newton’s in domains where curvature and high velocities are relevant. Cosmological analyses, such as those of the cosmic microwave background, rely on models of early-universe processes to extract parameters of interest. In all cases, the observations are interpreted through a structured framework; there is no model-independent interpretation available.
Conditionalism, as articulated in Axio, captures this structure formally: every empirical claim is implicitly conditional on background assumptions. A statement such as “the expansion rate of the universe is H” is shorthand for a conditional statement: if the data, the modeling assumptions, and the parameterization of cosmology are accepted, then the posterior estimate for H takes a specific form.
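As a concrete, deliberately toy illustration of this conditional structure, the following Python sketch computes a posterior for H on a grid, given a flat prior and a Gaussian noise model. Every number in it is illustrative rather than drawn from real data.

```python
import numpy as np

# A minimal sketch of the conditional structure: the posterior for H is
# computed *given* a model, a likelihood, and a prior. All numbers here
# are illustrative, not real cosmological data.

H_grid = np.linspace(50.0, 90.0, 401)      # candidate values of H (km/s/Mpc)

# Modeling assumptions (the "if" part of the conditional claim):
prior = np.ones_like(H_grid)                # flat prior over the grid
data_mean, data_sigma = 70.0, 2.0           # hypothetical measurement summary

# Gaussian likelihood under the assumed noise model
likelihood = np.exp(-0.5 * ((H_grid - data_mean) / data_sigma) ** 2)

# Bayes' rule: posterior is proportional to likelihood times prior
posterior = likelihood * prior
posterior /= np.trapz(posterior, H_grid)    # normalize on the grid

# The headline claim "H = ..." is shorthand for this conditional summary:
H_best = H_grid[np.argmax(posterior)]
print(f"Given these assumptions, the posterior peaks at H = {H_best:.1f} km/s/Mpc")
```

Change any of the conditions, the prior, the noise model, the parameterization, and the "same" claim about H takes a different specific form.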
Models as structured simplifications
Jaffe’s analogy with the London Underground map illustrates the principle. The Tube map omits geographic distance and orientation while preserving the structural relations that matter for navigation. Scientific models function similarly: they preserve the invariants relevant for a domain while simplifying or omitting irrelevant detail.
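The point can be made concrete with a toy graph model in the spirit of the Tube map: only adjacency is retained, geography is discarded entirely, and navigation still succeeds. The station selection below is illustrative, not a faithful slice of the network.

```python
from collections import deque

# A toy Tube-map model: the representation keeps only which stations
# connect, discarding distance and orientation. Stations are illustrative.
tube = {
    "Paddington":   ["Baker Street"],
    "Baker Street": ["Paddington", "King's Cross"],
    "King's Cross": ["Baker Street", "Bank"],
    "Bank":         ["King's Cross"],
}

def route(graph, start, goal):
    """Breadth-first search: navigation needs only the preserved structure."""
    queue, seen = deque([[start]]), {start}
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph[path[-1]]:
            if nxt not in seen:
                seen.add(nxt)
                queue.append(path + [nxt])
    return None

print(route(tube, "Paddington", "Bank"))
# ['Paddington', 'Baker Street', "King's Cross", 'Bank']
```

Nothing about where the stations actually sit survives in this model, yet for the task it serves, it loses nothing that matters.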
Newtonian mechanics omits relativistic corrections and quantum effects, yet accurately captures dynamics for most everyday conditions. General relativity discards Newton’s force-based description in favor of a geometric account that better fits observations at larger scales and higher precision. In each case, the model is defined by the choices of what to preserve and what to approximate.
Thus, “the map is not the territory” is less informative than the more precise point: the adequacy of a model is domain-specific, determined by which structural features it preserves and which it intentionally distorts.
Deduction, abduction, induction, and probabilistic inference
Carroll and Jaffe distinguish deduction from induction. Deductive reasoning yields necessary conclusions from stipulated premises; it explores the logical consequences of assumptions already in place. Yet much of scientific reasoning does not begin with established models. The generation of candidate explanations typically proceeds by abduction—inference to the structure that would render observations intelligible. Abduction supplies the conceptual possibilities; deduction develops their implications; induction evaluates how well these implications align with observation.
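A schematic sketch of this cycle, on fabricated toy data: candidate model families stand in for abduction, their fitted predictions for deduction, and an error score for induction.

```python
import numpy as np

# A schematic rendering of the cycle on toy data. "Abduction" proposes
# candidate model families; "deduction" derives their predictions;
# "induction" scores those predictions against observation.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.1, 1.1, 3.9, 9.2, 15.8])   # fabricated toy observations

# Abduction: candidate structures that would make the data intelligible
candidates = {
    "linear":    lambda xs: np.polyval(np.polyfit(xs, y, 1), xs),
    "quadratic": lambda xs: np.polyval(np.polyfit(xs, y, 2), xs),
}

# Deduction: work out each candidate's predictions
predictions = {name: f(x) for name, f in candidates.items()}

# Induction: evaluate alignment with observation (mean squared error)
scores = {name: np.mean((pred - y) ** 2) for name, pred in predictions.items()}
print(min(scores, key=scores.get))   # 'quadratic' fits this toy data best
```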
Scientific inference is therefore a matter of updating credences—degrees of belief—rather than measuring any objective feature of the world. Competing models make different predictions; observations shift the relative probabilities of these models. The inverse-square law of gravity is not proven in a deductive sense; it is strongly supported because it renders observed motions highly probable, whereas alternative models render them improbable. General relativity is preferred over Newtonian gravity because it more accurately predicts phenomena such as Mercury’s perihelion shift and gravitational lensing.
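The following sketch shows such a credence update for the Mercury case. The perihelion numbers are approximate textbook values (in arcseconds per century) and the measurement uncertainty is illustrative.

```python
import numpy as np

# Updating credences between two models given one observation. The
# predictions are approximate textbook values for Mercury's anomalous
# perihelion precession; the uncertainty is illustrative.
obs, sigma = 43.1, 0.5                 # observed anomaly (arcsec/century)
pred = {"newton": 0.0, "gr": 43.0}     # each model's predicted anomaly

prior = {"newton": 0.5, "gr": 0.5}     # start with equal credence

# Likelihood of the observation under each model (Gaussian noise model)
like = {m: np.exp(-0.5 * ((obs - p) / sigma) ** 2) for m, p in pred.items()}

# Bayes' rule: posterior credence is proportional to prior times likelihood
evidence = sum(prior[m] * like[m] for m in prior)
posterior = {m: prior[m] * like[m] / evidence for m in prior}
print(posterior)   # essentially all credence flows to "gr"
```

Neither model is "proven"; one simply renders the observation vastly more probable, and the credences shift accordingly.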
This structure aligns with Conditionalism: truth claims in empirical domains amount to coherence within a model that successfully organizes and predicts experience.
Probability as a framework for model evaluation
Jaffe presents a Bayesian perspective: probabilities quantify rational degrees of belief given a model and background information. Statements such as “the Hubble constant is 67 ± 3 km/s/Mpc” summarize a posterior distribution conditioned on a cosmological model, observed data, and assumptions about measurement uncertainties.
Frequentist error bars can be mathematically correct yet conceptually indirect, since they characterize the long-run behavior of hypothetical repeated experiments rather than providing a direct statement about parameter values. In contrast, Bayesian inference answers the question researchers actually pose: given a model and data, what parameter values should we consider plausible?
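The contrast can be simulated directly. In the simple case of a normal mean with known variance and a flat prior, the two intervals coincide numerically while answering different questions; the sketch below, with invented parameters, prints the Bayesian interval and then estimates the frequentist procedure's long-run coverage.

```python
import numpy as np

rng = np.random.default_rng(0)

# Normal mean with known sigma and a flat prior: the 95% credible interval
# and the 95% confidence interval are numerically identical here, but they
# answer different questions. All numbers are simulated for illustration.
true_mu, sigma, n = 10.0, 2.0, 25
data = rng.normal(true_mu, sigma, n)
xbar, se = data.mean(), sigma / np.sqrt(n)

# Bayesian: a direct statement about the parameter, given model and data
credible = (xbar - 1.96 * se, xbar + 1.96 * se)
print("95% credible interval:", credible)

# Frequentist: a statement about the procedure's behavior over repetitions
covered = 0
for _ in range(10_000):
    xb = rng.normal(true_mu, sigma, n).mean()
    covered += (xb - 1.96 * se) <= true_mu <= (xb + 1.96 * se)
print("long-run coverage of the interval procedure:", covered / 10_000)
```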
In Axio terminology, probability is the quantitative expression of conditional truth: it reflects how an agent distributes credence across competing models, rather than any objective, mind-independent measure.
Coarse-graining, entropy, and accessible information
Statistical mechanics offers another example of model-dependence. A gas contains an astronomical number of degrees of freedom. The microstate evolves deterministically, but is computationally inaccessible. Instead, we work with coarse-grained macrostate descriptions—temperature, pressure, density—that define a probability distribution over microstates. Thermodynamic laws, including the increase of entropy, apply to these coarse-grained descriptions.
Entropy thus encodes the limitations of the model: how much work can be extracted given the information available at the chosen level of description. If finer-grained information were available, more work could be extracted. The second law is therefore better understood as a constraint that arises from the chosen level of description.
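A minimal numerical illustration, with arbitrary sizes: treat entropy as the number of bits of microstate information missing at a given level of description, and compare descriptions of different resolution.

```python
import math

# Entropy as the information missing about the microstate, given a
# coarse-grained description. One particle can be in any of M fine cells;
# the model only records which coarse region it occupies. All sizes are
# arbitrary choices for illustration.
M = 1024                      # fine-grained cells (the "microstate" space)
N = 100                       # independent particles

for regions in (1, 2, 16, M):
    # Knowing the region leaves M/regions candidate microstates per particle
    missing_bits = N * math.log2(M / regions)
    print(f"{regions:5d} regions tracked -> {missing_bits:7.1f} bits unknown")

# Coarser description (fewer regions) -> higher entropy -> less of the
# microstate's information is available to exploit as work.
```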
Quantum mechanics and the role of interpretations
Quantum theory integrates probability at a fundamental level. The wavefunction encodes probability amplitudes for measurement outcomes. Interpretations of quantum mechanics—Copenhagen, Many Worlds, QBism—function as meta-models that connect the formalism to an ontology.
Although these models differ in their conceptual commitments, they agree on operational predictions (e.g., the Born rule). From a Conditionalist perspective, these interpretive differences reflect different modeling choices for organizing the same probabilistic structure. Understanding quantum mechanics therefore involves clarifying how the formal model relates to measurement and expectation, rather than identifying an unmediated ontological substrate.
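That operational agreement is easy to exhibit: the sketch below applies the Born rule to an arbitrary illustrative qubit state, producing the probabilities every interpretation must recover.

```python
import numpy as np

# The operational content the interpretations share: the Born rule.
# Probabilities of measurement outcomes are squared amplitudes; the
# state here is an arbitrary illustrative qubit.
psi = np.array([1.0, 1.0j]) / np.sqrt(2)   # |psi> = (|0> + i|1>)/sqrt(2)

probs = np.abs(psi) ** 2                   # Born rule: p(k) = |<k|psi>|^2
print(probs)                               # [0.5 0.5] in the computational basis

# Copenhagen, Many Worlds, and QBism tell different stories about what
# these numbers *are*, but all recover this same assignment.
```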
Cosmology as model-dependent inference
Jaffe’s work in cosmology illustrates the unavoidable dependence on models. Cosmological parameters are inferred from observational data—such as the CMB—via models of early-universe dynamics, matter content, and statistical structure. The power spectrum is computed under model assumptions and compared to observational data; parameter estimates depend on these assumptions.
The “Hubble tension,” the discrepancy between CMB-inferred expansion rates and local distance-ladder measurements, is fundamentally a disagreement between models. Each analysis embeds assumptions about astrophysics, instrument calibration, cosmology, and priors. Resolving the tension requires identifying which modeling assumptions are inadequate.
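The scale of the disagreement can be quantified with a simple Gaussian tension metric. The values below are approximately the published ones (the Planck CMB fit and the SH0ES distance ladder) and should be read as indicative rather than exact.

```python
import math

# Quantifying the tension between two model-dependent inferences of H0.
# Values are approximately the published ones; treat them as indicative.
h_cmb, err_cmb = 67.4, 0.5      # km/s/Mpc, conditional on LCDM + CMB pipeline
h_local, err_local = 73.0, 1.0  # km/s/Mpc, conditional on distance-ladder model

# Gaussian tension metric: difference in units of combined uncertainty
tension = abs(h_local - h_cmb) / math.sqrt(err_cmb**2 + err_local**2)
print(f"tension = {tension:.1f} sigma")  # roughly the quoted ~5-sigma discrepancy
```

The metric itself presupposes a model (Gaussian, independent errors); even the statement of the tension is conditional.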
Even discussions of the multiverse and anthropic selection can be understood as debates about model classes and measures over parameter spaces, rather than as direct claims about unobservable realities.
Understanding as model construction
Across these domains, a consistent structure appears: cognition, scientific reasoning, and interpretation all operate through model construction. There is no direct access to reality independent of representational frameworks. What we call understanding is the capacity to formulate models that successfully predict, integrate, and explain observations.
In this sense, the model is not merely a tool for representing the world. It is the framework within which the concept of “the world” becomes intelligible. Conditionalism captures this explicitly: empirical truth is conditional coherence. Scientific progress consists in constructing and refining models whose coherence with experience surpasses that of their predecessors.
Understanding, therefore, consists in constructing and maintaining models that organize experience effectively. The model provides the structure within which meaning and explanation arise.