In our ongoing exploration of Bayesian epistemology and decision theory, a critical conceptual distinction has emerged that deserves explicit emphasis: the difference between Measure and Credence. Both obey the same mathematical framework (the probability axioms and Bayesian updating) but differ fundamentally in interpretation.
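To see the shared mathematics concretely, here is a minimal Python sketch; the function and the coin example are purely illustrative, not drawn from the discussion above. The same update rule applies whether its inputs are read as objective Measure or as subjective Credence.

```python
def bayes_update(prior: float, likelihood: float, likelihood_alt: float) -> float:
    """Posterior probability of hypothesis H after observing evidence E.

    prior           P(H)         -- read either as Measure or as Credence
    likelihood      P(E | H)
    likelihood_alt  P(E | not H)
    """
    evidence = prior * likelihood + (1 - prior) * likelihood_alt
    return prior * likelihood / evidence

# Same arithmetic, two interpretations:
# 1) Measure-style reading: a physical process yields this outcome with objective chance 0.5.
p_measure = bayes_update(prior=0.5, likelihood=0.9, likelihood_alt=0.1)

# 2) Credence-style reading: my degree of belief in a theory, updated on the same data.
p_credence = bayes_update(prior=0.5, likelihood=0.9, likelihood_alt=0.1)

assert p_measure == p_credence  # identical mathematics, different interpretation
print(p_measure)  # 0.9
```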
Measure represents an objective, physical probability embedded in the structure of reality, particularly within frameworks such as the Quantum Branching Universe (QBU). It exists independently of human observers or beliefs, reflecting the intrinsic probabilities determined by physical laws and quantum mechanics.
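As a rough illustration of what such observer-independent Measure could look like, the sketch below assumes, consistent with the QBU framing though not stated above, that branch Measure is given by the standard Born weights, i.e. the squared magnitudes of branch amplitudes; the specific amplitudes are invented.

```python
# Hypothetical branch amplitudes after a quantum "split" (illustrative values only).
amplitudes = [0.6 + 0.0j, 0.8j]

# Born-rule Measure of each branch: squared magnitude of its amplitude, normalized.
measures = [abs(a) ** 2 for a in amplitudes]
total = sum(measures)
measures = [m / total for m in measures]

print(measures)  # ~[0.36, 0.64], fixed by the physics, not by anyone's beliefs
```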
Credence, in contrast, is entirely subjective. It quantifies epistemic uncertainty—our rational assessment of how confident we are given incomplete knowledge. Credence extends well beyond empirical contexts to include uncertainty about theories, logical propositions, conceptual frameworks, and models themselves.
This distinction matters profoundly. When we assign Credences to scientific theories, we do not imply that those theories are partially correct in some objective sense. Rather, we are explicitly quantifying our rational uncertainty about which theories accurately reflect reality. Confusion arises precisely when Credence is mistakenly treated as an intrinsic probability belonging to the theories themselves, an error effectively critiqued by Deutsch and Hall.
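Here is a hedged sketch of what quantifying that uncertainty might look like in practice; the theory names and numbers are placeholders. The posterior weights describe our epistemic state, not a degree of truth residing in the theories.

```python
# Credences over mutually exclusive rival theories (placeholder names and numbers).
credences = {"theory_A": 0.5, "theory_B": 0.3, "theory_C": 0.2}

# How strongly each theory predicted an observed piece of evidence.
likelihoods = {"theory_A": 0.10, "theory_B": 0.60, "theory_C": 0.30}

# Bayesian update: reweight each credence by predictive success, then renormalize.
unnormalized = {t: credences[t] * likelihoods[t] for t in credences}
total = sum(unnormalized.values())
posterior = {t: w / total for t, w in unnormalized.items()}

print(posterior)
# ~{'theory_A': 0.17, 'theory_B': 0.62, 'theory_C': 0.21}
# These weights quantify our uncertainty about which theory is right;
# none of them says any theory is "17% true".
```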
Explicitly distinguishing Measure and Credence helps clarify our thinking, reinforces our commitment to Conditionalism (the notion that truth claims only have meaning within specific interpretative frameworks), and ensures our decision-making frameworks, such as Effective Decision Theory (EDT), employ probabilities coherently.
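As one way a decision framework might keep the two probability types distinct yet combine them coherently (a sketch under my own assumptions, not a statement of EDT's actual machinery), we can weight each candidate model's internal Measure of an outcome by our Credence in that model, then compute expected utility from the result.

```python
# Credence: subjective weight on each candidate model (illustrative values).
credence = {"model_1": 0.7, "model_2": 0.3}

# Measure: each model's own objective probability for the outcome "success".
measure_of_success = {"model_1": 0.9, "model_2": 0.2}

# Utilities of acting, by outcome (illustrative).
utility = {"success": 100.0, "failure": -20.0}

# Law of total probability: P(success) = sum over models of Credence(model) * Measure(success | model).
p_success = sum(credence[m] * measure_of_success[m] for m in credence)

expected_utility = p_success * utility["success"] + (1 - p_success) * utility["failure"]

print(p_success)        # ~0.69
print(expected_utility) # ~62.8
```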
In short: probabilities always obey the same mathematical rules, but recognizing whether we are speaking of objective Measure or subjective Credence dramatically influences how we reason, decide, and philosophically interpret the world.
As we proceed, let's maintain clarity about these vital distinctions.