Models, Beliefs, and Agents
Clarifying representational levels in cognition and control
The previous essays established two claims that appear, at first glance, to be in tension. One claim is that understanding and control require models. The other is that beliefs are properties of the models we construct of agents, not necessarily properties instantiated within the agents themselves. The first thesis asserts that agents must embody models; the second, that agents may lack beliefs. It is natural to ask whether these positions conflict.
They do not. The apparent tension dissolves once we distinguish between two different explanatory levels at which “models” operate: internal representational structures and external interpretive attributions.
Internal models: structure required for regulation
When a system understands or regulates its environment, it must embody internal structure that preserves relevant distinctions within that environment. This follows from the Good Regulator Theorem: any regulator that achieves reliable control must incorporate a representation, explicit or implicit, of the system it regulates. These representations need not be symbolic, conceptual, or linguistically expressible. They may be realised in molecular pathways, neural circuits, or dynamical couplings. Their role is functional rather than propositional. They support prediction, discrimination, and context-appropriate intervention.
Such internal structures qualify as models in the operational sense. They encode aspects of the world’s causal organisation. They are the substrate of understanding and control.
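The operational sense of "model" can be made concrete with a minimal sketch (all names here are hypothetical, chosen for illustration): a regulator keeps its target variable under control only insofar as its internal mapping mirrors the dynamics it acts on, whether or not anything propositional is involved.

```python
def plant(disturbance: float, action: float) -> float:
    """The regulated system: its output drifts with the disturbance
    unless the regulator's action cancels it."""
    return disturbance + action

def good_regulator(disturbance: float) -> float:
    """Implicitly models the plant: its mapping inverts the relation
    output = disturbance + action. The model is realised as structure
    in the policy itself -- functional, not propositional."""
    return -disturbance

def blind_regulator(disturbance: float) -> float:
    """Preserves no distinctions among disturbances, so it cannot regulate."""
    return 0.0

disturbances = [-2.0, 0.5, 3.0]
good_error = sum(abs(plant(d, good_regulator(d))) for d in disturbances)
blind_error = sum(abs(plant(d, blind_regulator(d))) for d in disturbances)
print(good_error, blind_error)  # the good regulator's error is zero
```

The point of the sketch is structural: the successful policy succeeds precisely because it preserves the relevant distinctions in the environment, which is all the Good Regulator Theorem requires of a "model".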
External models: the intentional stance
In contrast, when we as observers attribute beliefs to agents, we do so within our own representational frameworks. Belief, in this interpretive sense, is not a physical or computational component of an agent. It is a feature of a model that we construct to explain and predict the agent’s behaviour. The attribution of belief is justified when it yields accurate and economical predictions of the agent’s actions.
On this view, a system may regulate its environment without instantiating beliefs. A thermostat contains a mapping from temperature readings to actions. This constitutes a model of the relevant environmental dynamics, yet no interpretation requires that the thermostat believe the room is cold. Similarly, biological organisms often implement sophisticated regulatory mechanisms without representing their internal models as beliefs.
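The thermostat case can likewise be sketched in a few lines (the setpoint and readings are assumed for illustration): the device is nothing more than a mapping from readings to actions, and that mapping is its entire "model" of the environment.

```python
SETPOINT = 20.0  # degrees Celsius (assumed for the example)

def thermostat(reading: float) -> str:
    """A bare mapping from temperature readings to actions. It tracks one
    relevant environmental distinction (below vs. at-or-above setpoint),
    which makes it a model in the operational sense -- yet nothing in it
    encodes a proposition such as 'the room is cold'."""
    return "heat_on" if reading < SETPOINT else "heat_off"

actions = [thermostat(t) for t in (15.0, 22.0)]
print(actions)  # → ['heat_on', 'heat_off']
# Belief talk ("it thinks the room is cold") belongs to our description
# of this mapping, not to any component of the mapping itself.
```

The contrast is exactly the one drawn above: the mapping is internal to the system; the belief attribution, if we make one, is a feature of our interpretive model of it.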
Distinct roles for representation
The term “model” therefore plays two roles. In the cybernetic-structural sense, it refers to the internal organisation that enables an agent to discriminate states, anticipate outcomes, and act coherently. In the intentional-interpretive sense, it refers to a descriptive tool we use to characterise the agent’s behaviour in conceptual terms.
These two senses should not be conflated. Internal models are constitutive of agency; external models are explanatory constructs. An agent must possess the former but need not instantiate the latter.
Conditionalism and representational levels
Conditionalism accommodates this distinction naturally. Truth claims about agents depend on the background model used to interpret them. When describing an agent’s internal dynamics, the relevant model is the system’s functional organisation. When characterising the agent in cognitive terms, the relevant model is the intentional framework adopted by the observer. Both are models, but they belong to different representational levels.
Conclusion
Understanding and control require internal models. Beliefs, however, arise only within an observer’s higher-level model of the agent. These two claims refer to different explanatory layers and are therefore compatible. By distinguishing representational structures that enable action from interpretive constructs that explain action, we preserve the coherence of the broader philosophical framework and clarify the architecture of agency.