Lookup Tables and Agents
A study of minimal models in biological and artificial control
This framework clarifies the distinction between understanding, control, and belief, especially when examining agents whose behaviour can be captured by simple
condition–action mappings. In this context, a lookup table refers to a fixed mapping
from discrete conditions to corresponding actions. It introduces no abstraction or
compression; it simply specifies which response is associated with which observed state.
Systems of this kind raise a useful question:
When is a lookup table an adequate model of an agent’s behaviour, and what does this reveal about the architecture of agency?
This question becomes particularly concrete in biological examples where behaviour can be described—or in some cases explained—by lookup-table-like structures.
This discussion gains traction from a familiar biological example: the behaviour of the Sphex wasp, popularised by Dawkins and others as a paradigm of rigid, pre‑programmed action. Although the wasp is a living organism shaped by natural selection, its behavioural repertoire can be modelled with surprising fidelity using a compact set of stimulus–response rules. Examining such systems clarifies the boundary between minimal internal models and the richer representational structures associated with flexible agency.
The Sphex wasp as a minimal controller
The behaviour of the Sphex wasp exhibits remarkable invariance. When provisioning a nest, the wasp drags a paralysed cricket to the burrow entrance, leaves it outside, enters to inspect the nest, and then retrieves the cricket. If the cricket is moved slightly while the wasp is inside, the wasp repeats the entire sequence from the beginning, and it will do so each time the intervention is repeated, apparently without limit. The organism executes a fixed action pattern with little evidence of contextual integration or internal state maintenance.
From a modelling perspective, this behaviour is well captured by a finite set of condition–action mappings:
If the prey is near the threshold, drag it to the threshold.
If the prey is at the threshold and the nest is uninspected, enter the nest.
If the nest has been inspected, retrieve the prey.
These relations form a minimal state–transition table. No internal representation of the prey, the environment, or the task is required beyond this mapping. The wasp’s behaviour is coherent within the ecological niche in which it evolved, and natural selection has furnished a controller that satisfies the demands of that environment without requiring a general model of it.
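The condition–action table above can be sketched as a dictionary-based controller. This is a toy model, not a claim about wasp neurophysiology: the state names, the `step` function, and the intervention loop are illustrative assumptions.

```python
# Minimal sketch of the Sphex routine as a pure condition–action table.
# State names and the intervention loop are illustrative assumptions.

WASP_TABLE = {
    ("prey_near_threshold", "nest_uninspected"): "drag_to_threshold",
    ("prey_at_threshold", "nest_uninspected"): "enter_and_inspect",
    ("prey_at_threshold", "nest_inspected"): "retrieve_prey",
}

def step(prey, nest):
    """Pure lookup: no memory, no abstraction, no count of past cycles."""
    return WASP_TABLE[(prey, nest)]

# Moving the prey during inspection resets the triggering condition,
# so the controller re-issues the whole sequence every time.
trace = []
prey, nest = "prey_near_threshold", "nest_uninspected"
for _ in range(2):                 # two experimenter interventions
    trace.append(step(prey, nest))     # drag_to_threshold
    prey = "prey_at_threshold"
    trace.append(step(prey, nest))     # enter_and_inspect
    prey = "prey_near_threshold"       # experimenter moves the prey
print(trace)
```

Note that `retrieve_prey` never fires while the experimenter keeps intervening: nothing in the table records that the cycle has already run, which is precisely what makes it a lookup table rather than a stateful model.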
Lookup tables as adequate models in limited domains
The Sphex wasp is not unique. Many biological systems operate with regulatory architectures that are effectively lookup tables:
Bacterial chemotaxis implements a mapping from changes in chemical concentration to motor behaviour.
Fixed‑action patterns in birds and fish are triggered by simple stimulus conditions.
Plant tropisms arise from local chemical gradients and differential growth responses.
In each case, the underlying regulatory mechanism is structurally simple. The environment supplies limited, well‑structured cues; the organism’s behaviour turns on a small number of discrete distinctions; and natural selection can optimise these mappings without requiring an internal model in the cognitive sense.
These systems demonstrate a general point: a lookup table can be an adequate model of an agent when the relevant state space is small, the environment is stable, and the behavioural repertoire is rigid. Under these conditions, the distinctions that matter for successful action are few, and a controller can reliably succeed by encoding only those distinctions.
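As a minimal illustration of such an adequate table, chemotaxis-style control can be written as a single binary distinction: rising attractant concentration sustains a run, falling concentration triggers a tumble. The readings and the zero threshold are invented for illustration; real bacterial chemotaxis involves temporal filtering this sketch omits.

```python
# Toy chemotaxis controller: a two-row lookup table is adequate because
# the environment offers only one distinction that matters for success.
# The concentration readings and zero threshold are assumptions.

def chemotaxis_action(delta_concentration):
    # The entire behavioural repertoire: one binary condition.
    return "run" if delta_concentration > 0 else "tumble"

readings = [0.1, 0.3, 0.2, 0.5]        # attractant concentration over time
actions = [chemotaxis_action(later - earlier)
           for earlier, later in zip(readings, readings[1:])]
print(actions)
```

Because the relevant state space collapses to the sign of one quantity, the table and the environment jointly suffice for gradient climbing; no representation of the gradient itself is needed.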
Why lookup tables fail in complex environments
The limitations of lookup‑table controllers become evident when we move to environments that demand flexibility, counterfactual reasoning, or long‑horizon prediction. A controller that merely maps local conditions to actions cannot:
integrate information across time,
evaluate unexperienced contingencies,
adapt to novel contexts,
or revise its strategies in response to unexpected dynamics.
Human cognition, markets, political institutions, and complex ecosystems all exhibit non‑trivial structure that cannot be captured by finite rule tables. These systems require controllers that construct and update internal models that reflect underlying causal relations.
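The brittleness claim can be made concrete with a toy comparison: a finite table simply has no output for any condition outside its enumerated state space, while even a crude parametric model still yields an action on novel input. The states, actions, and the fitted `trend` offset below are all hypothetical.

```python
# Sketch of lookup-table brittleness. A table exhaustive for a tiny
# world fails silently on novel states; a one-parameter "internal
# model" extrapolates. All values here are illustrative assumptions.

table = {0: "buy", 1: "hold", 2: "sell"}     # exhaustive for states 0..2

def table_policy(state):
    return table.get(state)                  # None on any unseen state

def model_policy(state):
    trend = state - 1                        # hypothetical fitted offset
    return "buy" if trend < 0 else "hold" if trend == 0 else "sell"

# On experienced states the two agree; only the model covers state 7.
assert model_policy(0) == table_policy(0)
assert table_policy(7) is None
assert model_policy(7) == "sell"
```

The difference is not size but structure: enlarging the table never produces behaviour for states it does not enumerate, whereas the parametric model encodes a relation that applies beyond the data used to fit it.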
The cybernetic distinction: models without beliefs
The Good Regulator Theorem formalises the relation between representation and control: any regulator that achieves reliable performance must embody a model of the system it regulates. For simple organisms and devices, this model may reduce to a lookup table. The table is a model because it functionally preserves the distinctions necessary for appropriate action.
Crucially, this does not imply that such agents possess beliefs. Belief is a concept from the intentional stance—a property attributed by observers to make sense of behaviour. A lookup‑table controller can regulate a process without instantiating anything resembling a propositional attitude. Its internal model is structural rather than conceptual.
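A thermostat makes this point in miniature. Under assumed linear room dynamics, a two-entry table regulates temperature; in the Good Regulator sense it models the room by preserving exactly the one distinction that matters for control (too cold or not), while instantiating nothing like a propositional attitude.

```python
# A two-entry regulator that "models" the room only structurally:
# it preserves the single distinction needed for control.
# The setpoint and heating/cooling rates are assumed dynamics.

SETPOINT = 20.0

def regulator(temp):
    return "heat_on" if temp < SETPOINT else "heat_off"

temp = 17.0
for _ in range(20):
    temp += 0.8 if regulator(temp) == "heat_on" else -0.3
print(round(temp, 1))   # settles into a narrow band around the setpoint
```

Nothing in the regulator is about the room in any intentional sense; the mapping merely co-varies with the regulated variable in the way successful control requires.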
Implications for agency and artificial systems
Understanding when lookup tables suffice—and when they do not—clarifies the architecture of agency. Minimal regulators succeed in narrow domains because the modelling burden is low. More sophisticated agents must construct richer internal models to cope with complex, variable environments.
This distinction also illuminates the behaviour of artificial systems. Simple embedded controllers resemble biological lookup tables. Large language models, by contrast, display flexible generative behaviour that cannot be captured by a finite rule set; their internal representations support generalisation beyond experienced data and therefore instantiate a different category of model.
Conclusion
Lookup‑table controllers illustrate the lower bound of model‑based behaviour. They regulate effectively in limited, structured environments because the relevant distinctions are few and fixed. As environments grow more complex, agents require richer internal models to support prediction, adaptation, and intentional action. Recognising this gradient helps clarify how representation, control, and agency interrelate across biological and artificial systems.