A short note on intelligence: Human (HI) and Artificial (AI)
In the developing section, looking at the nature of HI and AI, there is a set of interactive graphics, including this one about picking up a cup of coffee. I don’t have a video of picking up a coffee cup, but this Orange Peel movie illustrates active inference in action. In the slow motion you will see the inferential testing and modelling process at work in the reach-grasp sequences. Keep watching: there are two hand-to-peel movements. The hand belongs to a woman called Sara.
The human intelligence system does not merely process inputs and produce outputs; it initiates. Volition -- the capacity to act from internal states, to pursue goals not determined by the immediate stimulus, to delay, redirect, or refuse response -- is a property of biological intelligence that AI systems do not, at the moment, possess in a structurally equivalent sense. A human intelligence is never merely waiting to be invoked; it pursues ends of its own, with biological, social, and biographical outcomes at stake. The phrase ‘at the moment’ is deliberately placed: this paper does not treat the absence of agency in current AI systems as a permanent architectural constraint, and the question of what conditions would be required for genuinely agentic AI is addressed in the sections on AGI. For the analysis of current systems, the distinction is real and should not be dissolved by the loose use of the word ‘intelligence’.
These different intelligence regimes share a computational family resemblance; a common ground that makes their comparison possible. Both are built on network architectures, with networks serving as the basis of computation. But resemblance at the level of mechanism does not entail equivalence, and establishing precisely where the architectures align and where they fundamentally part company requires a principled analytical framework. This paper takes a process view - and a mathematical one. The framework it uses is active inference, developed by Karl Friston and colleagues from variational calculus and Bayesian probability theory. Active inference proposes that any self-organising system capable of persisting over time can be understood as continuously minimising the gap between what it predicts about its sensory inputs and what it actually receives - a process of continuous model-building and model-updating within the perception-action cycle. It is used here not simply because it is available, but because it allows biological organisms, engineered systems, and hybrid combinations of the two to be held in the same analytical space while preserving the structural differences between them. A fuller treatment of the framework, its mathematical foundations, and its explanatory scope is given in section 7; what matters here is that it provides a vocabulary precise enough to compare human and machine intelligence without collapsing one into the other.[1]
[1] Karl Friston, 'The free-energy principle: a unified brain theory?', Nature Reviews Neuroscience, 11:2 (2010), pp. 127–138; Karl Friston et al., 'Active inference: a process theory', Neural Computation, 29:1 (2017), pp. 1–49.
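The core idea - a system persisting by continuously shrinking the gap between predicted and received sensory input - can be sketched in a few lines of code. This is a deliberately minimal toy, not Friston’s variational formulation: it models a single scalar belief updated by gradient descent on prediction error, and the names (`active_inference_step`, `hidden_cause`, `learning_rate`) are illustrative inventions, not terms from the literature. A full active inference agent would also act on the world to make observations conform to its predictions; here only the perceptual half of the perception-action cycle is shown.

```python
def active_inference_step(belief, observation, learning_rate=0.1):
    """One cycle of model-updating: the belief (the system's prediction
    of its sensory input) moves toward the observation, reducing the
    gap between what is predicted and what is received."""
    prediction_error = observation - belief
    return belief + learning_rate * prediction_error

# A fixed hidden cause in the world that the system must come to model.
hidden_cause = 5.0
belief = 0.0  # the system starts with an uninformed prediction

errors = []
for _ in range(100):
    observation = hidden_cause  # noiseless sensing, for simplicity
    errors.append(abs(observation - belief))
    belief = active_inference_step(belief, observation)

# Prediction error shrinks as the internal model converges on the cause.
print(errors[0], errors[-1])
```

Even this toy shows the structural point made above: the system is not mapping inputs to outputs but maintaining and revising an internal model against a stream of evidence.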