From OODA to Active Inference: Rethinking Safety in an Entropic World
In this guest contribution, Dr. David Slater unifies Boyd’s OODA "loop" and Friston’s Free Energy Principle to reframe safety, intelligence, and organizational adaptability.
INTRODUCTION: A STRATEGIC INVITATION TO THE EDGE OF ORDER
The paper you’re about to read didn’t arrive by accident. It landed in our inbox hours after we recorded our No Way Out podcast—exactly when it needed to. Dr. David Slater’s Entropy, Equilibrium, and the Evolution of Control isn’t just a contribution to the safety field—it’s a signal flare for thinkers operating at the bleeding edge of complexity, cognition, and command.
We’re grateful to Dr. Slater for granting us permission to republish his work in full. And we’re republishing it here on The Whirl of Reorientation because it does what most writing on safety, strategy, and systems fails to do: it fuses the physical with the cognitive, and in doing so, reframes survival itself—not as protection from disorder, but as the capacity to dance with it.
Slater doesn’t just update the field of safety science—he detonates its outdated assumptions. His synthesis of thermodynamics, Friston’s Free Energy Principle, chaos theory, and Boyd’s Destruction and Creation is not academic ornamentation. It’s a blueprint for cognitive maneuver in hostile, high-entropy environments. His thesis demands that we stop treating safety as a compliance regime or a linear checklist. Instead, he shows us that safety is what life does—an ongoing act of adaptive equilibrium-seeking in a universe that promises decay.
This is where most practitioners get Boyd wrong. For years, the OODA “loop” has been flattened into a four-step flowchart for decision-making. That caricature neutered its strategic utility. But Slater restores the original voltage. He shows us that the OODA “loop” is not a process—it’s a living system. One that pulses with the same recursive, predictive logic that governs our own physiology. One that mirrors Friston’s FEP at the level of neurons, organisms, and networks.
The takeaway? The real OODA “loop” is not a way to “make decisions.” It’s how adaptive intelligences persist.
This is not another “Safety III” rebrand. It’s a hard reset. A refusal to cling to linear frameworks in a world shaped by nonlinear feedback, black swans, and cognitive overload. If you’re still managing risk the same way you did a decade ago, you’re not managing risk—you’re accelerating collapse.
And here’s the deeper cut: as Active Inference AI and spatial computing emerge from the lab and into operational systems, the difference between human and machine cognition will no longer be philosophical. It will be strategic. Understanding how LLMs and autonomous systems navigate uncertainty requires the same principles that govern your own nervous system. The only question is whether your models are brittle—or built to adapt.
Slater’s work is an inflection point. For safety professionals, yes—but also for strategists, executives, designers, and anyone tasked with leading through disorder. He offers a map for surviving the future by orienting more precisely inside it.
This is the kind of work we stand behind at The Whirl of Reorientation—not because it gives us answers, but because it changes the questions.
Welcome to the edge.
ENTROPY, EQUILIBRIUM, AND THE EVOLUTION OF CONTROL: FROM COSMIC DISORDER TO COGNITIVE ADAPTATION
By: David Slater, PhD.
ABSTRACT
This thesis proposes a unified systems view of life, intelligence, and safety as evolutionary responses to entropy. Drawing on thermodynamics, neuroscience, chaos theory, and cognitive strategy, it argues that life emerged as a dynamic mechanism for maintaining quasi-equilibrium states in an entropic universe. From the Big Bang to biological evolution and human cognition, systems increasingly evolved to buffer against disorder by constructing predictive internal models. This culminates in the human brain—a multi-layered inference engine capable of not only resisting entropy physiologically but modeling and anticipating future states. Using Karl Friston’s Free Energy Principle, John Boyd’s Destruction and Creation (D&C) loop, and the metaphor of the risk thermostat, the thesis shows how cognition evolved not just to react, but to strategically revise models in the face of uncertainty. Chaos theory adds that even in disruption, systems can stabilize around strange attractors, providing a theoretical basis for the emergence of new stable states through adaptive mutation and innovation. Conscious decision-making represents the latest adaptation in an ancient trajectory: the recursive regulation of instability. Ultimately, safety, intelligence, and survival are framed not as fixed states but as emergent, recursive acts of equilibrium-seeking in an entropic world.
Keywords: Decisions, Safety, Entropy, Evolution, Survival
INTRODUCTION
From the origins of the universe to the functioning of the human brain, the arc of complexity is deeply shaped by the relentless pressure of entropy. Thermodynamics teaches us that systems move irreversibly toward disorder. And yet, the history of matter, life, and mind seems to resist this mandate. Stars ignite, planets coalesce, cells self-organize, and organisms adapt. This progression defies entropy locally even as it respects it globally, revealing a crucial insight: order can emerge not in defiance of entropy, but through its redirection. Each layer of increasing complexity represents a momentary, metastable equilibrium—a fragile plateau in a landscape of decay.
In this narrative, life appears as an entropic anomaly: a system that maintains its internal structure not by escaping entropy, but by navigating it. Living organisms do this by harnessing free energy and engaging in cycles of sensing, acting, and adapting. As evolution progressed, brains emerged as thermodynamic regulators, capable of modeling uncertainty and minimizing surprise. This culminates in the human mind—a nested system of homeostatic loops, now capable of reflecting on its own regulation, of imagining dangers that do not yet exist, and of deliberately engineering new attractor states.
Weaving together ideas from thermodynamics, chaos theory, systems theory, neuroscience, and strategic cognition, this thesis presents a framework for understanding how entropy and information processing co-evolved. In doing so, it positions safety, intelligence, and survival not as fixed traits or outcomes, but as emergent capacities for equilibrium-seeking in an unstable universe.
1. ENTROPY, FREE ENERGY, AND QUASI-EQUILIBRIUM
The second law of thermodynamics asserts that in any isolated system, entropy—the measure of disorder—will increase over time. Yet this law does not prohibit the emergence of localized order. Rather, it sets the conditions under which such order can persist: an open system must extract energy from its environment and export entropy to its surroundings to sustain itself. This is the foundation for all self-organizing systems.
From atoms to galaxies, systems evolve through what can be described as a cascade of quasi-equilibrium states—transiently stable configurations that resist disorder temporarily by reducing their free energy. The Gibbs free energy, G = H - TS, captures this tendency: at constant temperature and pressure, a process proceeds spontaneously only when it lowers G (ΔG < 0), often producing structured patterns or gradients along the way.
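As a numerical illustration (mine, not the paper's), the spontaneity criterion ΔG = ΔH - TΔS can be checked against a familiar phase change; the enthalpy and entropy values below are standard textbook figures for melting ice.

```python
def delta_g(delta_h, delta_s, temp_k):
    """Gibbs free energy change: dG = dH - T*dS (J/mol)."""
    return delta_h - temp_k * delta_s

# Textbook values for melting ice: dH ~ +6010 J/mol, dS ~ +22.0 J/(mol*K)
DELTA_H, DELTA_S = 6010.0, 22.0

# Above 0 C the -T*dS term dominates and melting is spontaneous (dG < 0);
# below 0 C it is not (dG > 0).
print(delta_g(DELTA_H, DELTA_S, 300.0))  # -590.0 J/mol: spontaneous
print(delta_g(DELTA_H, DELTA_S, 263.0))  # 224.0 J/mol: non-spontaneous
```

The same sign change, scaled up through chemistry and biology, is what lets ordered structures appear wherever an energy gradient can pay for them.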
This principle underlies everything from the crystallization of minerals to the folding of proteins and the emergence of metabolic cycles. Complexity arises not in spite of entropy, but by flowing through its gradients. Each increase in organization is purchased at the cost of greater global entropy. Thus, life does not defy thermodynamics; it accelerates it.
Here, chaos theory introduces an important refinement. While deterministic systems are often assumed to evolve predictably, chaos theory shows that even simple rules can produce highly complex, sensitive, and unpredictable behaviour. Yet, even within this apparent randomness, systems tend to settle into patterns known as chaos attractors or, more commonly, strange attractors. These are structured, bounded regions of phase space that systems inhabit after disruption—an expression of order emerging from turbulence. The Lorenz attractor, derived from a simplified model of atmospheric convection, exemplifies this idea. When systems are knocked out of equilibrium, they may appear chaotic but can stabilize into new, coherent regimes. In adaptive systems, the ability to "find" these attractors through innovation, disruption, or even pseudorandom perturbation is the secret to continued survival.
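Both properties—sensitivity to initial conditions and boundedness—can be seen numerically. The sketch below (my illustration, not from the paper) integrates the Lorenz equations with a simple Euler scheme for two trajectories that start almost identically: they diverge dramatically, yet each remains confined to the same bounded attractor.

```python
def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system."""
    x, y, z = state
    return (x + dt * sigma * (y - x),
            y + dt * (x * (rho - z) - y),
            z + dt * (x * y - beta * z))

a = (1.0, 1.0, 1.0)
b = (1.0 + 1e-9, 1.0, 1.0)  # differs in the ninth decimal place
for _ in range(8000):        # integrate both to t = 40
    a, b = lorenz_step(a), lorenz_step(b)

separation = sum((p - q) ** 2 for p, q in zip(a, b)) ** 0.5
print(separation)              # many orders of magnitude above 1e-9
print(max(abs(c) for c in a))  # yet the trajectory stays bounded
```

The divergence is unpredictability; the boundedness is the attractor—the "new, coherent regime" the text describes.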
2. LIFE AS A REGULATORY RESPONSE TO ENTROPY
In biological systems, the management of entropy becomes an existential imperative. Life is defined not by static forms, but by dynamic processes that actively resist disintegration. Living organisms regulate internal states by drawing in energy, using it to maintain order, and exporting entropy to their surroundings.
This process begins with molecular self-organization and scales up through evolution. Natural selection favours systems that can more effectively capture energy, reduce uncertainty, and persist in fluctuating environments. Over generations, this results in increasingly sophisticated control loops—from biochemical feedback in cells to sensorimotor coordination in animals.
Central to this story is homeostasis: the ability of organisms to stabilize critical variables (e.g., temperature, glucose levels, threat detection) through negative feedback. These systems operate as biological thermostats, maintaining internal conditions within viable bounds. Over time, evolution layered these thermostats into ever more complex architectures, culminating in brains that do not just react but anticipate.
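A biological thermostat of this kind reduces to negative feedback: the further a variable drifts from its set point, the harder the system pushes back. The toy model below (an illustrative sketch; the gain and heat-loss values are invented, not physiological) shows a regulated "body temperature" settling near its set point despite a constant disturbance.

```python
def regulate(temp, setpoint=37.0, gain=0.2, heat_loss=0.5):
    """One time step of proportional negative feedback under a disturbance."""
    correction = gain * (setpoint - temp)  # push back toward the set point
    return temp + correction - heat_loss   # constant heat loss pulls it away

temp = 25.0  # start well below the set point
for _ in range(100):
    temp = regulate(temp)

# Settles where feedback exactly cancels the loss:
# gain * (37 - temp) = 0.5  =>  temp = 34.5
print(round(temp, 2))  # 34.5
```

Note that the regulated state is not the set point itself but a stable offset near it—"within viable bounds" rather than exact, which is all homeostasis requires.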
The principle of chaos attractors integrates elegantly here: when homeostatic control fails due to overwhelming disruption, survival may depend on the system's ability to reorganize around a new attractor. Evolutionary mutations, epigenetic changes, or novel neural patterns can push a system through chaos into a fresh basin of stability. Thus, chaos becomes not a threat but a crucible for innovation.
3. THE BRAIN AS A PREDICTIVE INFERENCE ENGINE
The human brain is the most advanced regulatory system known. Far from being a reactive processor, it functions as a prediction machine: constantly inferring the causes of sensory inputs and updating internal models to minimize prediction error. This idea is formalized in Karl Friston’s Free Energy Principle, which treats cognition as a variational process aimed at reducing the divergence between expected and actual sensory states.
According to this framework, the brain is engaged in active inference: not only perceiving but acting to align the world with its predictions. It resists entropy by updating beliefs or modifying the environment—whichever minimizes surprise more efficiently. Perception, learning, and behaviour are all interpreted as part of a recursive loop that aims to reduce free energy.
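The two routes can be sketched numerically. The toy below is my illustration, not Friston’s formalism: "free energy" is reduced to squared prediction error, and the agent shrinks it either by revising its belief (perception) or by changing the sensed state (action).

```python
def free_energy(belief, sensed):
    """Toy free energy: squared prediction error."""
    return (sensed - belief) ** 2

def perceive(belief, sensed, lr=0.1):
    """Perception: revise the internal model toward the evidence."""
    return belief + lr * (sensed - belief)

def act(belief, sensed, lr=0.1):
    """Action: change the world toward the prediction."""
    return sensed + lr * (belief - sensed)

belief, sensed = 0.0, 10.0  # a large initial surprise
history = [free_energy(belief, sensed)]
for _ in range(50):          # alternate both routes each cycle
    belief = perceive(belief, sensed)
    sensed = act(belief, sensed)
    history.append(free_energy(belief, sensed))

print(history[0], history[-1])  # surprise shrinks toward zero
```

Either route alone would also converge; the point of active inference is that the agent may use whichever is cheaper, and real organisms use both at once.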
This aligns with and extends earlier concepts like John Adams’ "risk thermostat," which proposed that humans adjust behaviour to maintain a subjective sense of acceptable risk. It also echoes the multi-layered architecture described in modern neuroscience: from reflex arcs to reinforcement learning to simulation and metacognition. Each layer buffers uncertainty differently but shares the same recursive goal: to remain viable.
4. BOYD'S DESTRUCTION AND CREATION: COGNITIVE ADAPTATION TO UNCERTAINTY
John Boyd’s Destruction and Creation model offers a complementary lens through which to view this process. In conditions of uncertainty, rigid mental models become maladaptive. Boyd argued that successful agents continually dismantle obsolete frameworks (destruction) and synthesize more coherent alternatives (creation). This cognitive entropy management mirrors biological and thermodynamic processes.
The OODA loop (Observe–Orient–Decide–Act) captures this cycle in strategic terms. It is a process of real-time entropy regulation via perception, model revision, and adaptive action. Boyd’s insights align seamlessly with Friston’s: both view cognition as a recursive updating system that must remain metabolically and epistemologically aligned with an unpredictable world.
Importantly, Boyd emphasizes the speed and adaptability of these cycles. The strategic advantage lies not in perfect prediction, but in the ability to reorient faster than the environment (or adversary) can destabilize you. This agility, or meta-homeostasis, is the ultimate form of entropy resistance.
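Tempo advantage can be made concrete with a toy pursuit (an invented illustration, not Boyd’s own formulation): two agents track a steadily drifting state, identical except for how often they re-observe and reorient. The faster cycle accumulates less error between the world and its model.

```python
def tracking_error(cycle_period, steps=100, drift=0.1):
    """Mean model error for an agent that refreshes its estimate every
    `cycle_period` steps while the true state drifts between observations."""
    estimate, total = 0.0, 0.0
    for t in range(steps):
        truth = drift * t
        if t % cycle_period == 0:  # observe + orient: refresh the model
            estimate = truth
        total += abs(truth - estimate)
    return total / steps

fast, slow = tracking_error(1), tracking_error(5)
print(fast, slow)  # the faster loop stays closer to reality
```

The environment's drift rate sets the clock: any agent whose reorientation cycle is slower than the drift is, in effect, acting on a model of a world that no longer exists.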
5. CONSCIOUSNESS AND THE EVOLUTION OF DELIBERATE CONTROL
The most recent and perhaps most remarkable development in this entropic arms race is human consciousness: the capacity to reflect on models, simulate futures, and choose among alternatives. Conscious decision-making allows us to regulate not just bodily states but entire ecologies of meaning, culture, and shared norms.
This meta-cognitive layer transforms the thermostat into a planner, the feedback loop into a strategist. It enables societies to codify safety, legislate behaviour, and build institutions that extend risk regulation beyond the individual. But it also burdens us with paradoxes: we can anticipate failure yet still act irrationally; we can imagine futures we cannot physically reach.
In this sense, consciousness is both a triumph and a vulnerability. It grants unprecedented entropy-buffering capabilities but also reveals the fragility of the models we depend upon. The capacity to destroy and recreate these models—as Boyd envisioned—is not just a tactical skill but an existential one.
CONCLUSION: ENTROPY, INTELLIGENCE, AND THE FUTURE OF EQUILIBRIUM
Life emerged to manage entropy. Intelligence evolved to refine that management. Consciousness arose to model the management itself. Across this arc, from the Big Bang to the brain, we witness a single imperative: the recursive, creative regulation of quasi-equilibrium.
Safety, in this framework, is not a state but an act. It is the continuous balancing of energy, information, and belief in the face of thermodynamic decay. It is enacted through reflexes, habits, simulations, and strategic decisions. And it is most endangered when models become rigid—when we fail to destroy in order to create.
In an age of accelerating complexity, the lesson is clear. Survival will not be granted to those who seek stasis, but to those who can revise, adapt, and imagine. The challenge is not to predict the future, but to remain coherent within it. That is the final thermostat. That is intelligence resisting the silence.
REFERENCES
Adams, J. (1995). Risk. London: UCL Press.
Boyd, J. (1976). Destruction and Creation. Unpublished briefing, U.S. Air Force. Available from: https://www.coljohnboyd.com
Clark, A. (2013). Whatever next? Predictive brains, situated agents, and the future of cognitive science. Behavioral and Brain Sciences, 36(3), 181–204. https://doi.org/10.1017/S0140525X12000477
Friston, K. (2010). The free-energy principle: A unified brain theory? Nature Reviews Neuroscience, 11(2), 127–138. https://doi.org/10.1038/nrn2787
Friston, K., FitzGerald, T., Rigoli, F., Schwartenbeck, P., & Pezzulo, G. (2017). Active inference: A process theory. Neural Computation, 29(1), 1–49. https://doi.org/10.1162/NECO_a_00912
Friston, K., & Stephan, K. E. (2007). Free-energy and the brain. Synthese, 159(3), 417–458. https://doi.org/10.1007/s11229-007-9237-y
Gleick, J. (1987). Chaos: Making a new science. New York: Viking.
Lorenz, E. N. (1963). Deterministic nonperiodic flow. Journal of the Atmospheric Sciences, 20(2), 130–141. https://doi.org/10.1175/1520-0469(1963)020<0130:DNF>2.0.CO;2
Nomoto, H., & Slater, D. (2022). Black Swans – Decisions under uncertainty. WESPA Discussion, 20th June 2022.
Pickering, A. (2010). The Cybernetic Brain: Sketches of Another Future. University of Chicago Press.
Schultz, W. (2016). Dopamine reward prediction error coding. Dialogues in Clinical Neuroscience, 18(1), 23–32.
Slater, D. (2024). Big Bang to Silence? The Role of Thermodynamics in the Evolution of Complexity. ResearchGate. https://doi.org/10.13140/RG.2.2.21581.45280
Slater, D. (2025). What is Safety? A Basic Instinct or a Scientific Definition? ResearchGate. https://doi.org/10.13140/RG.2.2.28021.90086
Walter, W. G. (1950). An imitation of life. Scientific American, 182(5), 42–45. https://doi.org/10.1038/scientificamerican0550-42
Wilde, G. J. S. (1982). The theory of risk homeostasis: Implications for safety and health. Risk Analysis, 2(4), 209–225. https://doi.org/10.1111/j.1539-6924.1982.tb01384.x
Wilde, G. J. S. (1994). Target risk: Dealing with the danger of death, disease and damage in everyday decisions. Toronto: PDE Publications.