Why AI Doom is a Projection—and Creation is the Real Battle
On the 49th anniversary of Destruction and Creation, Ponch and I hosted Dr. Mahault Albarracin on No Way Out to dismantle the myths of AI collapse.
AI, Agents, and the Struggle for Human Orientation
On September 3, 2025, Brian “Ponch” Rivera and I sat down with Dr. Mahault Albarracin. The date carried weight. Forty-nine years earlier, John Boyd published Destruction and Creation, the paper that cracked open how humans adapt, survive, and create. This conversation was not a tribute. It was a live exercise in Boyd’s method: break apart, reorient, and build anew.
The Scholar Who Refused Easy Stories
Dr. Albarracin began in the social sciences but wanted more than description. She wanted prediction. She traced patterns across sociology, anthropology, and biology, patterns that pointed to mechanistic underpinnings. That pursuit encountered resistance from colleagues who feared exclusion and downsizing.
So she went further. Ecological science. Predictive processing. Neomaterialism. These provided her with a way to handle diversity without compromising precision. That pursuit brought her to Karl Friston, active inference, and eventually programming and simulation. She was no longer studying stories. She was building models that could learn.
What AI Gets Wrong
Scaling up forever is a dead end. Albarracin agreed with Gary Marcus: bigger does not mean smarter. LLMs get one thing right. Their attention mechanisms let meaning shift with context. But they fail in two decisive ways.
First, they lack embodiment. They are not anchored in space, time, or causality. They cannot form true structural priors. Second, they mistake bulk for intelligence. A correct model, as she reminded us, is often simple. Brains cut through noise by ignoring irrelevance, not by hoarding data.
Ponch and I drew the line back to Boyd’s sketch of the OODA loop. “Outside information” is its most neglected element. That blind spot connects to non-speaking autistics who perceive signals others miss. It links to Friston’s research on flow states, where presence enables creation.
Toward Agents and Trust
The future is agents, not monoliths. Trust is the condition. Current models collapse on tasks like calculation because they have no stateful belief space. True agents will be embodied, either physically or through a domain boundary, a Markov blanket. They will navigate causal graphs and build predictions that scale across systems.
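What a “stateful belief space” means can be made concrete. The sketch below is my own illustration, not anything from the episode: a toy agent that keeps a probability distribution over hidden states and updates it with each observation, so evidence accumulates between calls. A stateless model, by contrast, starts over with every prompt. All names here (`BeliefAgent`, `observe`) are hypothetical.

```python
def normalize(p):
    """Rescale a list of non-negative weights so they sum to 1."""
    s = sum(p)
    return [x / s for x in p]

class BeliefAgent:
    """Toy agent with a persistent belief over hidden states."""

    def __init__(self, prior, likelihood):
        # prior[s] = P(state s); likelihood[s][o] = P(obs o | state s)
        self.belief = list(prior)
        self.likelihood = likelihood

    def observe(self, obs):
        # Bayesian update: new belief is proportional to
        # P(obs | state) * current belief, then renormalized.
        self.belief = normalize(
            [self.likelihood[s][obs] * b for s, b in enumerate(self.belief)]
        )
        return self.belief

# Two hidden states; observation 0 is far likelier under state 0.
agent = BeliefAgent(
    prior=[0.5, 0.5],
    likelihood=[[0.9, 0.1],
                [0.2, 0.8]],
)
agent.observe(0)
agent.observe(0)
# Repeated evidence shifts belief toward state 0, and the shift
# persists across calls because the state lives in the agent.
```

The point of the sketch is the persistence: each `observe` call starts from the belief the last one left behind, which is exactly what current stateless models lack.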
This opens the door to digital twins, ecological models, and intelligent systems that matter. But only if the data is real. Most of what we feed machines today is stripped of context. Junk in. Junk out.
Interactions Over Individuals
Ponch pointed to backward planning, a staple in the military. It collapses in complex systems where cause and effect are only visible after the fact. What matters is not the quality of individual agents but the quality of their interactions.
Agents, whether human, animal, robot, or AI, never see inside each other’s states. They must model one another, constantly, under volatility. That is why theory of mind, for humans and machines alike, is non-negotiable.
When Ponch asked if this was engineering forced onto people, Dr. Albarracin cut through. Active inference is built on mathematics, physics, and biology. Intelligence has an ecological definition. It exists in relationships, not in isolated entities.
Technology as Extension
Marshall McLuhan wrote that every medium, our environments and technologies included, extends human faculties. The question is simple. Will AI extend cognition or dull it?
LLMs risk making us dumber. They compress thought into pre-digested form. But ecological intelligence can make us more human. It can expand creativity and deepen our grasp of complexity. This echoes Buckminster Fuller, Isaac Asimov, and John Boyd.
Dr. Albarracin described a superconsciousness layer. Just as cells compute locally but form bodies, humans could connect into higher systems without losing autonomy. I suggested that this is Teilhard’s Noosphere.
Standards, Protocols, and Signifiers
The current web is fragmented. Data is contextless, scattered, and stripped of meaning. The Spatial Web proposes a different route. Standards. Privacy. Contextual encoding. Agents that coexist and build meaning together.
Ponch compared it to cockpit protocols developed after fatal crashes in the 1970s. Once pilots shared signifiers, coordination improved. Without common signifiers, there is no orientation. The same will be true for agents.
Ethics Cannot Be Neutral
Dr. Albarracin rejected the illusion of neutrality. Cognition is never neutral. Ethics must be embedded. Object schemas to encode relations. Inductive inference to prune harmful paths. Compression to generalize norms. Transparent and auditable models that can be inspected.
Her “law as code” project is not about static rules. It is about context. Like judges interpreting laws, agents must weigh norms and values dynamically.
Harmony as the Medium
I quoted Boyd from a scrap of paper I found in the archives:
Schwerpunkt is the dynamic agent that harmonizes tactics with strategy and focuses them toward a strategic aim.

Harmony is the key.
Dr. Albarracin’s work shows how active inference and neuroscience reveal the same pattern. Systems converge when trust and intuition build common ground. I drew the link to Austrian economics, where coordination emerges through mismatches, recognition, and human action.
This points to the real condition for alignment. Not fear. Not doom. Love. An AI aligned through empathy and kinship is possible. I tied this to Teilhard’s Omega Point, the vision of a unified consciousness driven by complexity and love.
The Projection of Doom
Why does destruction dominate the conversation? Dr. Albarracin gave three answers. Defense funding. Capital hierarchies. The suppression of diverse thought. Doom is a projection. It reflects our own adversarial instincts back at us.
She rejected the myth of a single super-being. Intelligence will be distributed, local, and layered. Agents driven by preference and curiosity will not destroy their environment. They need it to learn.
The Question That Remains
Boyd’s Destruction and Creation was about persisting through time. On its forty-ninth anniversary, the question is not whether AI will end us. The question is whether we can orient it toward creation.
For Dr. Albarracin, this work now continues in Montreal at the International Workshop on Active Inference. For Ponch and me, the lesson is sharper. The fight is not over resources or technology. The fight is over orientation.
What Leaders Must Decide
If you lead, the call is clear. Do not outsource your orientation to machines. See technology for what it is: an extension of your faculties. The only real question is whether AI will make us more human or less.
The most innovative leaders and operators will recognize that the answer will not come from the machines.
It will come from us.
Watch the episode here: