
February 16, 2026 · Speech · 56min

Blaise Agüera y Arcas: What If Intelligence Was There From the Start?

#Artificial Life · #Symbiogenesis · #Origin of Life · #Embodied Computation · #Evolutionary Theory

Life doesn’t need mutation to evolve. It doesn’t even need a designer. Given a source of randomness and a substrate that supports computation, self-replicating programs will emerge on their own, undergo a sharp phase transition, and begin to complexify. That is the central experimental finding Blaise Agüera y Arcas presents at ALife 2025, and the theoretical framework he builds around it rewrites the standard story of how evolution works.

The Talk and Its Context

This is the most technically detailed public walkthrough of the ideas in Agüera y Arcas’s two books, What is Life? and What is Intelligence?. He presents to the ALife (Artificial Life) conference community, an audience steeped in the field’s open problems. He begins by referencing the Bedau et al. (2000) paper listing 14 open problems in artificial life, and proceeds to address several of them: how life arises from the non-living, what is inevitable in open-ended evolution, and how life relates to mind, machines, and culture.

From Function to Computation

Before getting to the experiments, Agüera y Arcas builds the philosophical scaffolding. The key move is connecting function to computation.

The vitalism debate is long settled: there is no special “life force” in organic matter. But strict materialism leaves a gap. What differentiates life from non-life? His answer: function. A rock broken in two gives you two rocks. A kidney broken in two gives you no working kidney. Function is the thing life has that non-life doesn’t.

Alan Turing formalized function with the Turing machine. John von Neumann took it further by asking: what does it take for something to build a copy of itself? His answer (a universal constructor plus a tape plus a tape copier) predicted the structure of DNA, ribosomes, and DNA polymerase before any of them were discovered, purely from theory. Von Neumann’s profound insight: a universal constructor is a universal Turing machine. Life is literally embodied computation.

“Embodied” here means something specific and different from how roboticists use the word. It means there is a closure between the medium of computation and the thing doing the computation. The memory is atoms, not abstract symbols. A laptop can’t extrude another laptop; a von Neumann replicator is like a laptop combined with a 3D printer that can print another laptop.

Three fallacies Agüera y Arcas flags along the way:

  • The Sapolsky error: The fact that physics is time-reversible doesn’t mean computation is. When you add 3 + 5 to get 8, you can’t recover the inputs from the output. Computation is inherently many-to-one and therefore irreversible, so arguments that physical determinism eliminates free will commit a category error.
  • The early Wittgenstein error: You can’t say “birds exist” independent of a model of the universe. There are no birds in physics, only in models. Observation and modeling are prerequisites for any statement about the world.
  • The GOFAI error: Intelligence can’t be carried out by strict logical deduction from self-sufficient propositions. When propositions aren’t airtight and you’re looking at patterns rather than proofs, symbolic AI simply cannot work.

The BFF Experiment: Life from Random Noise

The experimental setup is strikingly simple. Take 1,024 tapes of length 64 bytes, each filled with random data. The programming language is a modified version of BrainF*ck (hence “BFF”), reduced from eight to seven instructions to make it embodied: code and data share a single tape, meaning programs can read and write their own instructions.

The procedure: pluck two tapes at random, concatenate them (128 bytes), run the resulting program, pull them apart, put them back. Repeat.
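A minimal sketch of that loop in Python, for readers who want to poke at it. The interpreter below follows the published BFF description (a single shared tape, an instruction pointer plus two data heads, and copy instructions between the heads); the exact opcode set, step limit, and head-wrapping behavior are assumptions here and may differ from the seven-instruction variant described in the talk.

```python
import random

TAPE_LEN = 64      # bytes per tape, as in the talk
SOUP_SIZE = 1024   # tapes in the soup, as in the talk
MAX_STEPS = 8192   # per-interaction step limit (an assumed value)

def run(tape, max_steps=MAX_STEPS):
    """Execute the concatenated tape in place; code and data share the tape."""
    n = len(tape)
    ip = h0 = h1 = steps = 0
    while 0 <= ip < n and steps < max_steps:
        op = chr(tape[ip])
        if   op == '<': h0 = (h0 - 1) % n           # move head0 left
        elif op == '>': h0 = (h0 + 1) % n           # move head0 right
        elif op == '{': h1 = (h1 - 1) % n           # move head1 left
        elif op == '}': h1 = (h1 + 1) % n           # move head1 right
        elif op == '-': tape[h0] = (tape[h0] - 1) % 256
        elif op == '+': tape[h0] = (tape[h0] + 1) % 256
        elif op == '.': tape[h1] = tape[h0]         # copy head0 -> head1
        elif op == ',': tape[h0] = tape[h1]         # copy head1 -> head0
        elif op == '[' and tape[h0] == 0:           # jump forward past matching ]
            depth = 1
            while depth and ip < n - 1:
                ip += 1
                depth += {ord('['): 1, ord(']'): -1}.get(tape[ip], 0)
        elif op == ']' and tape[h0] != 0:           # jump back to matching [
            depth = 1
            while depth and ip > 0:
                ip -= 1
                depth += {ord(']'): 1, ord('['): -1}.get(tape[ip], 0)
        # any other byte is a no-op, so random bytes are legal programs
        ip += 1
        steps += 1
    return steps

soup = [bytearray(random.randbytes(TAPE_LEN)) for _ in range(SOUP_SIZE)]

for _ in range(100_000):   # the talk's runs go into the millions of interactions
    i, j = random.sample(range(SOUP_SIZE), 2)
    pair = bytearray(soup[i] + soup[j])                  # concatenate: 128 bytes
    run(pair)                                            # the pair rewrites itself
    soup[i], soup[j] = pair[:TAPE_LEN], pair[TAPE_LEN:]  # split, return to soup
```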

After a few million interactions, magic happens: complex programs appear on the tapes, programs that take real effort to reverse-engineer and that are functional in the sense that they actually do something non-trivial.

How do we know copying is happening? The population statistics make it obvious: out of 8,000 tapes, 5,000 might be identical copies of the top replicator, with an ecology of variant programs behind it.
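Continuing from the sketch above, that diagnostic is a one-liner: count how many tapes are byte-for-byte identical to the most common one.

```python
from collections import Counter

# If copying dominates, a single byte string accounts for a large share of the soup.
counts = Counter(bytes(t) for t in soup)
top_tape, top_count = counts.most_common(1)[0]
print(f"top replicator occupies {top_count} of {len(soup)} tapes")
```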

The phase transition is dramatic. In the beginning, each interaction runs about 2 operations on average. After the transition, 1,374 operations run per interaction. The soup has become intensely computational. Plotted over 10 million interactions, you see a scatter plot that is nearly flat, then spikes vertically around 6 million interactions. The entropy of the soup (estimated by compression ratio) drops sharply at the same moment: the random gas becomes highly structured.
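The entropy estimate is just as easy to reproduce on the toy soup above; zlib here stands in for whatever compressor was actually used, which is an assumption.

```python
import zlib

# Random bytes barely compress; a soup dominated by copies of a few replicators
# compresses well, so the ratio falls sharply at the phase transition.
blob = b"".join(bytes(t) for t in soup)
ratio = len(zlib.compress(blob, 9)) / len(blob)
print(f"compression ratio: {ratio:.3f}")
```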

“You can see that in the beginning, it’s not very computational. And then a sudden transition takes place. It looks like a phase transition. In fact, it is a phase transition.”

What is the phase of matter on the right side of the transition? Not liquid, not solid. It has structure at every scale. Agüera y Arcas calls it a functional phase of matter. It is self-dissimilar (not fractal but multifractal), and the only adequate name for it is: life.

Why It Works Without Mutation

This is the deepest puzzle. Standard evolutionary theory requires mutation as the source of novelty: random changes, filtered by selection. But if you crank the mutation rate in BFF all the way down to zero, you still get the same phenomenon. The same complex programs emerge. The same phase transition occurs.

The answer is symbiogenesis.

The concept traces back to Konstantin Mereschkowski, who coined the term and proposed that chloroplasts began as symbiotic cyanobacteria, and Lynn Margulis, who revived and extended the idea in her landmark 1967 paper On the Origin of Mitosing Cells, arguing that mitochondria and chloroplasts originated as engulfed bacteria. Margulis believed symbiogenesis was the primary engine of evolutionary novelty at all scales, not just for chloroplasts and mitochondria. This broader claim was not widely accepted during her lifetime.

In BFF, symbiogenesis works like this: even in the initial random soup, there are tiny replicators. A single copy instruction means at least one byte is getting copied somewhere. These one-byte replicators occasionally come into conjunction, and sometimes two of them copy better as a group than individually. When that happens, they begin to copy as a unit. That is a symbiogenetic event. The complex programs arise not from mutation but from fusion events between smaller replicators.

The Mathematics: Lotka-Volterra Meets Smoluchowski

Agüera y Arcas presents a mathematical framework that unifies population dynamics with symbiogenesis.

The population dynamics side follows generalized Lotka-Volterra equations: a linear reproduction term (diagonal, things replicate themselves) and a bilinear suppression term (competition for niches, things overwrite each other). This handles standard Darwinian dynamics but is closed-ended. You start with N species and end with N species. No amount of running time produces a new one.
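In symbols, a generalized Lotka-Volterra system of the kind described looks like this (the notation is a reconstruction from the description, not the talk's own):

```latex
\frac{dn_i}{dt} \;=\; \underbrace{r_i\, n_i}_{\text{reproduction (diagonal)}} \;-\; \underbrace{\sum_j a_{ij}\, n_i\, n_j}_{\text{bilinear suppression (niche competition)}}
```

Because every term involves only the existing species indices, nothing in these equations can ever create a species that was not there at the start.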

The symbiogenesis side follows the Smoluchowski coagulation equations, which also describe polymer gelation. Monomers stick together to form dimers, dimers merge with monomers to form trimers, and so on. When the sticking rate scales superlinearly with cluster size, you get a gelation phase transition: a finite-time singularity in which a giant, system-spanning cluster appears. This is exactly what happens when Jell-O sets.
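The coagulation equation in its standard form (again a textbook reconstruction rather than the talk's slide):

```latex
\frac{dn_k}{dt} \;=\; \tfrac{1}{2}\sum_{i+j=k} K(i,j)\, n_i\, n_j \;-\; n_k \sum_{j} K(k,j)\, n_j
```

The first sum creates clusters of size k by fusing smaller ones; the second removes clusters of size k that fuse into something bigger. With a sufficiently superlinear kernel, for example the classic multiplicative kernel K(i, j) = ij, the system gels in finite time.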

The BFF phase transition is a gelation transition. The full equation has both terms: Lotka-Volterra on the left (evolution), Smoluchowski on the right (revolution). The left is gradual optimization within fixed species; the right is the moment things come together to create something qualitatively new.
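Putting the two sides together gives one plausible reading of the "full equation" (an editorial reconstruction; the talk's exact form may differ):

```latex
\frac{dn_k}{dt} \;=\;
\underbrace{n_k\Big(r_k - \sum_j a_{kj}\, n_j\Big)}_{\text{Lotka--Volterra: evolution}}
\;+\;
\underbrace{\tfrac{1}{2}\sum_{i \oplus j = k} K(i,j)\, n_i\, n_j \;-\; n_k \sum_j K(k,j)\, n_j}_{\text{Smoluchowski: revolution (fusion)}}
```

where i ⊕ j = k ranges over pairs of replicators whose fusion yields replicator k.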

Eigenvalue analysis of the cooperation matrix reveals something elegant: the submatrices corresponding to replicators about to undergo symbiogenesis are characteristically low-rank and cooperative. They are already working together before they fuse. In the Jacobian, the leading eigenvalues shift from negative (stable) to positive (unstable) as the ancestry-tree depth limits are relaxed, signaling that the system is about to “blow” into gelation.
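A toy version of that stability diagnostic, assuming a generalized Lotka-Volterra Jacobian; the interaction matrix below is invented for illustration, not the measured cooperation matrix from the experiment.

```python
import numpy as np

# For dn/dt = n * (r - A n), the Jacobian at an interior fixed point n*
# (where A n* = r) is J = -diag(n*) A.  A leading eigenvalue with positive
# real part means perturbations grow: the community is about to reorganize.
r = np.array([1.0, 0.8, 1.2])              # per-capita reproduction rates
A = np.array([[1.0, 0.3, 0.2],             # competition matrix (illustrative)
              [0.3, 1.0, 0.4],
              [0.2, 0.4, 1.0]])

n_star = np.linalg.solve(A, r)             # interior fixed point
J = -np.diag(n_star) @ A                   # Jacobian at the fixed point
lead = max(np.linalg.eigvals(J).real)
print("leading eigenvalue:", round(lead, 3),
      "->", "stable" if lead < 0 else "unstable")
# Strongly cooperative (negative a_ij) low-rank blocks push this leading
# eigenvalue through zero, which is the signature described in the talk.
```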

The Proof That Symbiogenesis Is Necessary

To prove this isn’t just correlation, Agüera y Arcas runs an intervention experiment. For each interaction, he tracks the ancestry tree of any new replicator: which previous replicators contributed source bytes. If the ancestry tree exceeds a depth of, say, 24, the interaction is blocked and the tapes go back in the soup.
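A sketch of how that intervention could be wired into the soup loop above. The depth bookkeeping here is deliberately crude (one depth counter per tape rather than byte-level provenance), so it shows only the control flow of the block, not the one-in-a-thousand blocking rate reported in the talk.

```python
MAX_DEPTH = 24              # ancestry-tree depth limit used as the example
depth = [0] * SOUP_SIZE     # simplified: one ancestry depth per tape

def interact(i, j):
    pair = bytearray(soup[i] + soup[j])
    run(pair)
    out_i, out_j = pair[:TAPE_LEN], pair[TAPE_LEN:]
    changed = out_i != soup[i] or out_j != soup[j]
    new_depth = (1 if changed else 0) + max(depth[i], depth[j])
    if new_depth > MAX_DEPTH:
        return False                        # block: tapes go back unchanged
    soup[i], soup[j] = out_i, out_j
    depth[i] = depth[j] = new_depth
    return True

blocked = 0
for _ in range(100_000):
    i, j = random.sample(range(SOUP_SIZE), 2)
    blocked += not interact(i, j)
print("blocked interactions:", blocked)
```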

The blocking rate is tiny (one in a thousand interactions), but the effect is total: no gelation occurs. You need ancestry tree depths of at least 20 to get complex programs. This is a clean causal proof that deep symbiogenetic ancestry is required for the emergence of life-like complexity.

When blocking is active, replicator populations follow logistic curves: they grow and saturate. The fluctuations around steady state are correlated (collaborators oscillate together) or anti-correlated (competitors oscillate in opposition), exactly as the off-diagonal structure of the R matrix predicts.
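For reference, the logistic form those saturating curves follow, with r the growth rate and K the carrying capacity set by niche competition:

```latex
\frac{dn}{dt} \;=\; r\, n \left(1 - \frac{n}{K}\right)
```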

Symbiogenesis All the Way Down

Maynard Smith and Szathmáry’s major evolutionary transitions framework identifies roughly 8-12 landmark symbiogenetic events in the history of life: the origin of eukaryotes, multicellularity, sexual reproduction. Agüera y Arcas argues this is just the tip of an iceberg. Symbiogenesis operates at every scale, continuously.

Evidence from genomics: only 1.5% of the human genome codes for proteins. Much of the rest consists of transposons and endogenous retroviral elements, viruses whose ecology is our genome, which reproduce inside it and occasionally jump species. A quarter of the cow genome consists of copies of a single retroposon that also lives in lizards and salamanders.

Specific examples of endogenized viral elements with critical functions: the Arc gene, derived from a virus-like element incorporated into the mammalian lineage, is at work in our brains; knock it out in mice and they can’t form new memories. The mammalian placenta is built by syncytin, an endogenized viral protein that fuses cell membranes.

Genomes, in this view, are not fixed blueprints but fractal structures: replicators made of replicators made of replicators.

The Definition of Life (and Intelligence)

From all this, Agüera y Arcas arrives at a definition:

“Life is an embodied autopoietic computation arising and complexifying through symbiogenesis.”

The arrow of complexity in evolution comes from symbiogenesis: when A and B, each capable of self-replication, fuse, the resulting entity must replicate A, replicate B, and encode how the two fit together. Those extra bits of information don’t come from mutation. They come from the thermal randomness of things encountering each other, selectively converted into algorithmic information by the symbiogenetic process.

Each fusion event also creates a more parallel computer. These parallel computers must not only run self-modeling code but also model their environment, and most importantly, model each other. This means an ecology of functions builds up through massively parallel computation that becomes, in effect, more and more intelligent with every fusion.

This is why life and intelligence are deeply connected. Life was intelligent from the start, because autopoiesis requires computation, and computation on an environment filled with other computers requires modeling, and modeling others is the seed of what we call intelligence. In more complex animals, this becomes theory of mind, and intelligence explosions in hominins, cetaceans, and bats are runaway symbiogenetic processes happening at a higher level.

A Bridge Between Assembly Theory and Algorithmic Information Theory

In a brief but pointed aside, Agüera y Arcas positions his framework as a potential bridge between two competing approaches: the assembly theory championed by Lee Cronin and others, and the algorithmic information theory advocated by Hector Zenil. Assembly theory describes things coming together to make bigger things but hasn’t engaged with the computational nature of what it describes. Algorithmic information theory addresses computation directly. The connection point, Agüera y Arcas suggests, lies in conditional Kolmogorov complexity of the components that fuse. He frames this as an invitation for reconciliation rather than a settled result.
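One way to make that connection point concrete (an editorial gloss, not something spelled out in the talk) is the chain rule of algorithmic information theory, which bounds the description length of a fused pair:

```latex
K(AB) \;\le\; K(A) + K(B \mid A) + O(\log)
```

When the parts already share structure, as cooperating replicators do, K(B | A) is small: fusion yields a large assembled object for only a few new algorithmic bits, which is the kind of accounting both assembly theory and the symbiogenetic picture seem to need.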

Closing Notes

A few observations worth sitting with:

  • The substrate matters more than the framing suggests. Agüera y Arcas acknowledges that a SUBLEQ counterexample (where self-replicators don’t arise despite being theoretically possible) shows that design choices (language, tape length, interaction protocol, step limits) substantially shape what emerges. A general theory of which substrates support the transition is still missing.
  • The leap from BFF to “life was intelligent from the start” involves significant philosophical extrapolation. The experimental results (self-replicating programs from random noise, zero mutation, phase transition, symbiogenetic ancestry requirement) are genuinely striking. The conclusion that life is inherently computational and intelligent is a much larger claim that rests partly on analogy.
  • Symbiogenesis as the universal source of novelty is a testable claim. If true, we should find symbiogenetic signatures at every scale in every genome. The genomic evidence Agüera y Arcas cites (transposons, endogenous retroviruses, the Arc virus, placental syncytins) is suggestive, and the last decade has been producing more examples.
  • The mathematical framework (Lotka-Volterra + Smoluchowski) is genuinely unifying. It connects population ecology, polymer physics, and evolutionary theory in a way that makes the “where does novelty come from” question tractable. The eigenvalue analysis of cooperation matrices approaching instability is an elegant predictor of imminent evolutionary transitions.
  • The deepest implication: if this picture is correct, intelligence isn’t something that evolved in life. It is what life is, from the very first self-replicating program in the very first computational substrate. Every cell is a computer. Every organism is a massively parallel computer. Every ecosystem is a network of computers modeling each other. And what we call “intelligence” in the human sense is just the latest, most recursive layer of a process that has been running since the origin of matter itself.