January 18, 2026 · Podcast · 42min
Why Every Brain Metaphor in History Has Been Wrong
Every era gets the brain metaphor it deserves. Descartes thought the nervous system worked like the hydraulic automata in French royal gardens. When scientists discovered electrical signals in nerves, the brain became a telegraph network, then a telephone switchboard. Now it’s a computer. The pattern is so consistent it might itself be a law of nature: we always explain the brain by analogy to our most sophisticated technology. And we always, eventually, realize the analogy was a simplification we forgot was a simplification.
The Episode
Machine Learning Street Talk brings together clips from past interviews with seven thinkers — philosopher Mazviita Chirimuuta, Francois Chollet, Joscha Bach, Luciano Floridi, Noam Chomsky, Nobel laureate John Jumper, and neuroscientist Karl Friston — to wrestle with a deceptively simple question: when scientists simplify reality to study it, what gets captured and what gets lost?
The episode opens with a story about young Karl Friston watching woodlice in his garden, noticing they moved faster in sunlight and slower in shade. That childhood observation became the seed of the free energy principle, which attempts to explain all behavior with a single equation. Host Tim Scarfe calls it “the ultimate spherical cow” — a reference to the old physics joke about grotesquely simplifying messy reality. Friston himself agrees the principle is “almost logically simple,” which is precisely the problem Chirimuuta’s work addresses.
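The episode never writes that equation down, but for reference the standard form of the quantity Friston’s principle says organisms minimize is the variational free energy:

F = E_q[ ln q(s) − ln p(o, s) ] = KL[ q(s) ‖ p(s | o) ] − ln p(o)

where o are observations, s are hidden states, p is the organism’s generative model of how states produce observations, and q is its current belief about those states. Minimizing F both sharpens the belief and drives the organism toward unsurprising observations; everything else in the research program is argument about how much of behavior that one line really captures.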
Simplicius vs. Ignorantio
Scarfe frames the central tension as a boxing match between two philosophical positions:
Simplicius believes science works because the universe is actually simple underneath. Find an elegant equation and you’ve found something true. Galileo, Newton, and Einstein all held this view. Einstein’s “God doesn’t play dice” was an expression of faith that reality is fundamentally legible.
Ignorantio believes we simplify because we’re too limited to do otherwise. Our working memory holds maybe seven items. We die after 80 years. So we build models, leave stuff out on purpose, tell ourselves stories. When those stories work, it doesn’t prove nature is simple; it proves we’ve gotten good at building useful simplifications.
Chirimuuta has “gone all in” on Ignorantio’s position. She borrows a phrase from the philosopher Nicholas of Cusa: docta ignorantia — learned ignorance. You study hard, you learn a lot, and what you learn includes what you don’t know.
The Kaleidoscope Hypothesis
Francois Chollet offers a competing vision. His “kaleidoscope hypothesis” proposes that beneath the apparent chaos of reality lie simple, repeating patterns — like the few bits of colored glass in a kaleidoscope that, through mirroring and repetition, create tremendous richness. Intelligence, in his view, is the process of mining experience to extract these “atoms of meaning.”
“A big part of intelligence is the process of mining your experience of the world to identify bits that are repeated and to extract them — extract these unique atoms of meaning. When we extract them we call them abstractions.”
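To make “extracting atoms of meaning” slightly less abstract, here is a toy sketch (my illustration, not anything Chollet presented): a byte-pair-encoding-style miner that repeatedly finds the most frequent adjacent pair of symbols in a sequence and names it, so repeated structure becomes a small dictionary of reusable abstractions.

```python
from collections import Counter

def extract_abstractions(seq, rounds=2):
    """Toy 'kaleidoscope' miner: repeatedly replace the most frequent
    adjacent pair of symbols with a new composite symbol (an 'abstraction')."""
    seq = list(seq)
    abstractions = {}
    for i in range(rounds):
        pairs = Counter(zip(seq, seq[1:]))
        if not pairs:
            break
        (a, b), count = pairs.most_common(1)[0]
        if count < 2:
            break  # nothing repeats, nothing to abstract
        name = f"<{i}>"
        abstractions[name] = (a, b)
        out, j = [], 0
        while j < len(seq):  # rewrite the sequence using the new symbol
            if j + 1 < len(seq) and (seq[j], seq[j + 1]) == (a, b):
                out.append(name)
                j += 2
            else:
                out.append(seq[j])
                j += 1
        seq = out
    return seq, abstractions

compressed, atoms = extract_abstractions("abcabcabcxyz")
print(compressed)  # ['<1>', '<1>', '<1>', 'x', 'y', 'z']
print(atoms)       # {'<0>': ('a', 'b'), '<1>': ('<0>', 'c')}
```

A few extracted “atoms” regenerate most of the sequence, which is the kaleidoscope intuition in miniature; whether the real world decomposes this cleanly is exactly what Chirimuuta questions next.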
Chirimuuta’s response is not that Chollet is wrong but that he’s making a philosophical bet — the same bet Plato made. The world of appearance is complicated and messy, but underlying it, real reality is neat and mathematically decomposable. It might be right. It might be Platonic wishful thinking.
Software Is Spirit — Literally?
Joscha Bach makes the most provocative claim in the episode: software is literally spirit, not metaphorically. His argument runs through causal invariance. Money isn’t the ink on a banknote or the electrons in a bank server; it persists across physical instantiations (paper, coins, gold, digital ledgers) yet causally affects the world — it gets you fed, starts wars, builds cities. Software, Bach argues, is the same: an abstract pattern that runs on many types of chips, maybe even neurons, and that pattern has causal power.
He introduces a striking concept: the computer as a “causal insulator.” A computer creates a layer on which an arbitrary reality can exist. Minecraft runs identically on a Mac and a PC; from inside that world, you have no information about the CPU, the casing color, or the voltage. Our brain, Bach argues, is a similar causal insulator — it lets us have thoughts independent of what happens around us, envision futures untainted by the present.
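One way to make the substrate-independence claim concrete (a sketch of my own, not Bach’s) is an interface with two deliberately different implementations: the same abstract behavior is realized by physically unrelated operations, and the calling “program” cannot tell which one it is running on.

```python
from abc import ABC, abstractmethod

class Tally(ABC):
    """The abstract pattern: something that counts up from zero."""
    @abstractmethod
    def bump(self) -> None: ...
    @abstractmethod
    def value(self) -> int: ...

class IntTally(Tally):
    """Substrate 1: a machine integer."""
    def __init__(self): self.n = 0
    def bump(self): self.n += 1
    def value(self): return self.n

class PebbleTally(Tally):
    """Substrate 2: a growing pile of objects, counted by length."""
    def __init__(self): self.pebbles = []
    def bump(self): self.pebbles.append(object())
    def value(self): return len(self.pebbles)

def program(t: Tally) -> int:
    """The 'software': its causal structure is the same on either substrate."""
    for _ in range(3):
        t.bump()
    return t.value()

assert program(IntTally()) == program(PebbleTally()) == 3
```

Note that the equivalence lives in the Tally interface we chose to write down, which is exactly where the objection below bites.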
Scarfe pushes back through Chirimuuta’s lens: who identifies the “invariance”? When we say the same algorithm runs on different chips, completely different things are physically happening — different voltages, different electrons. The “sameness” is something we impose. It exists in our description, not in nature. And money only works because of human interpretive practices; take away the humans and their agreements, it’s just paper.
Knowledge Is Not a Thing
Luciano Floridi draws a fascinating analogy to the history of temperature. For 2,000 years, humans thought hot and cold were two separate substances that could be mixed — “temperature” literally meant “mixture.” They believed heat was an invisible fluid that grabbed onto things. It took the observation that boring out a cannon barrel produced seemingly inexhaustible heat (Count Rumford’s experiments, later quantified by Joule) for physicists to realize: temperature is not a thing with its own particle. It’s a property that matter has.
Knowledge, Floridi argues, is similar. It always exists in some physical medium — the electromagnetic waves from a Wi-Fi router are technically a physical embodiment — but it isn’t a thing in itself. It is held by you, by me, by the collective, yet it has no independent physicality.
His most powerful move is reframing the entire metaphysics debate. When someone asks “Is the universe a giant computer?” the question is meaningless without specifying why you’re asking. Is this the same building? Depends: for giving directions, absolutely yes; for function, no, it was a school and now it’s a hospital. Is it the same ship (the Ship of Theseus)? If the tax man is asking, yes. If a collector is asking, no: with every plank replaced, it’s worthless.
“Is it worth modeling the universe as a gigantic [computer] for the purpose of making sense of our digital life? Oh yes, definitely. Because we are informational organisms.”
The computational model isn’t literally true, Floridi argues, but it’s useful. The mistake is forgetting it’s a model.
Prediction, Control, Understanding
Nobel laureate John Jumper offers the cleanest framework in the episode. He distinguishes three things that we routinely conflate:
- Predict: given an input, what value will appear on my screen?
- Control: I want that future measurement to come out 17.
- Understand: I have a small enough collection of facts that I can both predict and communicate those facts to another human on an index card.
AI lets us predict and control. We have to derive our own understanding. We can now experiment on 200 million predicted protein structures instead of the 200,000 experimental ones, but the machine doesn’t do the act of understanding for us.
Chirimuuta sharpens this into a genuine dilemma: you can pursue understanding or you can pursue prediction, but they pull against each other. LLMs and deep neural networks are now being used as predictive models of neuronal responses, yet they lack the mathematical legibility scientists traditionally aspired to. If the field optimizes for prediction, it may give up on the kind of understanding that tells you when your tools will break.
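A minimal numerical caricature of that dilemma (my sketch, not from the episode): a nearest-neighbour memorizer predicts well but its only “description” is the full data table, while a least-squares line predicts a little worse and fits on an index card.

```python
import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(0, 10, 50)
y = 3.0 * x + 2.0 + rng.normal(0.0, 1.0, size=x.shape)  # noisy linear "world"

# Prediction without legibility: answer queries by looking up the nearest
# observed point. Accurate, but the "model" is just the 50 stored pairs.
def memorizer(query):
    return y[np.argmin(np.abs(x - query))]

# Understanding: a two-parameter law you can hand to someone on an index card.
slope, intercept = np.polyfit(x, y, deg=1)

print(f"index-card model: y ~ {slope:.2f} * x + {intercept:.2f}")
print("memorizer at x=4.3:", memorizer(4.3))
print("line at x=4.3:", slope * 4.3 + intercept)
```

The memorizer is the better oracle; the line is the thing you can explain, check, and know the breaking point of. Scaled up, that is the trade the field is being asked to make.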
The Cultural Historical Illusion of AGI
Chirimuuta introduces what may be the episode’s most radical idea: our confidence that AGI is inevitable might be a “cultural historical illusion.” The argument traces the history of mechanistic thinking in the life sciences and psychology — centuries of intellectual shifts toward viewing life and mind as mechanisms. If you already believe that cognition is “just mechanisms anyway,” then of course you can build a machine to replicate it. AGI feels inevitable.
But if the mechanistic hypothesis is wrong, the claim that machines will inevitably replicate biological intelligence is not well-founded.
“We could be subject to a kind of cultural historical illusion that this is just going to happen.”
Scarfe acknowledges the tension honestly: Claude Code has produced “more interesting stuff in the world of software development in the last 6 months than the previous 20 years.” But, he notes, it’s automation technology — it’s only as good as your ability to specify, supervise, and delegate. The question isn’t whether the tools are impressive but whether impressiveness equals intelligence.
Haptic Realism and Perspectival Knowledge
Chirimuuta proposes that knowledge works more like touch than vision. Most philosophy of science treats knowledge like seeing — you stand back and observe reality from a distance. But scientific knowledge involves meddling: poking, prodding, stimulating, modeling. The patterns that emerge are real, but they’re also partially created by the process of investigation.
She extends this into a critique of LLMs. They aspire to be an “every person voice” — knowledge blended from all perspectives into a god’s-eye view. But it’s precisely because they don’t have a particular socialization into a finite community that they’re unreliable. Knowledge is perspectival, inherently from a human point of view, inherently finite. A book doesn’t have knowledge; it’s an archival record. You can’t throw engineering manuals and cement into a gorge and expect a bridge. Teams have knowledge. Organizations have knowledge. Knowledge is social and situated.
Chomsky’s Cognitive Horizon
Noam Chomsky offers the episode’s most humbling framing. If we are organic creatures, we have bounds to our cognitive capacities — just as a rat can be trained to run complicated mazes but can never learn a prime number maze because it simply doesn’t have the concept.
“I suspect there’s reasons to suppose we’re like rats. We have capacities. We have a nature. We have a structure. They yield all sorts of extensive range of things that we can do, but they probably impose limits.”
Our best theories bump against the walls of our cognitive horizon. Maybe even knowing where the walls are is valuable in itself.
Some Thoughts
This episode is less a debate with a winner and more a map of the philosophical terrain that AI research is built on — terrain most practitioners never examine.
- The pattern of brain metaphors tracking cutting-edge technology is not just a historical curiosity; it’s a warning about the computational metaphor we’re living inside right now. Every previous metaphor felt just as self-evident to its era.
- Floridi’s reframing is the most practically useful idea here: stop asking “Is X true?” and start asking “Is X useful for this purpose?” The answer to “Is the brain a computer?” is not yes or no; it’s “Tell me why you’re asking.”
- The prediction-vs-understanding tension Jumper and Chirimuuta identify will define the next decade of AI research. We’re getting staggeringly good at prediction while the tools for understanding haven’t advanced at the same rate.
- Chirimuuta’s “cultural historical illusion” deserves more airtime. The certainty in Silicon Valley about AGI’s inevitability looks different when you realize it sits on top of centuries of mechanistic assumptions about mind — assumptions that are philosophical bets, not established facts.
- Bach’s “causal insulator” concept is genuinely novel, but Scarfe’s pushback lands: the “sameness” we see across substrates may be an artifact of our descriptions, not a feature of nature. This is the deepest fault line in the philosophy of mind, and no one in the episode resolves it.