January 23, 2026 · Podcast · 54min
The Brain Isn't a Computer, and That Matters More Than You Think
The central question Mazviita Chirimuuta raises isn’t whether AI can replicate intelligence. It’s why we find it so obvious that it could. The answer, she argues, lies in centuries of philosophical choices that nudged us toward treating minds as mechanisms and knowledge as disembodied information. Her book The Brain Abstracted traces those choices and their consequences.
The conversation
Chirimuuta joins Machine Learning Street Talk host Tim Scarfe for a wide-ranging discussion on the philosophy of neuroscience. She comes from a dual background: trained as a neuroscientist working on visual cortex at Harvard, she shifted to philosophy of science and now teaches at the University of Edinburgh. The conversation is substantive and probing, with Scarfe pushing back on several points, particularly around whether biological specificity truly matters for cognition.
The cautionary tale of reflex theory
Chirimuuta opens with a historical case study that sets the tone for everything that follows. In the late 19th and early 20th centuries, reflex theory dominated how scientists understood the nervous system. The idea was elegant: stimulus goes in, response comes out, and the brain is essentially a relay station.
The theory was productive. It generated real experimental results. But it also systematically blinded researchers to the complexity of what the nervous system actually does. The reflex arc was a useful abstraction that got mistaken for the full picture.
Her point is not that abstraction is bad. Scientific progress requires simplification. The danger comes when researchers “ontologize” their abstractions, when they move from “it’s useful to model the brain this way” to “the brain is this way.” That shift, from methodological convenience to metaphysical commitment, is where fields go astray.
Is the brain really a computer?
This is the book’s central challenge. Chirimuuta carefully distinguishes between two claims that get conflated:
- Computational modeling is useful for neuroscience. She agrees with this entirely. Computational methods have been enormously productive.
- The brain is a computer. This she rejects. The fact that you can model something computationally doesn’t make it computational. You can computationally model the weather, fluid dynamics, or a rock. That doesn’t make rocks computers.
She invokes Hilary Putnam’s famous argument: any open physical system can, in principle, be mapped onto a computational formalism. If brains are computers because we can model them computationally, then so is your stomach. What makes brains special?
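To make the triviality worry concrete, here is a toy rendering of the mapping (my illustration, not from the episode): take any recorded sequence of a system’s physical states and relabel it as the state trace of some chosen automaton. The mapping is arbitrary, which is exactly the point.

```python
# A toy version of Putnam's mapping argument (illustrative, not from the
# episode): any sequence of distinct physical states can be relabeled,
# step for step, as the execution trace of some computation.

# Pretend these are four successive microstates of a rock.
rock_states = ["s0", "s1", "s2", "s3"]

# The state trace of a two-bit counter, our chosen "computation".
counter_trace = ["00", "01", "10", "11"]

# The "implementation" is nothing more than an arbitrary lookup table.
interpretation = dict(zip(rock_states, counter_trace))

for s in rock_states:
    print(f"physical state {s} -> computational state {interpretation[s]}")

# Since any such table works, "we can map it onto a computation" is too
# cheap a criterion to single out brains as computers.
```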
“I’m only interested in the computational properties of the brain. I don’t need to care about all of that messy biological detail. So, it gives you a kind of tunnel vision… what I take issue with is the kind of ontologization of that, saying that because computational neuroscience is this successful field of inquiry, we know now that the brain is a computer. I think that is not an inference we should make.”
Does computation have causal powers?
Scarfe raises Searle’s argument about computation and causation. Chirimuuta’s position is unambiguous: computation is a mathematical formalism. It doesn’t have causal powers. Concrete physical systems have causal powers.
This matters for AI discourse. The computational theory of mind assumes cognition, a phenomenon in the physical world, can be explained by something (computation) that exists outside the physical realm of causation. Chirimuuta finds this explanatory gap underappreciated. If cognition is a physical phenomenon, why do we reach outside the concrete realm of causation to explain it?
The biology you can’t prune away
Scarfe offers an interesting counterargument via the lottery ticket hypothesis in neural networks. After training a dense network, you can prune 90% of its connections and it still performs well; the hypothesis holds that the surviving sparse subnetwork was effectively there from initialization. Maybe evolution is similar: billions of years of “training” produced all this biological complexity, but much of it could be vestigial. Maybe you can snip it away and keep the abstract computational core (a minimal sketch of the pruning step follows below).
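Here is a minimal, self-contained sketch of the pruning step behind that intuition, assuming PyTorch is available; the toy task and the 90% figure are illustrative, and the full lottery ticket procedure additionally rewinds the surviving weights to their initial values and retrains.

```python
# A minimal sketch of global magnitude pruning (assumes PyTorch; toy task).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Toy regression data: fit y = sin(x). The task itself is arbitrary.
x = torch.linspace(-3, 3, 256).unsqueeze(1)
y = torch.sin(x)

model = nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
loss_fn = nn.MSELoss()

# Train the dense network.
for _ in range(2000):
    opt.zero_grad()
    loss_fn(model(x), y).backward()
    opt.step()
print(f"dense loss:  {loss_fn(model(x), y).item():.5f}")

# Prune: zero the 90% of parameters with the smallest magnitudes.
with torch.no_grad():
    all_weights = torch.cat([p.abs().flatten() for p in model.parameters()])
    threshold = all_weights.quantile(0.90)  # cutoff that keeps the top 10%
    for p in model.parameters():
        p.mul_((p.abs() >= threshold).float())  # zero everything below it

# The sparse survivor often stays surprisingly close to the dense network.
print(f"pruned loss: {loss_fn(model(x), y).item():.5f}")
```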
Chirimuuta pushes back on energy grounds. Biological brains do extraordinary computation on roughly 20 watts. Artificial neural networks are orders of magnitude more expensive. If biological cognition were truly wasteful, carrying all this unnecessary biological baggage, evolution would have pruned it long ago. The efficiency itself suggests the biology is doing important work.
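For a rough sense of scale (my back-of-envelope figures, not from the episode), compare the brain’s 20 watts with a single current datacenter GPU:

```python
# Back-of-envelope power comparison. The 20 W brain figure is from the
# conversation; ~700 W is the rated power of one NVIDIA H100 GPU.
brain_watts = 20
gpu_watts = 700
print(gpu_watts / brain_watts)  # 35.0: one accelerator draws ~35 brains'
# worth of power, and frontier training runs use thousands of them for weeks.
```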
She also points to emerging evidence that neuronal signaling isn’t as special as we thought. Brain processes are outgrowths of biochemical signaling happening everywhere in the body. Neuronal cells aren’t distinctively “cognitive” cells; they’re extensions of how all cells communicate. If neural function is deeply entangled with metabolic cellular processes, it becomes harder to argue that a non-living machine could have the same functionality.
Haptic realism: knowledge through interaction
One of Chirimuuta’s most distinctive contributions is her framework of “haptic realism.” The core idea: knowledge isn’t about passively reading the source code of the universe. It’s something we actively construct through physical interaction with the world.
This contrasts with what she calls the “god’s eye view” tradition in philosophy, the idea that ideal knowledge means a neutral, disembodied perspective that takes in all information from above. She argues this is a mistake about what knowledge fundamentally is. We are finite, bounded knowers. Our knowledge is grounded in discrete sensory experiences, shaped by our cultural context, and constructed through an arduous process of interaction.
An LLM that “just sits there and sucks in all the information in the world” is, in her view, the latest incarnation of this philosophical error: the dream of disembodied, universal knowledge that ignores human finitude.
Agency as causal disconnectedness
The conversation takes an interesting turn when Scarfe proposes thinking about agency as “apparent causal disconnectedness.” Agents are entities whose behavior isn’t fully determined by their immediate surroundings. You have consistent beliefs that carry across situations. Your actions respond to things distant in time and space, not just proximal stimuli.
Chirimuuta agrees this captures something important. Non-living physical systems are largely constrained by what’s proximal to them. For you, something that happened in childhood can be as relevant as what’s happening in the room right now. This sensitivity to the distal, to things not immediately driving you, may be a key dividing line between cognitive systems and merely physical ones.
She also engages with Dennett’s three stances (physical, design, intentional) but diverges on their hierarchy. Where Dennett treats the physical stance as ontologically primary and the intentional stance as merely useful, Chirimuuta advocates metaphysical neutrality. Why not take intentional phenomena at face value? If talking about representations and intentionality is useful within science, why demand that it be grounded in a non-intentional physical story?
Can machines understand?
On Searle’s Chinese Room and the question of machine understanding, Chirimuuta’s position is nuanced. She doesn’t claim AI is impossible. She argues that human cognition is deeply integrated: language is bound up with sensory-motor engagement, perception is shaped by linguistic concept formation. The idea that you could detach a language faculty, replicate it in an LLM without embodiment, and get understanding “in the same way that we do” strikes her as implausible.
Scarfe pushes: what about embodied robots? Chirimuuta concedes this moves in the right direction. But she adds a deeper point: biological organisms have intrinsic meaning and salience. Being alive means always being in a situation that’s “problematic” for you. Challenges from your environment create genuine stakes. A robot would need something analogous to that existential precariousness before we could meaningfully talk about understanding.
Heidegger and the dream of transcending finitude
The conversation’s philosophical climax comes through Heidegger’s critique of technology. Chirimuuta draws on his idea that AI (cybernetics, in his time) is the culmination of a metaphysical tradition, a centuries-long trajectory from the aspiration to transcend embodiment and materiality toward a “spiritual world of pure information.”
She connects this to how we experience technology today: the cloud “floats above us,” presented as weightless, immaterial, disconnected from resource shortages and energy consumption. We like to imagine the information age as untethered from physical reality, even though it obviously isn’t.
“I think there’s something wrong with thinking that what you are as a knower is the kind of being that can float free of your environment and just sort of regard it from above and take in all the information that’s there as it is by itself without your impact on the world.”
The experiment we’re running on children
The conversation ends on a sobering note. Chirimuuta worries about children growing up with less face-to-face social interaction. Developmental psychology shows young children are predisposed to attend to faces, gaze, and social cues. If screen time reduces those formative experiences, the effects on socialization could be significant.
Scarfe raises the thought experiment of a child raised in a sealed chamber with only a computer and internet access. The child could learn clever things, but would they develop normally? We’re running a softer version of that experiment right now, Chirimuuta notes, and we don’t know the results.
She references Harry Harlow’s monkey experiments from the 1950s, where infant monkeys deprived of maternal contact developed severe behavioral problems. The parallel is uncomfortable: are we running a gentler version of deprivation on an entire generation?
Some thoughts
This conversation is valuable less for its conclusions than for the questions it forces you to sit with.
- The reflex theory parallel is the most instructive part. Not because computational neuroscience will fail the way reflex theory did, but because it illustrates a recurring pattern: successful abstractions get mistaken for ontological truths. Every era’s dominant metaphor for the brain (hydraulic machine, telephone switchboard, computer) was productive and misleading in roughly equal measure.
- Chirimuuta’s energy efficiency argument against the “vestigial biology” hypothesis deserves more attention in AI discourse. If evolution produced efficient cognition at 20 watts, and all that biological machinery were truly unnecessary, the selection pressure to eliminate it would have been enormous. The persistence of biological complexity is itself evidence that it’s doing important work.
- The deepest challenge she poses isn’t to AI capability but to AI researchers’ self-understanding. The assumption that intelligence can be understood through computation alone may tell us more about the philosophical tradition we inherited than about intelligence itself.