The Map Eaters
On computational ontology, Gödel’s limits, and why we keep mistaking our best tools for ultimate truth
~~We the People~~ We the Map Eaters
Geocentrism worked because we put ourselves at the center of our maps and then genuinely forgot we’d done the putting. Our original sin; our first map dinner.
Humans are entitled to exactly one thing: baseless self-importance. We excel at it. We’re so committed to our own centrality that we keep building frameworks for describing reality, then forgetting they’re frameworks and announcing we’ve discovered what reality fundamentally is. The specific framework changes. The confident proclamation doesn’t.
My brother recently sent me a physics paper arguing that Gödel’s incompleteness theorems prove the universe can’t be a simulation. The logic: computation has inherent limits, therefore reality must transcend computation, therefore not a simulation. The counterargument from others: everything is computation anyway, so the question is irrelevant. Both sound sophisticated. Both invoke serious mathematics. Both are making the same error people have made about Newtonian mechanics, quantum wavefunctions, and every other powerful descriptive tool we’ve built: mistaking the map for the territory, then arguing about what the territory is made of by analyzing the paper.
What Formalization Actually Does
Computation isn’t what reality is. It’s the boundary of what we can formalize. It describes the portion of reality that submits to algorithmic structure, what fits into stepwise procedure, what aligns with how information-processing systems operate.
This makes computation extraordinarily powerful. We model weather systems, simulate molecular behavior, compress human knowledge into vectors that machines manipulate. Computation extends our cognitive reach in ways that would have seemed like magic a few decades ago. It’s perhaps our best tool for making certain patterns legible.
But “best tool for making legible” is not “fundamental substrate of what exists.”
When someone claims everything is computation, they’re making an ontological assertion based on an epistemological capability. The logic runs: “This formalization method works really well for describing lots of things, therefore it must be what those things fundamentally are.”
That’s like concluding the territory is made of longitude and latitude because GPS navigation works.
The Gödel Gambit (And What It Reveals)
The physicists deploy Gödel to show computation has limits. Any consistent formal system powerful enough to express basic arithmetic contains truths it cannot prove. Therefore, they conclude, reality must be grounded in something non-computational, something that transcends algorithmic description.
But Gödel’s incompleteness theorems constrain *formal systems*. They’re limits on the map, not the territory. What Gödel proved is that our tools for systematic description have inherent boundaries. That’s a fact about formalization, not a revelation about reality.
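For readers who want the precise claim being leaned on here, the first incompleteness theorem (in its usual Gödel–Rosser form) can be stated compactly; note that every quantifier in it ranges over properties of the *theory*, not of the world:

```latex
% First incompleteness theorem (Gödel–Rosser form):
% for any consistent, effectively axiomatized theory T
% that interprets basic arithmetic, there exists a
% sentence G_T such that
\[
T \nvdash G_T
\quad\text{and}\quad
T \nvdash \neg G_T .
\]
% The undecidable sentence G_T is constructed from T's own
% axioms and proof predicate. The limit is a property of the
% formal system T -- the map -- not of whatever territory
% T is being used to describe.
```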
The physicists look at these boundaries and say: “Aha! Reality must transcend computation!” But this preserves the same assumption: that our categorical frameworks (computational versus non-computational, algorithmic versus non-algorithmic) are carving reality at its joints rather than carving reality at the joints of our perceptual apparatus.
The computationalists claim: “Computation describes so much, it must be fundamental.” The anti-computationalists claim: “Computation can’t describe everything, so something else must be fundamental.” Both positions assume human frameworks reveal ontological truth. The error is identical, just reversed.
The Mantis Shrimp Problem
The mantis shrimp has sixteen types of color photoreceptors. Humans have three. You might think it perceives “more” colors, some expanded palette that includes everything we see plus extras. But that’s not what happens. It has a fundamentally different color ontology. Its perceptual reality is incommensurable with ours, not additive.
The mantis shrimp isn’t seeing additional colors we’re missing. It’s structuring color space through categories our visual system cannot formulate. If mantis shrimp developed color theory, it wouldn’t look like human color theory with bonus chapters. The categories themselves would be alien.
Apply this to computation. What if computation isn’t a universal feature of reality but a framework generated by our specific cognitive architecture? What if it’s how minds structured like ours process pattern and structure, but not how reality “thinks” about itself (if that question even makes sense)?
Other hypothetical minds, perceiving different primitives, organizing causality through different categories, might not have the concept of “computation” at all. They wouldn’t be computing worse or better. They’d be formalizing reality through frameworks as foreign to us as sixteen-receptor color space.
We know our perceptual scope is narrow. We know our sensory apparatus operates in constrained ranges. We’ve built formal systems (mathematics, logic, computation) that extend our reach. But those systems inherit the structural constraints of the minds that built them.
Computation is how *we* make certain aspects of reality systematic. It’s the interface between human cognition and machine processing. That doesn’t make it the interface reality uses with itself.
The Recurring Trap
Here’s the pattern that keeps catching us: a new framework works spectacularly well within its domain. We can suddenly describe, predict, manipulate things that were previously intractable. The framework’s power is undeniable.
So we make the leap. We stop saying “this framework describes certain patterns really well” and start saying “reality is made of this.” We mistake explanatory power for ontological revelation.
Newtonian mechanics worked so well that absolute space and time seemed obviously true. Then relativity worked so well that curved spacetime seemed fundamental. Then quantum mechanics worked so well that wavefunctions seemed like bedrock. Now information theory works so well that information seems like the ultimate substrate.
Each framework extends our reach. Each makes new patterns legible. None has managed to transcend the fact that it’s a framework, a map drawn by minds like ours, for minds like ours, describing the territory through the lens of what minds like ours can formalize.
The computationalist and anti-computationalist positions are the contemporary version of arguments about whether reality is fundamentally mechanical (seventeenth century), fundamentally electromagnetic (nineteenth century), fundamentally quantum (twentieth century). Each framework works brilliantly. Each generates true predictions. None is the thing itself.
What We’re Actually Describing
Computation is our best cross-species language. It’s the clearest interface between human understanding and machine processing. It’s extraordinarily useful for formalizing patterns that submit to algorithmic structure. This is where it excels; that, not to state the obvious, is the whole point.
I am admittedly biased; Barbie’s job is computer. I have never claimed to have an opinion on reality, but of course I have an abundance of opinions about all sorts of algorithms.
But don’t confuse utility with ontology. Don’t mistake the prosthetic for the limb. The question isn’t whether computation is useful (it obviously is) or whether it describes important patterns (it clearly does).
The question is whether frameworks generated by human cognition, constrained by our sensory apparatus and neural architecture and information-processing limits, can meaningfully describe what reality fundamentally *is* at all.
Maybe computation is the substrate. Maybe it’s something non-algorithmic (whatever that means). Maybe it’s something our cognitive architecture cannot even formulate as a coherent question, the way a mantis shrimp cannot ask “what color exists between red and green” in terms we’d recognize.
The Honest Position
The simulation question is a distraction. It assumes computation is ontologically fundamental and argues about who’s running the program. But that assumption is already the error.
The real question is whether any framework generated within our cognitive boundaries can describe what lies beyond those boundaries. And the answer, I think, is that we’re always describing the map. We never quite touch the territory itself, no matter how convinced we are that this time, finally, we’ve found the fundamental layer.
This isn’t nihilism about knowledge. We can know things. We can make better maps. We can extend our reach through mathematics, computation, formal systems that push past direct perception.
But we should stop eating the map and calling it dinner. We should stop mistaking the fact that GPS works for the claim that the territory is made of coordinates. We should recognize that computation is a powerful tool for describing patterns we can systematize, not a revelation about what reality is made of.
The territory remains, vast and indifferent, beyond all our maps. That’s fine. The maps are useful. They’re just not the thing itself.
The Part We Keep Forgetting
Every time we develop a better formalization (calculus, formal logic, computation, neural networks), we mistake the tool’s effectiveness for metaphysical truth. Every time we make a more detailed map, we convince ourselves we’ve finally touched the territory.
The pattern is so reliable it stops being funny. We’re very good at building frameworks and consistently terrible at remembering they’re frameworks. We’re map makers who keep eating our own cartography, convinced that *this* map, finally, tastes like the territory itself.
It never does. It never will.
And we’ll forget this again the moment someone invents a better map.
The computationalists will continue insisting everything is algorithmic substrate. The anti-computationalists will continue deploying Gödel to prove transcendence. Both will miss the point with impressive consistency. Because the point isn’t about which framework wins. The point is that we’re holding frameworks at all.
We’re not bad at making maps. We’re spectacularly good at it. We’re just catastrophically bad at remembering they’re maps. That distinction matters more than which particular map we’re currently eating.
The territory doesn’t care what we call it. It doesn’t care how we formalize it. It doesn’t care whether we think it’s computational or non-algorithmic or divine or mechanical or quantum or informational. It remains what it is, indifferent to our categories, untouched by our frameworks, vast beyond our perceptual boundaries.
And that, perhaps, is the only thing we can know with any confidence: the territory exceeds the map. Always has. Always will.
The question is whether we can hold that knowledge longer than it takes to invent the next framework.
Unfortunately, as a species, we have an alarmingly right-skewed distribution, so I wouldn’t bet on it. I may sound misanthropic, but I assure you I am.
P.S. big fan of maps though

