You Cannot Be Simulated
And that may be the strangest clue about what you are
By Peter Cummings, MD
For decades we have quietly absorbed an idea about the human mind that feels almost too obvious to question. The brain, we are told, is an information-processing system. Neurons exchange signals, circuits perform computations, and consciousness emerges as the output of a sufficiently complex biological machine.
From this perspective, the difference between a brain and a computer is not one of kind but of scale. The brain is simply more complicated. More connections. More parallel processing. More layers of interaction. Given enough time and technological progress, it seems reasonable to believe that we could eventually reproduce its function in silicon.
This assumption sits at the foundation of artificial intelligence, neuroscience, and much of modern philosophy of mind, and it may be completely wrong.
The physicist Roger Penrose has spent decades arguing that human consciousness cannot be reduced to computation. His claim is not that the brain is too complex for current computers, nor that we simply lack the engineering capability to simulate it.
His claim is far more unsettling. He argues that the mind is not doing the kind of thing a computer does at all. To understand why, we need to step away from biology for a moment and look at mathematics.
In the early twentieth century, the logician Kurt Gödel proved a result that shook the foundations of formal reasoning. He demonstrated that in any consistent formal system powerful enough to express basic arithmetic, there exist true statements that cannot be proven within that system. No matter how complete the rules appear, there will always be truths that escape them.
Penrose’s insight was to take Gödel’s result seriously as a statement about the limits of computation.
A computer, at its core, is a formal system. It follows rules. It executes algorithms. No matter how sophisticated the machine becomes, it operates within a defined set of procedures. If Gödel is right, then there will always be truths that such a system cannot reach.
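To make the limits of rule-following concrete, here is a small illustrative Python toy (the function names are mine, and Goldbach's conjecture merely stands in for a Gödel-style statement; it is not known to be unprovable). A mechanical procedure can verify individual cases indefinitely, but no finite run of it can, by itself, deliver the universal truth:

```python
def is_prime(n):
    """Trial-division primality test (adequate for small n)."""
    if n < 2:
        return False
    return all(n % d for d in range(2, int(n ** 0.5) + 1))

def goldbach_holds(n):
    """Check one instance: can the even number n be written
    as a sum of two primes?"""
    return any(is_prime(p) and is_prime(n - p) for p in range(2, n // 2 + 1))

# The algorithm can confirm the statement case by case...
assert all(goldbach_holds(n) for n in range(4, 2000, 2))

# ...but the universal claim "every even n > 2 works" is not something
# any finite run can output. If such a statement were true yet unprovable
# in the formal system the machine implements, a search for either a
# proof or a counterexample would simply never terminate.
```

The sketch only checks instances; recognizing that the general statement must hold is exactly the step Penrose argues cannot be reduced to executing the rules.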
Yet human beings appear capable of recognizing those truths.
Mathematicians can see that certain statements must be true even when they cannot be derived from a formal system. That ability suggests that the human mind is not bound by the same limitations as an algorithm.
If that is correct, then the brain cannot simply be running a program. Something else must be happening.
Penrose does not claim to have a complete answer, but he proposes a direction. If the mind is not computational, then the explanation must lie in physical processes that are themselves not reducible to computation. In modern physics, almost everything we understand can be described algorithmically, with one notable exception: quantum mechanics.
At the smallest scales of reality, physical systems do not behave like deterministic machines. They exist in superpositions of possibilities, and when measured, those possibilities resolve into a single outcome through a process that is still not fully understood. Penrose has suggested that this process, the collapse of the quantum state, may involve elements that cannot be captured by any algorithm.
If such non-computable processes exist in nature, and if the brain has access to them, then consciousness might arise from physics that lies deeper than computation.
This is where his collaboration with anesthesiologist Stuart Hameroff enters the picture. Together they proposed that microtubules inside neurons, long treated as mere structural components, might provide the environment necessary for these subtle physical processes to occur. Within these microscopic lattices, the brain may be doing something that resembles computation on the surface while participating in something fundamentally different beneath it.
The theory remains controversial, and many scientists remain skeptical. The brain is warm, noisy, and biologically complex, conditions that seem hostile to delicate quantum effects. Yet the history of science offers a caution here. Systems that appear chaotic at one scale often conceal extraordinary order at another. Structure protects subtlety. Organization allows fragile processes to persist.
Even if the details of Penrose’s proposal turn out to be incomplete or incorrect, the question he raises refuses to go away.
If the mind is not computational, then what exactly is it?
We have already seen that you are not your atoms. The matter in your body is constantly replaced, yet your identity persists. We have seen that the brain does not contain a single location where consciousness resides, but instead produces a dynamic pattern that unfolds across networks of activity.
Now we are forced to confront a deeper possibility. The pattern itself may not be something that can be reduced to a set of rules. If that is true, then the implications are difficult to ignore.
A perfect simulation of your brain, every neuron modeled, every synapse accounted for, might still fail to produce you. It might behave like you, speak like you, and respond like you, yet lack the one thing that matters: the experience of being.
Not because the simulation is incomplete, but because the process giving rise to consciousness is not something a simulation can capture.
This is where the problem becomes personal.
If you cannot be reduced to computation, then you are not just a machine made of biological parts. You are not simply the output of electrical signals moving through neural circuits. Whatever is generating your experience may depend on aspects of reality that lie deeper than the rules we use to describe physical systems.
And that leads to a question that is far more unsettling than it first appears.
If a machine could copy every visible aspect of your brain and still fail to produce your consciousness, then what exactly is the thing reading these words right now?
Not your neurons, not the electrical signals, not the chemistry, but something else.
Something that may depend on layers of reality we do not yet fully understand, operating beneath the level where computation ends. You are not a program running in your brain. You are whatever cannot be reduced to one.
It is worth connecting Gödel's result to the theory of computation. When a machine encounters an undecidable question, it never stops calculating: it can establish neither the existence of a proof nor the non-existence of one, so it simply never halts. This is interesting for AI because the formal arithmetic underlying these systems is itself subject to Gödel's theorem, and many potentially undecidable questions are put before an LLM and its calculator. The "AI solution" is to halt the machine by fiat after a certain number of cycles (or dollars) and present whatever answer it is currently evaluating. This accounts for a great many of the hallucinations, falsehoods, and ethical breaches spilling out of these machines: for questions of this kind, they have no way to decide "I don't know." And because these calculators cannot do that, the owners, developers, and promoters who tell us Gödel does not apply to their machines, or to the "answers" they spew out, are themselves hallucinating, lying, or acting unethically.
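The "halt by fiat" strategy can be sketched directly. Below is a minimal, illustrative Python toy, not a model of any real AI system; all names and the step budget are invented for illustration. One bounded procedure is forced to answer yes or no about whether a program halts, while an honest variant is permitted to report that it does not know:

```python
BUDGET = 1000  # arbitrary step budget: the "fiat" cutoff

def guess_halts(prog):
    """Run prog for at most BUDGET steps and *guess*.
    prog is a zero-argument generator function; each yield is one step.
    Forced to answer True/False, this has no way to say 'I don't know'."""
    g = prog()
    for _ in range(BUDGET):
        try:
            next(g)
        except StopIteration:
            return True        # finished within budget: genuinely halts
    return False               # budget exhausted: a guess, possibly wrong

def honest_halts(prog):
    """The same bounded run, but allowed to admit ignorance."""
    g = prog()
    for _ in range(BUDGET):
        try:
            next(g)
        except StopIteration:
            return True
    return None                # "I don't know": undecided within budget

def slow_but_halting():
    """Halts, but only after BUDGET + 1 steps."""
    for _ in range(BUDGET + 1):
        yield

print(guess_halts(slow_but_halting))   # False: confidently wrong
print(honest_halts(slow_but_halting))  # None: correctly undecided
```

For any fixed budget there is some halting program the guesser misclassifies; the only honest answer at the cutoff is the third value the guesser is never allowed to give.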
We wonder whether these machines are intelligent! Krishnamurti had an idea about intelligence: he said it was the ability to know when we do not know something, and to act on that knowledge. Gödel does not apply to that kind of intelligence.