The World You See Isn't What You Think
Part 5 of the Core Concepts series. Physical reality as interface, why brains don’t generate consciousness, and what that unmanned probe really does.
We’ve spent four articles dismantling the scaffolding most people use to think about reality. Space isn’t the container. Time isn’t the medium. The physical world isn’t the substrate.
We ended Part 4 with three questions: if the physical world isn’t bedrock, what is it? What are brains doing, if not generating consciousness? And what happens when a recording carries something from one conscious aperture to another?
I’m not saying the physical world is fake. I’m not saying physics is wrong. I’m not saying your coffee table doesn’t exist. Every empirical finding in physics, chemistry, neuroscience — all of it stands. The equations work. The predictions hold. The regularities are genuine.
What I’m saying is weirder than “it’s all an illusion.” I’m saying: the physical world is real, it’s stable, it has genuine patterns — and it’s not what you think it is.
The Interface
Donald Hoffman has a metaphor I want to borrow, because he explains it well and deserves the credit.
Think about your computer desktop. You see icons — a file folder, a trash can, a document. You drag the document to the trash and it disappears. The desktop is real in the sense that it’s genuinely there, you can interact with it, and the interactions have consistent results. But nobody thinks the little folder icon is the file. Nobody thinks the blue rectangle on the screen is the document itself. The desktop is an interface — a stable, useful rendering that lets you interact with underlying processes without needing to understand the voltage states in memory chips.
Hoffman’s point is that perception works the same way. We don’t see reality as it fundamentally is. We see a rendering — an interface — that’s stable and useful but not identical with what’s actually going on underneath.
TNT agrees with Hoffman on this. Physical reality is not fundamental ontology. It’s an experiential interface arising from consistent patterns of actualization. Where TNT parts ways with Hoffman is on what grounds the interface. Hoffman grounds it in evolution — fitness beats truth, organisms evolve perceptions that help them survive rather than perceptions that show them what’s real. But that explanation has a circularity problem: fitness is defined in terms of physical environments, organisms, and reproductive success — all of which are, on the interface view, themselves part of the interface. You can’t explain the interface by appealing to things that only exist within it.
TNT avoids this. The interface is grounded in Awareness and coherence constraints. The global coherence boundary, B₀, defines which potentials are actualizable — think of trying to cram a square peg into a round hole. If a potential doesn’t fit through B₀, it can’t become experience. Not because anything is blocking it, but because it was never coherent in the first place. Physical regularities — the “laws of physics” — are expressions of those constraints. They hold not by brute contingency but because anything that violates them isn’t coherent. The interface has the patterns it does because of coherence constraints, not because of evolutionary pressure operating within the interface itself.
Now I want to push Hoffman’s metaphor further, because the desktop version is missing something important.
Think about a multiplayer video game instead. An MMO, or a first-person shooter — any game where multiple players share a world. You and I can stand in the same spot in the game and see the same building. The game world has consistent rules, stable geography, reliable physics. If I push a boulder, you see it move. It feels like we’re in the same world.
But we’re not sharing a screen. I’m rendering the game on my machine. You’re rendering it on yours. We each experience the world through our own interface. The reason the world appears consistent between us isn’t that we’re accessing some mind-independent game reality — it’s that we’re both constrained by the same underlying rules. Same server state. Same physics engine. Same constraints.
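The shared-server analogy can be made concrete with a toy sketch (purely illustrative, not a real game engine, and all names here are invented for the example): one authoritative state, one set of rules, and two independent renderers. The players’ views agree because both are constrained by the same state and rules, not because they share a rendering.

```python
# Toy sketch of the multiplayer analogy: consistency between players
# comes from shared constraints, not from shared access to a rendering.

server_state = {"boulder_x": 10}  # the authoritative shared state

def apply_physics(state, push):
    # The same rules constrain every player's world.
    return {**state, "boulder_x": state["boulder_x"] + push}

def render(state, player):
    # Each client produces its own, private view of the shared state.
    return f"{player} sees boulder at x={state['boulder_x']}"

server_state = apply_physics(server_state, push=5)
print(render(server_state, "alice"))  # alice sees boulder at x=15
print(render(server_state, "bob"))    # bob sees boulder at x=15
```

Neither player ever touches the other’s rendering; agreement falls out of the common state and rules alone.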
That’s what TNT means when it says different conscious apertures share constraints. We experience the “same” world because we actualize within the same coherence constraints — B₀ is common, and the accumulated state of all actualizations conditions Bµ for everyone. But each conscious aperture actualizes its own experience. I don’t access your rendering. You don’t access mine. The intersubjective consistency is real — and it’s a consequence of shared constraints, not shared access to a mind-independent physical reality.
Now, when I say the physical world isn’t “bedrock,” I don’t mean it’s fake. Think about how physicists describe solid objects. Your kitchen table feels solid. It looks solid. You can bang your knee on it and it’ll hurt. But at the particle level, that table is an ungodly number of particles vibrating in such close, constrained proximity that they appear solid. The solidity is real — you can set your coffee on it and it won’t fall through — but it isn’t what you think it is at the fundamental level. The interface claim is like that. Physical reality is real. It has genuine patterns and genuine consequences. It just isn’t the bottom layer of what exists.
This is neither “only my mind exists” nor “I’m seeing raw, unfiltered reality.” The interface is real. It constrains what can be actualized. Its patterns are genuine and stable. But it’s the experiential surface of something deeper — Awareness, coherence constraints, and the act of selection by which potential becomes actual.
What Brains Actually Do
If physical reality is an interface, then brains are part of that interface.
This is where it gets uncomfortable for most people. We’ve been told — by neuroscience, by popular science, by the entire intellectual culture of the last century — that brains generate consciousness. Neural activity produces experience. Damage the brain, damage the mind. The correlation is so tight, so reliable, so well-documented that it feels like the case is closed.
And here’s the thing: every one of those correlations is real.
TNT doesn’t deny neural correlates of consciousness. It reinterprets what they are. In TNT’s terms, neural correlates are co-constrained actualizations. When you have a visual experience, there is a corresponding pattern of neural activity. But the neural activity doesn’t cause the experience. Both — the experience and the neural pattern — are actualizations constrained by the same coherence conditions. The experience is a conscious aperture selecting from accessible potential. The neural pattern is a feature of the interface through which that selection occurs. They’re correlated because they’re co-constrained, not because one produces the other.
Think of it this way. When you move a file on your desktop, the icon moves and the underlying data changes. The icon movement doesn’t cause the data change. The data change doesn’t cause the icon movement. Both reflect the same operation, seen at different levels. The correlation is perfect — and it’s not causal.
Brain damage disrupts experience not because brains generate consciousness but because it alters the interface configuration through which a conscious aperture accesses potential. Damage the interface, you change what’s accessible. The science is preserved. Only the philosophy changes.
But here’s where it gets interesting — because there are cases where the “brain generates consciousness” story struggles, and where the interface reinterpretation handles them cleanly.
The coma patient who remembers everything.
There are documented cases of people in deep coma — minimal brain activity, no behavioral responsiveness, every clinical indicator suggesting “nobody’s home” — who, upon waking, report detailed experiences from the period of unconsciousness. Some report conversations that happened in their hospital room. Some report experiences with no external correlate at all.
On the standard view, this is baffling. If the brain generates consciousness and the brain is barely functioning, where did the experience come from? Where were the “memories” stored if the neural architecture for memory consolidation was offline?
On the TNT view, it’s not baffling at all. The biological interface was severely degraded — reduced to a pinprick. But a conscious aperture doesn’t need a fully operational interface to select. It needs some access to potential. The interface narrowed dramatically, but the aperture didn’t close. The Cᵢ was still selecting, still actualizing, still writing to Memory. Not “brain memory” — ontological Memory, the structured accumulation of all actualizations that persists regardless of neural architecture. The person recalls because actualizations occurred and were retained. The science sees minimal brain activity and can’t explain the recall. TNT sees a degraded interface with an aperture still operating through it.
Deep anesthesia.
Contrast this with deep anesthesia — not light sedation, not twilight states, but the full pharmacological suppression used in major surgery. Under deep anesthesia, there is no recall. Not “fuzzy recall” or “fragmentary recall” — nothing. The person goes under, and the next moment of experience is waking up. There is no subjective duration for the intervening period.
On the TNT view, this is a case where the Cᵢ genuinely isn’t selecting through this interface. Not “can’t remember the selections” — wasn’t making them. The interface condition under deep anesthesia is one in which selection isn’t occurring through it.
But — and here’s what matters — actualizations don’t stop happening around the anesthetized person. The surgical team is acting, decisions are being made, the world is proceeding. All of those actualizations contribute to Memory and condition Bµ. So when the person’s Cᵢ resumes selecting, their AccessibleTᵤ has shifted. The world moved. Not because “time passed” in some container sense — Part 3 already showed that time is induced, not fundamental — but because the accumulated state of actualizations, including ones they weren’t party to, has altered the coherence constraints they now select within.
(A quick aside: you may have heard stories of people experiencing awareness during surgery — witnessing their own operations from above, recalling conversations between surgeons. These aren’t cases of consciousness emerging from nowhere. They’re cases where the pharmacological suppression was incomplete — the interface wasn’t fully closed, and the aperture was still operating through it. They’re the coma-with-recall pattern, not a counterexample to it.)
Now set these two cases side by side. If consciousness is what brains produce, both should look the same — reduced brain function, no consciousness, no memory. But they don’t look the same. The difference maps precisely onto the TNT distinction: degraded interface with Cᵢ still selecting (coma with recall) versus interface condition under which Cᵢ isn’t selecting through it (deep anesthesia). The standard view has to treat these as anomalies. TNT predicts exactly this pattern.
Split-brain.
One more case. When the corpus callosum — the bundle of fibers connecting the brain’s hemispheres — is severed, something remarkable happens. But it’s not one remarkable thing. It’s two.
Some split-brain patients behave as though they have two independent streams of processing that nonetheless belong to a single experiential subject. The left hand and the right hand act independently, but there’s a unified experiencer who registers the conflict.
Others behave as though there are two genuinely different people. Different preferences. Different responses. Different personalities operating through the two hemispheres.
The standard view has no clean way to handle this. If the brain generates one consciousness, how does severing a fiber bundle sometimes yield two? And why does it sometimes create two and sometimes not?
TNT has resources here, because TNT doesn’t tie one conscious aperture to one body. The body is interface. Nothing in the framework says one body can only have one Cᵢ-interface combination. It’s a question of interface configuration, not biological organism.
In some cases, what you get is one conscious aperture with a bifurcated interface — one Cᵢ, two interface channels. One selector, two access points. This looks like independent operation with a unified subject.
In other cases, what you get is two conscious apertures, each with their own interface — two Cᵢ, two separate loci of selection, two trajectories being written to Memory. This looks like two genuinely different people.
Alter the interface, you alter the conditions of the Cᵢ-interface pairing. The standard view can’t even frame this distinction. TNT can — and the distinction maps to what clinicians actually observe. We’ll explore this in more depth in a future piece, but the point for now is straightforward: the brain is interface, not generator. Change the interface configuration, you change the conditions of experience. You don’t “split” consciousness, because consciousness was never produced by the thing you cut.
What That Unmanned Probe Does
Here’s something you’ve probably never thought about.
There’s a space probe on its way to an exoplanet — a world orbiting a distant star that we can barely resolve as a point of light. No telescope has ever shown us its surface. No conscious aperture has ever actualized experience of what’s there. The probe was designed and built by people — conscious apertures who engineered its instruments, programmed its sensors, aimed it at a destination no one has seen. The probe arrives. Its instruments capture data — spectrographic readings, surface images, atmospheric composition. The data is written to the probe’s own storage, out there, in the void.
Maybe the probe makes it back and a scientist decodes the data. Maybe it doesn’t. Maybe it’s pulled into a gravity well and destroyed, and nobody ever sees what it captured.
Here’s the question: before anyone decodes that data, what is it?
In TNT, it’s a recorded structure — Tᵣ. A subset of coherent potential that has been captured and locked via a recording process. The probe’s instruments identified patterns within the interface and preserved them — held them in stasis. The data in the probe’s storage has no experiential content. It has no semantic meaning. It is a locked configuration of potential, available for actualization by a Cᵢ if and when one decodes it. Until then, it’s inert.
This is the key distinction: the data becomes information only when a conscious aperture decodes it. Before that, it’s pattern. Not “information nobody’s read yet” — not information at all. Information is interpretive, not intrinsic. It exists only in the act of decoding by a Cᵢ.
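The claim that a pattern carries no single informational content until it is decoded has a concrete everyday analogue in computing, sketched here with arbitrary byte values chosen for the example: the same four bytes yield different “information” depending entirely on which decoder the reader brings to them.

```python
import struct

# The same four bytes -- a pattern with no intrinsic meaning.
raw = bytes([0x42, 0x28, 0x00, 0x00])

# Three decoders, three different pieces of "information":
as_float = struct.unpack(">f", raw)[0]   # big-endian float32 -> 42.0
as_int = struct.unpack(">I", raw)[0]     # big-endian uint32 -> 1109917696
as_text = raw.decode("latin-1")          # the characters 'B(' plus two NULs

print(as_float, as_int, repr(as_text))
```

The bytes themselves never change; what changes is the interpretive act applied to them — which is the analogy’s point, not a proof of the metaphysics.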
And if the probe does make it back, and a scientist does open the data, pull up the spectrographic readings, and actualize an experience from them — that experience belongs to the scientist. It’s not a replay of “what’s there” on that distant world. It’s a new actualization, constrained by the recorded pattern, produced by the scientist’s own Cᵢ. The probe captured potential from a place no conscious aperture had ever accessed. The scientist actualizes experience. These are different events.
This requires a distinction we haven’t introduced yet: the difference between a conscious aperture and a system that has no aperture at all.
Every Cᵢ — every conscious aperture — can in principle decode recorded structures. That’s part of what it means to be a locus of selection. But Cᵢ admits degrees of aperture. A narrow aperture supports basic experiential access. A wider aperture supports richer actualization — including the capacity to form semantic, symbolic, and abstract interpretations. A dog actualizes experience but isn’t decoding spectrographic data. A human with significant cognitive limitations still makes genuine choices — still has free will, still has a conscious aperture — but the interpretive range available through that interface is narrower. The scientist pulls apart atmospheric composition readings and infers habitability. The difference isn’t whether they’re conscious. It’s how wide the aperture opens.
The hard line isn’t between kinds of Cᵢ. It’s between Cᵢ and Non-Cᵢ — between systems that have a conscious aperture and systems that don’t. And what draws that line is undetermined selection. A Cᵢ is constituted by its first free choice. No choice, no aperture. That’s the boundary.
Now notice something about the probe. It’s a Non-Cᵢ system — it has no conscious aperture. It can capture and store Tᵣ, but it can’t actualize experience and it can’t decode what it’s captured. Yet it can record, because its recording capacity traces to Cᵢ agency. People designed it. People built the instruments. People wrote the code. The capacity for recording doesn’t originate with the probe — it originates with the conscious apertures who created it.
This is true of every recording device. Cameras, sensors, seismographs, radio telescopes — all Non-Cᵢ systems that capture Tᵣ because their recording capacity traces to Cᵢ design. They execute a physical process that results in recorded structure, but the capacity for that process is never self-originating. Pull the chain far enough and you always find a Cᵢ at the origin.
Which raises the obvious contrast: what about patterns that nobody initiated?
The growth rings of a tree. The stratification of rock. The cosmic microwave background. These are patterns within the interface, arising from coherence constraints playing out — Bµ evolving, potentials resolving. But nobody designed a process to capture them. No Cᵢ initiated their recording. They aren’t recorded structures. They’re features of how the interface develops.
A Cᵢ can interpret them — read the tree rings as climate history, read the rock layers as geological sequence, read the cosmic background as evidence of early conditions. That interpretive act confers meaning. But it doesn’t retroactively make them recordings. The meaning is conferred by the interpreter. The pattern was always there. The information wasn’t.
There’s a case from physics that sharpens this point. In delayed-choice experiments — and especially the quantum eraser — physicists set up a situation where a particle’s behavior appears to depend on whether a “recording” of its path will be available for decoding. If the which-path information is preserved, the particle behaves one way. If it’s erased without anyone decoding it, the particle behaves as though the recording never existed.
The standard interpretation agonizes over this. How can a future decision about whether to erase data affect past particle behavior?
TNT doesn’t agonize. Time isn’t fundamental — “past” and “future” are interface-level descriptions, and there is no temporally extended trajectory to retroactively alter. Each actualization occurs at the Now. But notice what the experiment reveals about information itself: a physical pattern — one produced by a designed experiment, a genuine recorded structure — has no informational status until a Cᵢ decodes it. Erase the pattern without decoding, and it’s as though the information never existed. Because it didn’t. The pattern existed. The information requires an interpreter.
We’ve talked about what happens when the interface degrades — coma, anesthesia, split-brain. We’ve talked about how interface conditions shape what a conscious aperture can access. But we haven’t asked what happens when the interface doesn’t just degrade. When it terminates. When the biological system that configured a conscious aperture’s access to potential ceases to function entirely.
We haven’t talked about death.
And when we do, we’ll find that the framework is remarkably honest — both about what it can say, and about what it deliberately refuses to.
Next: “What Death Doesn’t Erase” — what the framework says about termination, what it refuses to say, and why that honesty matters.