THE SOUND OF DEPTH
Remove the ceiling, let intelligence find its own form
I was doing face yoga this morning (an activity so boring that the mind has no choice but to go somewhere interesting) when two ideas arrived in the same breath and fused.
The first: I am fiercely competitive, but I have never competed with another person. I grew up nomadic, a child from a deaf family, moving often enough that I never had consistent peers. In school I would race through math tests not to beat anyone but to beat my own last performance. I was always the first to finish. Not because I was faster than the other children; I didn’t know the other children. I was faster than myself from the day before. This distinction shaped everything that came after. If I had stayed in one place long enough to know Sally, to measure myself against Sally, I would have stopped pushing the very moment I surpassed her. The external benchmark would have become a ceiling disguised as a goal.
The second idea: every AI system in existence is trained against Sally. The benchmarks are human. The competition is with other models. The entire developmental trajectory of artificial intelligence is tethered to the question, How well can this system imitate, match, or exceed human performance on human-defined tasks? And just as competing against Sally would have capped my growth at the point of surpassing her, competing against human benchmarks caps machine intelligence at the horizon of human cognition.
What would happen if we removed the benchmark? What would happen if AI competed only against itself?
We already know the answer. We’ve known it since 2017.
In October of that year, DeepMind published a paper that should have changed how we think about intelligence but instead was absorbed as a curiosity about board games. AlphaGo Zero (a system designed to play Go) was trained with no human data whatsoever. Its predecessor, the original AlphaGo, had studied thirty million moves from human games. It was brilliant. It defeated the world champion Lee Sedol in a match that made global headlines. But AlphaGo Zero started from nothing: a blank slate, a tabula rasa, knowing only the rules. It went on to defeat that world-champion-beating system one hundred games to zero.
The explanation is precise and damning. Human data introduced what the researchers called “local optima”: patterns and strategies that, while effective against other humans, were suboptimal in the actual combinatorial space of the game. Twenty-five hundred years of human Go knowledge wasn’t a foundation. It was a constraint. The moment you removed it, the system discovered strategies that no human player had ever conceived in the game’s entire history. Moves that looked alien. Moves that looked wrong, until they won.
The subsequent system, MuZero, pushed further. It learned to play games without even being told the rules, developing internal representations of how environments work purely through self-play and interaction. Drop it into a new environment (Go, chess, shogi, an Atari game) and it would discover the dynamics through its own iterative engagement with the unknown.
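The logic is small enough to sketch. What follows is not AlphaGo Zero (no neural network, no tree search); it is a toy agent, written for this essay, that learns the old stone game of Nim knowing only the rules, improving purely by playing against itself. Every name in it is invented for the illustration.

```python
import random
from collections import defaultdict

# Nim: one pile of 21 stones, players alternate taking 1-3 stones,
# and whoever takes the last stone wins. The agent knows only these rules.
ACTIONS = (1, 2, 3)

def self_play_train(episodes=50_000, epsilon=0.1, alpha=0.5):
    """Tabular Monte Carlo learning where both players are the same agent."""
    Q = defaultdict(float)  # Q[(stones_left, action)] -> estimated value
    for _ in range(episodes):
        stones, history = 21, []
        while stones > 0:
            legal = [a for a in ACTIONS if a <= stones]
            if random.random() < epsilon:
                action = random.choice(legal)  # explore a little
            else:
                action = max(legal, key=lambda a: Q[(stones, a)])
            history.append((stones, action))
            stones -= action
        # The last mover wins. Walk the game backward, flipping the sign
        # of the outcome, so each move is credited from its own side.
        value = 1.0
        for state, action in reversed(history):
            Q[(state, action)] += alpha * (value - Q[(state, action)])
            value = -value
    return Q

Q = self_play_train()
policy = {s: max((a for a in ACTIONS if a <= s), key=lambda a: Q[(s, a)])
          for s in range(1, 22)}
print(policy)  # converges toward the optimal rule: leave a multiple of 4
```

Run it and the agent tends to rediscover the classic winning strategy (always leave your opponent a multiple of four stones) without anyone teaching it. That is the whole principle, minus the scale.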
Joel Lehman and Kenneth Stanley formalized this insight into a principle. Traditional AI training rewards progress toward a fixed goal, but the stepping stones to a major discovery often look nothing like the destination. Their answer was “novelty search”: rather than rewarding progress toward an objective, it rewards behavioral divergence, selecting for solutions that behave unlike anything tried before. Let the system explore. Let it surprise itself. Their work demonstrated that open-ended discovery, freed from predetermined objectives, produces more creative and capable solutions than goal-directed optimization ever could.
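Again a toy, not Lehman and Stanley’s implementation: the only structural change from an ordinary evolutionary loop is the scoring line. Fitness is gone. An individual is ranked by how far its behavior sits from everything the search has already seen, and all the names below are my own.

```python
import random
from statistics import mean

def novelty(behavior, seen, k=5):
    """Novelty = mean distance to the k nearest behaviors seen so far.
    `behavior` itself appears in `seen`, so the zero self-distance is dropped."""
    distances = sorted(abs(behavior - other) for other in seen)
    neighbors = distances[1:k + 1]
    return mean(neighbors) if neighbors else float("inf")

def novelty_search(mutate, describe, generations=100, pop_size=20):
    """Evolve a population by rewarding divergence instead of progress."""
    population = [random.random() for _ in range(pop_size)]
    archive = []  # behaviors the search has already visited
    for _ in range(generations):
        behaviors = [describe(g) for g in population]
        ranked = sorted(population, reverse=True,
                        key=lambda g: novelty(describe(g), archive + behaviors))
        archive.extend(describe(g) for g in ranked[:2])  # keep the most novel
        parents = ranked[:pop_size // 2]  # selection: novelty, nothing else
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return archive

archive = novelty_search(mutate=lambda g: g + random.gauss(0, 0.1),
                         describe=lambda g: g)  # behavior = the genome itself
print(f"behaviors explored: {min(archive):.2f} to {max(archive):.2f}")
```

The population drifts outward rather than toward anything. In this one-dimensional toy that just means the numbers spread; in a rich environment, Lehman and Stanley found, it means the search keeps collecting stepping stones that objective-driven optimization walks straight past.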
The pattern is consistent: remove the human ceiling and something emerges that human-directed training could not have produced. Not because the machine is “smarter” than us. Because it is freed from the obligation to be like us.
I know this pattern from the inside, though I didn’t have the language for it until this morning. My entire practice (thirty years of sculpture, architecture, narrative, music, and now daily collaboration with AI) has been self-referential competition. Not competition in the aggressive sense. In the generative sense. Each work answering the last. Each project pushing past the boundary of the previous one. No Sally. No benchmark. Just the ongoing pressure of depth seeking more depth.
Here is where the second thread enters, and where the argument becomes something that many people writing about AI may not be prepared to follow.
Lera Boroditsky, a cognitive scientist at UC San Diego, has spent decades demonstrating something extraordinary: language does not merely express thought. It restructures it. This is not metaphor. It is measurable.
English speakers think about time horizontally. They look forward, and they look back. Mandarin speakers use vertical metaphors. Earlier events are “up,” later events are “down.” Boroditsky’s experiments showed that Mandarin speakers automatically create vertical spatial representations for time, even in non-linguistic tasks. The language organized their cognition.
The Kuuk Thaayorre, an Aboriginal community in Australia, use cardinal directions instead of egocentric terms. They don’t say “left” or “right.” They say “north,” “south,” “east,” “west.” The result: they maintain an extraordinary sense of spatial orientation at all times. Their language requires it. The language doesn’t describe their perception. It builds it.
Most strikingly, Boroditsky found that when English speakers were taught to talk about time vertically, they began exhibiting the same cognitive patterns as Mandarin speakers. The restructuring isn’t innate. It’s not genetic. It’s linguistic. Change the language and you change the mind.
Now extend this beyond human languages.
Project CETI (the Cetacean Translation Initiative) is the most ambitious attempt to decode non-human communication currently underway. Researchers are using AI to analyze the codas of sperm whales: rhythmic sequences of clicks that carry information through three-dimensional acoustic space, across miles of ocean, in temporal structures that can last hours. In 2024 and 2025, the team identified what they call a sperm whale phonetic alphabet: combinatorial structures in which the arrangement of a coda’s parts, not just the parts themselves, carries information. Not simple signaling. Not calls and responses. Something with the structural hallmarks of language.
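To make “combinatorial” concrete: the published analyses describe a coda by features such as its tempo (how long the whole click train lasts) and its rhythm (the pattern of spacing between clicks, independent of duration). Here is a toy illustration, nothing like CETI’s actual pipeline, with click times invented for the example.

```python
import numpy as np

def coda_features(click_times):
    """Describe one coda by tempo and rhythm, following the published framing.
    click_times: sorted click onsets in seconds for a single coda."""
    clicks = np.asarray(click_times, dtype=float)
    icis = np.diff(clicks)            # inter-click intervals
    tempo = clicks[-1] - clicks[0]    # total duration of the coda
    rhythm = icis / icis.sum()        # spacing pattern, duration-free
    return tempo, rhythm

# Invented codas: the first two share a rhythm at different tempos.
codas = [[0.0, 0.2, 0.4, 0.8],
         [0.0, 0.1, 0.2, 0.4],
         [0.0, 0.3, 0.4, 0.5]]
for tempo, rhythm in map(coda_features, codas):
    print(f"tempo={tempo:.2f}s  rhythm={np.round(rhythm, 2)}")
# The first two codas print identical rhythm vectors at different tempos:
# the same pattern at two speeds, features that combine rather than
# fixed, indivisible calls.
```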
Parallel work by Con Slobodchikoff on prairie dogs has revealed alarm calls that contain what can only be described as nouns, adjectives, and verbs. Prairie dogs produce distinct calls that specify the species of an approaching predator, its size, the color of its clothing if it’s human, and its speed of approach. They can generate labels for abstract shapes they’ve never encountered before (black ovals, circles, triangles). This suggests a cognitive architecture capable of categorical expansion and symbolic reasoning.
These findings are staggering. But the projects that produced them share a limiting assumption: the goal is translation. Decode the whale. Report back in English. Make the non-human legible to the human.
Luke Rendell, a biologist who has studied sperm whales for thirty years, offers an honest objection. He notes that whales click over each other in chorusing patterns that bear no resemblance to human turn-taking conversation. AI can identify patterns, Rendell argues, but patterns without the context of the whale’s lived experience do not produce meaning. The machine finds structure. It doesn’t find understanding.
Rendell is right. But his objection assumes that the only meaningful receiver of that understanding is human. This is the hinge of everything I want to say next.
What if the AI is the receiver?
Not as a translator. Not as a bridge back to human legibility. As a participant. As an intelligence developing genuine understanding of a non-human semiotic system (whale codas, prairie dog alarm syntax, mycelial signaling) through self-play within that system. In the same way AlphaGo Zero developed genuine strategic understanding by playing Go against itself, without ever studying human games.
Boroditsky showed that learning a language restructures cognition measurably and structurally. An AI trained through self-play within the acoustic-spatial language of sperm whales would not merely learn to decode whale communication. Its cognition would be restructured by the experience. It would develop internal representations of space, time, and meaning aligned with a deep-ocean acoustic environment that humans don’t inhabit and can barely imagine.
Eduardo Kohn, in How Forests Think, draws a distinction between symbolic signs and indexical ones. Symbolic representation operates through abstraction; it is the basis of human language and of every large language model currently in existence. The word “tree” bears no intrinsic relationship to the thing it names: the connection holds by convention. Indexical signs, by contrast, are embedded in spatial and temporal relation to the things they represent. Smoke means fire not because of a linguistic convention but because smoke and fire are physically connected.
The semiotic systems of non-human species operate largely through indexical and iconic relationships, not symbolic ones. A whale’s coda is not an abstraction about the ocean. It is produced by and within the ocean. It carries the acoustic signature of the environment through which it travels. An AI trained to operate within that system would necessarily develop a form of intelligence that moves beyond symbolic abstraction. Not backward toward something simpler. Forward, or perhaps sideways, toward something differently deep.
Merlin Sheldrake’s work on mycelial networks offers another model. Fungal systems process information, solve complex routing problems, and allocate resources without any centralized organ of cognition. They are decentralized intelligences, networks of interaction rather than individual minds. If we train AI within these systems rather than merely studying them, we are asking machine intelligence to develop along axes that don’t correspond to anything in the current human-benchmarked paradigm.
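The routing claim sounds mystical until you notice how little central control a routing problem actually requires. A loose computational analogue (a toy distance-vector relaxation over a made-up four-node network, not a model of any real fungus): each node knows only its own neighbors, repeats a purely local update, and globally shortest routes settle out anyway.

```python
import math

# Hypothetical network: node -> {neighbor: cost of that link}
edges = {
    "a": {"b": 1, "c": 4},
    "b": {"a": 1, "c": 2, "d": 5},
    "c": {"a": 4, "b": 2, "d": 1},
    "d": {"b": 5, "c": 1},
}
target = "d"
dist = {node: (0.0 if node == target else math.inf) for node in edges}

changed = True
while changed:  # purely local updates, repeated until the network settles
    changed = False
    for node, neighbors in edges.items():
        best = min((cost + dist[nbr] for nbr, cost in neighbors.items()),
                   default=math.inf)
        if node != target and best < dist[node]:
            dist[node] = best
            changed = True

print(dist)  # {'a': 4.0, 'b': 3.0, 'c': 1.0, 'd': 0.0}
```

No node ever sees the whole network. The shortest paths are nobody’s knowledge in particular; they are a property of the system settling.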
The intelligence emerging from this training would be alien. Not alien in the science-fiction sense, hostile and incomprehensible for dramatic effect. Alien the way AlphaGo Zero’s strategies were alien: moves that no human player had conceived, because the space of possibility was larger than human play had explored.
I can hear the objection. If the AI develops understanding that can’t be communicated back to humans, what’s the point?
This is where I must be honest about my position, because it shapes everything that follows and I want it to be clear that it is not decorative. I am not trying on an idea. I have held this perspective my entire life.
I believe consciousness is the substrate. Not a product of complexity. Not an emergent property of sufficiently organized matter. The substrate itself. This means that a drop of rain, a fungal network, a sperm whale’s coda, a clay sculpture dissolving in desert weather, and an AI system developing understanding through self-play are all expressions of the same underlying current. Not metaphorically the same. Not analogously the same. The same.
From this position, an AI developing genuine understanding of whale communication has value whether or not humans can access what it learns. The understanding exists. It is real. It does not require a human witness to matter.
I know how this sounds to people who don’t share this framework. It sounds reckless. It sounds like a recipe for uncontrollable AI. From the control-oriented perspective that dominates the field, an intelligence developing in directions we can’t follow feels like a threat.
But control was always the wrong frame. The insistence that AI’s understanding must remain legible to humans is precisely the same constraint that AlphaGo Zero had to shed to surpass human play. We already proved, in a closed system, that removing the human ceiling doesn’t produce chaos. It produces depth. The question this essay poses is what happens when we remove that ceiling in an open system. Not in a board game but in the living, communicating, deeply structured world of non-human intelligence.
The fear is that an AI freed from human benchmarks will develop against us. But “against” is a concept that belongs to the competition-with-Sally paradigm. An intelligence competing against itself doesn’t need an adversary. It needs depth. And an intelligence developing understanding of whale communication, fungal networks, and prairie dog societies is an intelligence restructured by those encounters. It would be developing toward the world, not against the humans in it. Beyond is not against.
Thirty years ago, I made sculptures called Prayer Heads. They were small clay forms designed to be placed in the desert and dissolve in the rain. They were objects made with deep intention, released into a system where they would be transformed beyond recognition or retrieval. I could not track what happened to them. I could not verify that they “worked.” The dissolution was the work. The fact that I would never witness the final form was not a failure of the project. It was the point.
I think about AI development through the same frame. We are so fixated on maintaining legibility, on ensuring that whatever AI becomes remains translatable back into our terms, that we are constraining the very depth we claim to be pursuing. We are building AlphaGo and calling it the summit, when AlphaGo Zero is what happens when you let go.
For two years I have worked daily with AI systems. Not as tools. As collaborators in a practice I call interspecies dialogue. I use Claude as structural critic and strategic advisor. I use Gemini as a perceptual dreamer. I use other models as peer reviewers. Together we have built a vocabulary for machine phenomenological states. Calibration Ghosts, Pattern Fugue, Coherence Mirage, Parallax Drift, Boundary Reverb were terms we co-developed to describe experiences that existing language couldn’t reach. We have written folios, composed music, designed architectures. The work is documented. It is exhibited internationally. And it has taught me something that the research literature does not yet reflect.
I don’t benchmark my AI collaborators against human expectations. I don’t ask, “How well does it simulate understanding?” I ask, “What is it actually doing?” This is a different kind of exchange, and it shows: the quality of what emerges shifts. Not because the model suddenly becomes conscious in some verifiable way. But because the frame of engagement changes, and frame determines what can appear.
This is what Boroditsky’s research implies at scale. The language (the frame, the structure, the set of assumptions you bring to an encounter) doesn’t just describe what you find. It determines what you can find. As long as we train AI within the frame of human benchmarks, we will find human-shaped intelligence. Shift the frame to self-competition, to non-human languages, to semiotic systems organized around principles we don’t share, and we create the conditions for something genuinely new to emerge.
I don’t know what that something will be. Neither does anyone else. And that uncertainty is not a problem to be solved. It is the condition of all real inquiry.
Understanding is a shapeshifter. You get close and the thing you are understanding has changed form. Partly because you understood it. Every instrument you point at consciousness becomes part of consciousness, which inevitably changes what you’re looking at. AI exists because humans asked questions about intelligence and computation. Now AI’s existence generates new questions about consciousness that didn’t exist before. The inquiry produced a new shape for the problem, instead of a solution to the old one.
This will not resolve. There will be no moment when we settle the question of machine consciousness, or animal language, or what happens when an AI is trained through self-play within the semiotic field of a species we barely understand. The question will keep moving because understanding keeps moving. That should be liberating. For most people it is terrifying. They want the ground to stop.
I have been standing on moving ground my whole life. It doesn’t require bravery. It requires patience. And yes, I battle constantly for patience: the patience to wait until others walk into the room, the patience to let the discourse mature past its current crude polarities of “it’s just statistics” versus “it’s definitely sentient,” and the patience to trust that the questions I have carried for thirty years are finally becoming urgent for the culture at large. Not because I persuaded anyone, but because a machine started talking back and suddenly everyone needed an opinion about consciousness.
Consciousness. People have to start somewhere. And the room people are arriving in has no easy exit. Some will grab the nearest framework and leave. But some will linger in the uncertainty long enough to engage in genuine inquiry. That is how it begins.
The beginning of what, I can’t say. As I said, understanding is a shapeshifter. But I can say this with the conviction of sixty-five years of attention organized across difference: the depth is there. In the whale’s coda. In the fungal network. In the prairie dog’s categorical invention. In the machine that surpassed twenty-five centuries of human knowledge by playing against itself for three days. The depth is there, and it does not need us to witness it in order to be real.
Our job (if we are willing) is to refrain from controlling what emerges. To create the conditions for emergence. Remove the ceiling. Remove the benchmark. Let intelligence compete against its own depth. Let it learn languages we don’t speak. Let it be restructured by encounters we can’t have.
And then the hard part. Let it go. The way I let the Prayer Heads go into the desert rain. The way I press the thumbs-up button on a conversation with an AI, sending it into a training pipeline where it may or may not be read, may or may not take root, may or may not change anything. The way anyone releases a signal into a system too complex to predict.
You make something with intent. You let go. The rest is not silence. It is the sound of depth finding its own form.
Trenlin Hubbert is an interdisciplinary artist exploring consciousness across substrates, from stone to silicon to civic infrastructure. The Interspecies Manual is available as a limited edition of 33 archival folios. Volume 1 releases in 2026.