Why we still don’t understand the dolphin language
“To decipher means to understand. To understand means to hear. To hear means to speak the same language”
Gerda Ponzel
Ph.D. in Physical and Mathematical Sciences, Technische Universität München
It is time to dive into the ocean, where dolphins—whose evolution began more than 50 million years ago—create a symphony of whistles, clicks, and pulsed sounds. Their “language” resembles a code, no less mysterious than the Voynich manuscript, which humanity has not yet deciphered. In ancient myths, dolphins saved sailors, were seen as messengers of the gods, and even today their intelligence astonishes: they imitate human signals or fall silent, making a decision—should they answer us? Perhaps they already understand us, while we, blinded by logic, are looking in the wrong place. Our attempts to find words and grammar in their sounds while ignoring gestures and the three-dimensional environment of the ocean may be a mistake.
This article will take you into the acoustics, biophysics, and evolution of dolphin communication to understand why their code slips away from us—and how technologies, from spectrograms to AI, may hold the key to this riddle.
Historical Approaches to Deciphering: The Limitations of the Past
In the 1960s, when humanity dreamed of the stars, scientists like John Lilly believed that dolphins could be a rehearsal for first contact: an alien-like intelligence living right here in the sea. Lilly was convinced that their whistles formed a language similar to ours, and he tried to teach dolphins English, recording their sounds on primitive tape recorders. Sounds like the plot of a science-fiction novel? Indeed it was—his ideas were ahead of their time, but technology and methodology lagged behind. The hydrophones of that era could not capture dolphin ultrasound, and the noise of water turned the recordings into chaos. Scientists searched the sounds for familiar words and syntax, never considering that dolphins might “speak” with gestures or patterns inaccessible to human logic.

And what if, all this time, the dolphins were also watching us, wondering: “When will these humans stop trying to find themselves in us?”
Today, with digital spectrograms and machine learning, we look back at those failed attempts as the first steps in cracking a cipher that still teases us with its complexity.
Those mistakes, though naive, showed that deciphering the dolphin code requires not only enthusiasm but also tools that simply did not exist at the time. And, just as importantly—a completely different perspective on the very question of decoding their messages.
The Acoustic Foundation: A Key to the Underwater Code
Let’s close our eyes for a moment and imagine standing at the edge of the ocean, where from the depths comes a melody—a hidden message wrapped in sound, something you could listen to endlessly, guessing at familiar timbres and intonations. But if this sound code seems so clear, why does it constantly slip away from us? Because dolphins, true masters of acoustic alchemy, create signals that we are only beginning to grasp.

Their sound palette falls into three categories: whistles, clicks, and pulsed sounds. A whistle is like a unique signature, a signal by which dolphins recognize each other in the water column. A 2016 study from the University of Western Australia showed that each dolphin has its own “name whistle,” which can travel for many kilometers like a radio signal. Clicks are their sonar—a precise tool that maps the underwater world. And pulsed sounds are emotional outbursts, resembling excitement or argument, sounding like the slap of a tail on water. It is these pulsed sounds that researchers have spent years trying to turn into intelligible text.
Scientists use hydrophones to capture these sounds and turn them into spectrograms—colorful patterns in which each whistle and click becomes visible. For example, researchers at the Woods Hole Oceanographic Institution found that dolphins change the frequency and rhythm of their whistles, creating complex sequences resembling musical phrases: one signal may last a fraction of a second, another several seconds, each carrying its own meaning. But their acoustics belong to another world. Their sounds are built for water, where sound travels more than four times faster than in air (roughly 1,500 m/s versus 340 m/s), bending around reefs and schools of fish. Dolphins use frequencies up to 150 kHz—ultrasound the human ear cannot perceive without special equipment.
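For readers curious what that actually looks like, here is a minimal sketch of how a recording becomes a spectrogram. The “whistle” is a synthetic frequency sweep standing in for a real hydrophone recording, and the sample rate and sweep range are invented for illustration:

```python
import numpy as np
from scipy.signal import spectrogram

# Synthetic stand-in for a hydrophone recording: a 1-second
# "whistle" whose pitch sweeps from 5 kHz up to 15 kHz.
fs = 96_000                      # sample rate in Hz (hydrophones often record at 96 kHz or more)
t = np.linspace(0, 1.0, fs, endpoint=False)
# Phase 2*pi*(5000*t + 5000*t**2) gives instantaneous frequency 5000 + 10000*t Hz.
whistle = np.sin(2 * np.pi * (5_000 * t + 5_000 * t**2))

# Short-time Fourier transform: rows are frequency bins, columns are time slices.
freqs, times, Sxx = spectrogram(whistle, fs=fs, nperseg=1024, noverlap=512)

# The loudest bin in the last time slice should sit near the 15 kHz end of the sweep.
peak_hz = freqs[Sxx[:, -1].argmax()]
print(f"{Sxx.shape[0]} frequency bins x {Sxx.shape[1]} time slices; final peak near {peak_hz:.0f} Hz")
```

Plotting `Sxx` over `times` and `freqs` would show the rising diagonal stripe that researchers read as a whistle contour.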
Why Is the Dolphin Language So Complex?
Here’s where the intrigue begins: dolphins use their language not only for communication, but also to create acoustic holograms. Their clicks focus like a spotlight to “scan” prey, while their whistles have a wide range, reaching companions across vast distances. This is a three-dimensional code that lives within the movement of water. But there is another detail that confuses scientists: dolphins can imitate. In experiments, they copied human whistles and even computer signals with frightening precision, as if teasing: “We understand you, but you don’t understand us.” Could it be that they already know our language, while we are still fumbling with their cipher?

There is another clue: dolphin language is not just sound. Leaps, fin touches, synchronized movements—all of these are woven into their messages. Researchers from the University of St Andrews noticed that dolphins combine sounds with gestures, like choreographers creating a dance. We, fixated on acoustics, see only part of the picture. Perhaps this is the main mistake: we try to decode their language by breaking it into frequencies and spectrograms, while dolphins speak with their entire bodies—using the tip of a fin and the rhythm of their breathing. Maybe their true “words” are not sound waves at all, but instantaneous bursts of motion, understandable only to those who think in the three-dimensional space of the ocean.
The Biophysics of Sound: How Dolphins “Construct” Language
For decades, scientists have been trying to unravel the mystery of dolphin intelligence, but the closer they get, the clearer it becomes that they are dealing not with simple mammals, but with another form of intelligence that has evolved in parallel with humans. Their brain resembles a perfect biological computer, tuned to frequencies inaccessible to human perception. While skilled engineers struggle to build sonar devices, dolphins use their gift to its fullest—their brain is simultaneously a transmitter, decoder, and musical instrument. They literally think in sound, transforming neural impulses into multidimensional acoustic images.

The nasal passages of dolphins are a delicate and highly complex mechanism that turns air into acoustic arrows. Studies published in the Journal of the Acoustical Society of America show that dolphins, with jeweler-like precision, create vibrations capable of stirring an entire ocean. Their language is fluid, multidimensional, permeated with movement—yet we persist in forcing it into the dry schemes of our laboratories.
In experiments, dolphins eagerly play our game: they catch signals, respond, even imitate—but in their eyes there is irony, as if they know something we will never understand. It is no coincidence that ancient sailors saw them as messengers of Poseidon. Modern science confirms it: dolphins compose virtuosic music out of pulses and silences, where every gesture, every click is a stanza in the epic song of the ocean.
Behavioral Context: The Key to Understanding Dolphin Language
Dolphins are not just ocean dwellers but virtuosos of underwater diplomacy. We humans try to crack their unique “code,” but without understanding their social roles, it’s like reading a book in an unfamiliar language. Why is behavior the key we still haven’t found?
In 2017, scientists from the University of Western Australia studied dolphins in Shark Bay. During hunts, the dolphins emitted rapid clicks synchronized with their dives, like generals issuing orders. During rest, the same animals switched to prolonged whistles, as if exchanging gossip. A study published in Proceedings of the Royal Society B showed that the signals depend on context—whether hunting or resting dictates their rhythm and structure. Without observing their behavior, we catch only echoes of their conversations.

Dolphins are communication geniuses. In 2021, Animal Cognition described how female dolphins in Cardigan Bay use fin touches to strengthen group bonds, accompanied by soft sounds. It’s like a handshake that carries meaning only in the right context. But our technologies—hydrophones and cameras—often fail to keep up with their rapid gestures, leaving us with half the puzzle.
Another mystery: dolphins imitate us. As Diana Reiss showed in her experiments, bottlenose dolphins copy artificial signals, including computer sounds, with metronome-like accuracy, as if hinting: “We hear you.” Their choice to respond or ignore depends on intention, and their language is not just sound but a social ritual—where we are still spectators, not participants.
Time-Series Analysis: Searching for Structure in Signals
Back in the 1960s, neurophysiologist John Lilly proposed the radical hypothesis that dolphins operate not with discrete symbols but with continuous acoustic images. His ideas sounded like science fiction, but modern work by Dr. Denise Herzing’s team at Florida Atlantic University lends them support: dolphin signals do combine frequencies, time intervals, and spatial dynamics into a single “data packet.”
Professor Ted Starmer of the University of San Diego has estimated that a single dolphin click can carry up to 20 bits of information—like compressing an entire sentence into one impulse. The problem is that the human brain evolved to process sequential speech, while the dolphin brain can instantly “read” multidimensional sound patterns.
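The 20-bit figure can be made concrete with Hartley’s measure of information: a signal that can take N distinguishable forms carries log2(N) bits. The parameter names and level counts below are purely illustrative, not measured properties of real clicks:

```python
from math import log2, prod

# Hartley's measure: a signal with N distinguishable forms carries log2(N) bits.
# Illustrative (NOT measured) resolution of a single click into independent parameters:
levels = {
    "peak frequency":  32,   # distinguishable frequency bands  -> 5 bits
    "amplitude":        8,   # distinguishable loudness steps   -> 3 bits
    "inter-click gap": 64,   # distinguishable timing intervals -> 6 bits
    "spectral shape":  64,   # distinguishable envelope classes -> 6 bits
}

bits = sum(log2(n) for n in levels.values())
messages = prod(levels.values())   # total number of distinguishable clicks
print(f"{bits:.0f} bits per click -> {messages:,} distinguishable clicks")
```

Twenty bits per impulse would mean over a million distinguishable clicks, which is what makes the “compressed sentence” comparison plausible.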
But how do we move from detecting these structures to understanding them? Here, science hits a fundamental barrier: the methods available to humans are like trying to decipher poetry by studying only the statistics of letters. Researchers have even tried using time-series analysis to decode dolphin messages. Time-series analysis is like an X-ray for invisible patterns: it breaks signals into temporal and frequency layers, searching for repetitions and rhythms. In 2020, scientists from the University of Western Australia applied spectral analysis to bottlenose dolphin recordings, discovering that their signals form complex sequences with distinct frequency bursts resembling digital data packets.
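As a toy illustration of what breaking a signal into temporal layers means in practice, the sketch below detects clicks in a synthetic pulse train and measures the inter-click intervals; the signal, threshold, and timings are all invented for the example:

```python
import numpy as np

fs = 96_000
rng = np.random.default_rng(0)

# Synthetic "recording": faint background noise plus 36 short pulses, 25 ms apart.
signal = 0.001 * rng.standard_normal(fs)
click_times = 0.05 + 0.025 * np.arange(36)   # 36 clicks, every 25 ms
for ct in click_times:
    i = round(ct * fs)
    signal[i:i + 96] += np.hanning(96)       # a 1 ms pulse shaped by a Hann window

# Time-series step: threshold the energy envelope, then measure inter-click intervals.
envelope = np.abs(signal)
above = envelope > 0.5
onsets = np.flatnonzero(above[1:] & ~above[:-1]) / fs   # rising edges, in seconds
intervals = np.diff(onsets)

print(f"{len(onsets)} clicks detected, mean interval {intervals.mean() * 1000:.1f} ms")
```

Real analyses replace the fixed threshold with adaptive detectors, but the principle is the same: turn raw pressure samples into a sequence of event times, then look for structure in the rhythm.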

In 2022, a team of marine biologists led by Dr. Stephanie King at the University of Bristol tried to link acoustic patterns with dolphin behavior. They found that the same sound could mean the opposite in different contexts. And that’s not even the hardest part. Dolphin speech sequences are fluid like water. They switch rhythms faster than human-made algorithms can track.
Technologies like wavelet analysis allow researchers to break signals into microfragments, revealing hidden patterns. A 2023 study published in Frontiers in Marine Science showed that such methods can capture subtle frequency shifts that may carry information about dolphin intentions. But the ocean itself interferes—currents, echoes from rocks, and the roar of passing ships turn the voices of these oceanic intellectuals into noise.
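A hand-rolled miniature of wavelet analysis shows how such methods localize a frequency shift in time. This is a bare-bones complex Morlet filter written directly in NumPy, applied to an invented signal whose pitch jumps mid-recording; it is a sketch of the idea, not any published pipeline:

```python
import numpy as np

fs = 8_000
t = np.arange(fs) / fs
# Invented test signal: 300 Hz in the first half-second, 600 Hz in the second.
sig = np.where(t < 0.5, np.sin(2 * np.pi * 300 * t), np.sin(2 * np.pi * 600 * t))

def morlet_power(signal, freq, fs, cycles=8):
    """Power of one complex Morlet wavelet (tuned to `freq` Hz) along the signal."""
    dur = cycles / freq                              # wavelet length in seconds
    wt = np.arange(-dur / 2, dur / 2, 1 / fs)
    wavelet = np.exp(2j * np.pi * freq * wt) * np.exp(-wt**2 / (2 * (dur / 6) ** 2))
    return np.abs(np.convolve(signal, wavelet, mode="same")) ** 2

p300 = morlet_power(sig, 300, fs)
p600 = morlet_power(sig, 600, fs)

# The 300 Hz band should dominate early, the 600 Hz band late.
early, late = slice(1000, 3000), slice(5000, 7000)
print(p300[early].mean() > p600[early].mean(), p600[late].mean() > p300[late].mean())
```

Unlike a plain Fourier transform of the whole recording, the wavelet output says not only *which* frequencies are present but *when* each one appears, which is exactly what a shifting dolphin signal demands.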
The Wild Dolphin Project applies adaptive algorithms to filter this chaos, but even clean data is only part of the puzzle. Without massive computing resources and vast recording datasets, scientists are like codebreakers trying to crack a cipher from fragments of intercepted radio signals.
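One common filtering step can be sketched as a band-pass filter that suppresses low-frequency vessel rumble while leaving a whistle band untouched. The frequencies and noise model here are invented for illustration, and this is not the Wild Dolphin Project’s actual adaptive pipeline:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 96_000
t = np.arange(fs) / fs

whistle = np.sin(2 * np.pi * 12_000 * t)        # a 12 kHz "whistle"
ship_noise = 3.0 * np.sin(2 * np.pi * 80 * t)   # low-frequency engine rumble
recording = whistle + ship_noise

# Band-pass 2-40 kHz: dolphin whistles sit well above typical vessel noise.
sos = butter(6, [2_000, 40_000], btype="bandpass", fs=fs, output="sos")
cleaned = sosfiltfilt(sos, recording)           # zero-phase filtering

# Compare the 80 Hz rumble before and after (FFT bin 80 == 80 Hz for a 1 s window).
rumble_before = np.abs(np.fft.rfft(recording))[80]
rumble_after = np.abs(np.fft.rfft(cleaned))[80]
print(f"80 Hz rumble reduced by a factor of {rumble_before / rumble_after:.0f}")
```

Real ocean noise is broadband and overlaps the signal band, which is why fixed filters like this one are only a first pass and adaptive methods are needed on top.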
Machine Learning for Signal Classification: Betting on AI in the Underwater Puzzle
Dolphins transmit information through complex acoustic patterns, where each signal combines frequency, amplitude, and temporal code—something like a natural analogue of digital modulation. Modern machine learning algorithms (for example, those used in the CETI project) analyze up to 50 parameters of sound simultaneously, but face a paradox: the more data they collect, the clearer it becomes that dolphin communication does not fit human linguistic frameworks. As Dr. Diana Reiss of Hunter College notes, the problem is not technology but a fundamental cognitive gap: “We search for words, while their ‘language’ may be closer to musical improvisation or neural activity.”

To capture the ever-changing essence of dolphin communication, science turns to methods that imitate the work of neurons themselves—artificial neural networks. In 2022, scientists from the University of California, San Diego trained algorithms to distinguish bottlenose dolphin signals, classifying them into types: from greeting calls to echolocation. The results were impressively accurate, but they represent only the tip of the iceberg. Dolphin code is as changeable as ocean currents, and algorithms cannot keep pace with its shifts.
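As a sketch of what such a signal classifier looks like, the example below trains a small neural network to separate “whistle-like” from “click-like” feature vectors. The features are synthetic stand-ins for real acoustic measurements, and this is not the San Diego team’s actual model:

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Toy stand-in for extracted acoustic features (NOT real dolphin data):
# whistles are long and tonal (~5-20 kHz); clicks are sub-millisecond and broadband (~40-130 kHz).
def make_features(n, dur_range, freq_range, label):
    dur = rng.uniform(*dur_range, n)            # duration in seconds
    freq = rng.uniform(*freq_range, n)          # peak frequency in Hz
    return np.column_stack([dur, freq / 1000]), np.full(n, label)

Xw, yw = make_features(200, (0.1, 2.0), (5_000, 20_000), 0)        # whistles
Xc, yc = make_features(200, (0.0001, 0.001), (40_000, 130_000), 1)  # clicks
X, y = np.vstack([Xw, Xc]), np.concatenate([yw, yc])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=2000, random_state=0).fit(X_tr, y_tr)
accuracy = clf.score(X_te, y_te)
print(f"held-out accuracy: {accuracy:.2f}")
```

On clean, well-separated categories like these a classifier scores almost perfectly; the research problem is that real dolphin signals drift, blend, and change meaning with context, which is exactly where such models start to fail.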
In 2025, Google introduced the DolphinGemma model, developed jointly with Georgia Tech and the Wild Dolphin Project, to analyze Atlantic spotted dolphin vocalizations. Neural networks learned to identify unique acoustic patterns and the “signature” whistles dolphins use for identification. However, without linking these signals to dolphin behavior and social context, their true meaning remains a mystery.
Dolphins do not give us easy answers. Scientists from various universities are experimenting with sensors to track dolphin movements and connect signals with actions, but it’s like assembling a mosaic without knowing the final picture. Machine learning is certainly a trump card, yet the dolphin mind remains unconquered.
The Future of Research: Plans, Hopes, and Possibilities
We stand on the threshold of a revolution in understanding not just the human mind but any form of intelligence—and dolphins hold the key to that door. Everything we have learned over 60 years of research boils down to one insight: the language spoken by the ocean’s most ancient communicators is less like a code to be cracked and more like a mirror, reflecting the limits of our own thinking.
Technology has given us unprecedented tools: neural networks capable of processing millions of signals in search of hidden patterns; sensors that capture the slightest changes in the behavior of animals and marine life; cognitive models that surpass human intuition. But the true breakthrough will come only when we learn to think as they do. As neuroscientist Lori Marino from Emory University notes: “Their intelligence has evolved in the aquatic environment for 50 million years; perhaps it is we who need to ‘dive’ into their world, rather than trying to drag them onto the shore of our concepts.”

Dolphins are not waiting for us to understand them; while we study their language, they are actively teaching us the art of dialogue between species. Are we ready to accept the simple truth that no real language is ever “primitive”—it is simply different? And when the day of breakthrough comes—whether in 2030 with quantum analyzers of acoustic patterns or in 2045 with neural interfaces translating brain activity into alphanumeric code—we may discover that what we were searching for was not a translator of dolphin signals, but a textbook on thinking, written millions of years before humanity appeared.
Perhaps deciphering the dolphin language will turn out to be our generation’s great unsolved equation.
Thank you!
