Perceptualized Data
“Humans possess more than one sense,” says Lutz Bornmann of the Max Planck Society in Germany, in his article The Sound of Science, “and there is no biological reason not to use the other ones.”
When he puts it that way, it sounds quite reasonable. Much of Western culture is geared toward visual representation of data–in images and text–but humans also have a highly evolved and very sophisticated system of auditory processing. Could we not also represent data through sound?
The answer is yes–we already are.
Medical technology is replete with examples of auditory representation of data, with pitch, tone, and frequency conveying different information. Heart rate monitors and pulse oximeters come to mind. But medical applications of sonified data have the potential to go beyond monitoring to impact diagnosis, treatment, and self-treatment as well.
Data sonification has become a field in its own right, and there’s currently a lot of interest in sonification of other types of data as an alternative to visualization. And there have been some fascinating developments not only in medical technology, but also in astronomy, financial data processing, climate science, and other fields.
An Alternative to Visualization
Data visualizations link information to graphical elements; data sonification uses non-speech audio to convey complex data patterns. Parameter mapping, the primary sonification method, maps data points to sonic elements such as pitch, loudness, timbre, and duration. The mapping schema defines how each dimension of the data relates to each property of the sound.
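The core idea of parameter mapping can be sketched in a few lines of Python. This is an illustrative example, not code from any particular sonification toolkit: a series of readings is mapped linearly onto a pitch range, so that the lowest value becomes the lowest note and the highest value the highest.

```python
# Minimal parameter-mapping sketch (illustrative only): each data point is
# mapped linearly onto a frequency range, and its position in the series
# determines when it sounds.

def map_to_pitch(values, low_hz=220.0, high_hz=880.0):
    """Linearly map data values onto a frequency range in hertz."""
    vmin, vmax = min(values), max(values)
    span = vmax - vmin or 1.0  # avoid division by zero for flat data
    return [low_hz + (v - vmin) / span * (high_hz - low_hz) for v in values]

# A toy series of temperature readings; the hottest reading lands at the
# top of the pitch range, the coolest at the bottom.
temperatures = [14.2, 14.8, 15.1, 16.0, 15.4]
pitches = map_to_pitch(temperatures)
```

A real system would hand these frequencies to a synthesizer; the mapping itself, though, is this simple.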
But it’s not just an interesting exercise. Studies have shown that we process auditory information faster than visual information. Sonification, or audification, also has the potential to level the playing field for visually impaired analysts. Furthermore, it opens the possibility for perceiving additional information that visual data presentation alone may miss.
Sonification of Trading Data
One unlikely-sounding area where data sonification is already making waves is in financial trading. In a field where fast processing, quick analysis, and immediate action are paramount, audification is a natural fit.
Open-source software environments such as Pure Data and SuperCollider can already apply sophisticated algorithms and real-time processing to convert raw financial figures into meaningful auditory cues. These systems use various mapping techniques to transform information such as stock prices or market volatility into sound parameters like pitch, volume, and rhythm. Some systems also use machine learning models to predict patterns.
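To make this kind of two-parameter mapping concrete, here is a minimal Python sketch of the idea. The function name, window size, and scaling are illustrative assumptions, not how Pure Data or SuperCollider actually implement it: the price level drives pitch, while a rolling volatility estimate drives loudness, so a turbulent market literally gets louder.

```python
# Hedged sketch of the mapping idea behind trading sonification: price sets
# pitch, rolling volatility sets loudness. Illustrative Python, not actual
# Pure Data or SuperCollider code.

from statistics import pstdev

def sonify_ticks(prices, window=3, low_hz=200.0, high_hz=1000.0):
    """Return (frequency_hz, loudness_0_to_1) cues for a price series."""
    lo, hi = min(prices), max(prices)
    span = hi - lo or 1.0
    cues = []
    for i, p in enumerate(prices):
        # Price level -> pitch, mapped linearly onto the frequency range.
        freq = low_hz + (p - lo) / span * (high_hz - low_hz)
        # Volatility over a short trailing window -> loudness, capped at 1.
        recent = prices[max(0, i - window + 1): i + 1]
        vol = pstdev(recent) if len(recent) > 1 else 0.0
        loudness = min(1.0, vol / span * 4)
        cues.append((freq, loudness))
    return cues

cues = sonify_ticks([101.0, 101.2, 100.8, 103.5, 99.9])
```

Each cue could then drive an oscillator in real time; the first tick is silent because no volatility history exists yet.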
Proponents of these systems hold that audification makes complex data sets more accessible. Adjustable granularity allows for real-time updates as well as compression of data over longer periods, depending on traders’ needs, and layered data allows for deeper exploration.
Representing Climate Data With Sound
Sonification of climate data has led to some fascinating collaborations between art and science, bringing data to life in new ways for the general public.
At SXSW 2025, for example, the sci-art outreach program System Sounds, in conjunction with NASA’s Jet Propulsion Laboratory, presented a “Sonification Wall.” This interactive exhibit let visitors explore climate data such as ocean currents and temperature through sound and motion. The display mapped water temperature to pitch–higher temperatures were represented by higher pitches–and current speed to volume and intensity.
Examples of artistic interpretation of sonified climate data are many and varied, including cellist Daniel Crawford’s “A Song for Our Warming Planet,” created using sonified surface temperature data from NASA’s Goddard Institute for Space Studies; the work of pianist Treesong; and the compositions of Jamie Perera.
Making abstract data, such as climate data, accessible to non-scientists can not only make for a more informed public, but it can also foster an interest in science in future generations.
Data Sonification in Astronomy
Astronomical observatories already capture data using a variety of technologies, including infrared and X-ray telescopes. In 2020, NASA’s Chandra X-ray Observatory and System Sounds joined forces to sonify astronomical data digitally. The project maps observational data from telescopes such as Chandra onto frequencies within the range of human hearing. The online exhibit “A Universe of Sound” contains a variety of sonifications of data captured by telescopes, from the Perseus Cluster to the Crab Nebula.
NASA has also sonified images from the Hubble Space Telescope. This project mapped Hubble image data to sound elements, so that visitors to NASA’s website can “hear” a large variety of nebulae, galaxies, black holes, and other objects. It’s entertaining, but, importantly, it could also provide a way of perceiving patterns in the data that visual analysis alone might miss.
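One common image-sonification scheme, broadly similar in spirit to what such projects describe, scans the image left to right: each column of pixels becomes one moment of sound, with vertical position mapped to pitch and brightness mapped to loudness. The Python sketch below is an assumption for illustration, not NASA’s actual pipeline.

```python
# Hedged sketch of a left-to-right image scan: one column of pixel
# brightness values (0..1, top to bottom) becomes a "chord" of notes,
# where higher rows sound higher and brighter pixels sound louder.

def sonify_column(column, low_hz=200.0, high_hz=1600.0, threshold=0.2):
    """Turn one image column into (frequency_hz, loudness) pairs,
    keeping only pixels brighter than the threshold."""
    n = len(column)
    notes = []
    for row, brightness in enumerate(column):
        if brightness >= threshold:
            # Smaller row index = nearer the top of the image = higher pitch.
            frac = 1.0 - row / (n - 1) if n > 1 else 0.5
            freq = low_hz + frac * (high_hz - low_hz)
            notes.append((freq, brightness))
    return notes

# A toy 4-pixel column: a bright pixel at the top, a dimmer one at the bottom.
notes = sonify_column([0.9, 0.0, 0.1, 0.5])
```

Playing successive columns in sequence turns the whole image into a short piece of audio, with bright features in the upper part of the frame ringing out as high notes.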
When it comes to picking out important information from a crowded field of gathered data, the combination of visual and auditory data can be especially powerful.
In addition, taking in auditory data has many benefits for learners, whether or not they are fully sighted. Dr. Kimberly Arcand, a visualization scientist at the Chandra X-ray Observatory, conducted a study of astronomical data sonification with both blind or low-vision and sighted participants. Both groups reported heightened enjoyment, greater learning, and a desire to learn more about the subject.
Sonification of Healthcare Data
Auditory displays have long been a staple of healthcare, from warning alarms to monitors, and beyond. But researchers are looking into using sonification of healthcare data to explore the data itself–as well as to forge new paths for diagnosis, monitoring, treatment, and self-treatment.
Researchers from the University of Victoria recently wrote about a number of uses of sonification in medicine, using parameter mapping. The applications included using sonified electrocardiogram data to diagnose and monitor cardiac pathologies, as well as sonifying the internal workings of artificial neural networks for melanoma diagnosis. Researchers also cited a prototype system that involves musical sonification to support biofeedback for movement rehabilitation.
Exploration is also underway into data sonification for patient usage. Two researchers from the University of Virginia looked into the representation of self-reported health data through music. The study found clear associations between perceived health and different musical features, and concluded that personalized music models are a valid way to present health data. They believe their findings can have applications in behavior management via personal devices.
At this point, future applications seem limitless. Combined with the data-crunching power of artificial intelligence, who knows how far this new method of data representation could take us?