Stumbling Upon Data Sonification When I Fused My Passion for Music with Coding

November 16, 2021

Like many graduate students in the MIDS program who are also full-time working professionals, I return to campus to seek knowledge and satisfy my intellectual curiosity in information and data science. It has become part of a lifelong learning pursuit that lets me constantly apply what I learn back into the real world. Along the way, I never forget that it is also important to have fun with science by combining new knowledge with my own passions in arts and music in whatever ways possible. For nearly a decade, I have been helping clients in their digital transformation journeys and creating delightful user experiences in the consumer business sector as a management consultant. I immerse myself in what I would call a “left brain meets right brain” environment, where I often need to balance technicality, aesthetics, functionality, and accessibility when delivering web/mobile solutions used by large user bases, sometimes in the millions. I have seen great things happen time and time again when bringing together disciplines that may not seem obviously connected. I therefore owe my desire to fuse arts (the right – creativity and intuition) and science (the left – analysis and logic) to several things: my current profession, which urges me to be innovative by venturing into new arenas, my engineering upbringing, and my weekends learning how to play the violin as an adult. In this blog, I would like to share how my own experiment in combining music with coding led me to explore a research area called data sonification.

What is data sonification, you ask? Simply put, it is the practice of turning data into sound in a rule-governed way. A formal definition drawn from The Sonification Handbook by Thomas Hermann et al. describes sonification as the transformation of data relations into perceived relations in an acoustic signal for the purposes of facilitating communication or interpretation [1]. While most of us are familiar with conventional data visualization techniques that present data using visual elements such as charts, graphs, and maps to help us understand patterns, trends, and outliers, data sonification leverages our auditory sense to present data in a whole new way. Just as shapes, colors, and lines depict differences in data visualization, timbre, volume, pitch, tempo, and duration are used to achieve similar effects in data sonification.
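To make this kind of parameter mapping concrete, here is a minimal sketch in plain Python, not taken from any particular library or project. The data series, the 220–880 Hz frequency range, and the output file name are my own assumptions; it simply maps a short series of numbers to pitches and renders them as sine tones in a WAV file.

```python
import wave
import numpy as np

# A minimal "parameter mapping" sonification sketch (illustrative only):
# each data value is mapped to a pitch, so rising values are heard as rising
# tones, much like a line rising on a chart.
data = [3, 5, 8, 13, 9, 4, 2]          # any numeric series, e.g. daily counts
sample_rate = 44100
note_seconds = 0.4

lo, hi = min(data), max(data)
# Map each value linearly onto a frequency range of 220-880 Hz (A3 to A5).
freqs = [220 + (v - lo) / (hi - lo) * (880 - 220) for v in data]

tones = []
for f in freqs:
    t = np.linspace(0, note_seconds, int(sample_rate * note_seconds), endpoint=False)
    tone = 0.5 * np.sin(2 * np.pi * f * t)
    # Short fade-in/out on each note to avoid clicks at the note boundaries.
    fade = int(0.01 * sample_rate)
    envelope = np.ones_like(tone)
    envelope[:fade] = np.linspace(0, 1, fade)
    envelope[-fade:] = np.linspace(1, 0, fade)
    tones.append(tone * envelope)

signal = np.concatenate(tones)
with wave.open("sonified.wav", "wb") as wav:
    wav.setnchannels(1)
    wav.setsampwidth(2)                # 16-bit samples
    wav.setframerate(sample_rate)
    wav.writeframes((signal * 32767).astype(np.int16).tobytes())
```

Swapping the mapped parameter from pitch to, say, volume or tempo would give the other auditory encodings mentioned above.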

It may sound like a novel idea to some; however, data sonification is not a new concept. Sonification as a field was formalized by the International Community for Auditory Display (ICAD) back in 1992 [2]. Since then, scientists, musicians, and artists have worked to popularize the use of sound as an alternative or complementary way to represent data alongside visual representations.

Let’s walk through some very simple examples of data sonification from our daily lives: an alarm clock sounding at 6:00 a.m., a kettle whistling on the stove, announcement chimes in the train station, the beeps from a microwave; the list goes on. What these examples have in common is that they all convey information. They tell us, respectively, that it is time to wake up, the water has boiled, passengers need to pay attention, and the food is ready. You can also think of a film score as a form of sonification that, along with sound effects and dialogue, expressively and artfully conveys the emotions of a scene.

Data sonification is particularly effective at conveying temporal information. It can also convey information that cannot easily be visualized. It has been applied in many scientific fields such as astronomy, seismology, medicine, and beyond. NASA, for example, has produced a series of data sonifications by turning astronomical images from the Chandra X-ray Observatory and other telescopes into sound. Here is a clip of sounds from around the Milky Way. The Seismic Sound Lab at the Lamont-Doherty Earth Observatory created a ground-motion audio visualization that lets viewers experience the earthquakes that hit Haiti in 2010 and Tohoku in 2011 as if from within planet Earth, by converting energy waves into sounds and pairing them with visuals. The audiovisual show, called SeismoDome, was on display at the Hayden Planetarium at the American Museum of Natural History.

More recently, data sonification has also been used in data journalism to illustrate current events through sound. The BBC published an article that used data sonification to turn the scale of Covid’s death toll across the globe into an audiovisual animation. Listen to what the pandemic sounds like in each country here.

An intriguing application of sonification comes from Neil Harbisson, best known as a cyborg artist with an antenna implanted in his skull that delivers information to him as audible vibrations. Completely colorblind, he demonstrated in a TED talk how his eyeborg device allows him to hear colors by transposing the light frequencies of color hues into sound frequencies [3].

For an application of data sonification more relevant to social science, Brian Foo has created an impressive sonification of income inequality along the New York City subway. The soundtrack emulates a ride on the 2 Train through three boroughs: Brooklyn, Manhattan, and the Bronx. At any given time, the quantity and dynamics of the song’s instruments correspond to the median household income of the area the train is passing through [4]. Have a listen here.

I hope by now I have built some excitement around what data sonification can do in vastly different realms. Now you must be wondering: what does coding have to do with all of this? I had the same question earlier this year. In fact, I wasn’t even aware that data sonification was a research area until I did the literature review above. That came after I completed my personal Python programming project as part of the MIDS curriculum. It all began when I pondered, while sharpening my Python skills, how neat it would be to make sounds with code. As I researched techniques and packages that would let me do just that, I was stoked to find that it is entirely possible. In fact, there are vibrant communities of hobbyists and enthusiasts who devote their time and energy to doing exactly this. In the techno music scene, live coding is a type of performance art featured at events such as algoraves (the term comes from algorithm and rave), in which the performer creates music by programming and reprogramming a synthesizer as the composition plays [5]. One of the most popular open-source tools for live coding is the Ruby-based Sonic Pi, created by Sam Aaron. It provides an intuitive front end to SuperCollider, a sound synthesis engine that has served as the basis of many electronic music and acoustic research projects for over two decades [6].

As for my personal project, I opted to create a Python building block that plays back sounds from a string input encoded in a user-defined format. This gives me maximum flexibility to turn any input string of alphanumeric characters into sounds or melodies, with a layer of data augmentation or processing of my choice. Taking some inspiration from how Nokia ringtones were annotated (a throwback to the 90s), I was able to turn any given input string of alphanumeric characters into melodic sounds by mapping it to octaves, frequencies, and durations. The mapped annotation looks something like the below for Ode to Joy:

4E1-4E1-4F1-4G1-4G1-4F1-4E1-4D1-4C1-4C1-4D1-4E1-4E1-4D1-4D2

In the above, each note consists of three parts that denote the octave, the musical note, and the duration, and the notes are separated by dashes. The first note, for example, maps to the note E in octave 4, with a frequency of roughly 329.6 Hz, for a duration of 1 unit. While this annotation is only one example of how a string could be mapped to frequencies and other sound parameters of choice, there are limitless variations of mapping and augmentation that could be performed on an input.
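As an illustration of how such an annotation might be decoded, here is a simplified Python sketch. It is not the actual project code: the function name, the regular expression, and the assumption of equal temperament tuned to A4 = 440 Hz are mine, but it reproduces the mapping described above, with E in octave 4 coming out at about 329.6 Hz.

```python
import re

# Equal-temperament semitone offsets relative to C within an octave.
SEMITONES = {"C": 0, "C#": 1, "D": 2, "D#": 3, "E": 4, "F": 5,
             "F#": 6, "G": 7, "G#": 8, "A": 9, "A#": 10, "B": 11}

def decode(melody: str):
    """Sketch only (not the actual project code): turn a dash-separated string
    like '4E1-4F1-4G2' into (frequency_hz, duration_units) pairs."""
    notes = []
    for token in melody.split("-"):
        octave, name, duration = re.fullmatch(r"(\d)([A-G]#?)(\d+)", token).groups()
        # Distance in semitones from A4 (440 Hz), the usual tuning reference.
        semitones_from_a4 = (int(octave) - 4) * 12 + SEMITONES[name] - SEMITONES["A"]
        freq = 440.0 * 2 ** (semitones_from_a4 / 12)
        notes.append((round(freq, 2), int(duration)))
    return notes

ode_to_joy = "4E1-4E1-4F1-4G1-4G1-4F1-4E1-4D1-4C1-4C1-4D1-4E1-4E1-4D1-4D2"
print(decode(ode_to_joy))
# First pair: (329.63, 1) -> the note E in octave 4 for 1 duration unit
```

The resulting (frequency, duration) pairs could then be fed into a tone generator like the earlier sketch to produce the melody.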

I look forward to combining other tools with the concept I have established thus far and applying it to different types of inputs to create sounds we have never heard before, all with code. The possibilities are endless. I hope you have learned something about data sonification today.

References

[1] T. Hermann, A. Hunt, and J. G. Neuhoff, The Sonification Handbook. Berlin: Logos Verlag, 2011, pp. 29-52, ISBN 978-3-8325-2819-5.

[2] C. Beans, “Science and Culture: Musicians join scientists to explore data through sound,” Proc. Natl. Acad. Sci., vol. 114, no. 18, pp. 4563-4565, May 2017, doi: 10.1073/pnas.1705325114.

[3] H. Walters, “The sound of color: Neil Harbisson’s talk visualized,” ideas.ted.com, Jul. 11, 2013. Available: https://ideas.ted.com/the-sound-of-color-neil-harbissons-talk-visualized/

[4] B. Foo, “An introduction to Data-Driven DJ,” datadrivendj.com. Available: https://datadrivendj.com/tracks/subway/

[5] N. Stroehle, “Algoraves put live performance into programming,” SXSW Magazine, Jul. 31, 2019. Available: https://www.sxsw.com/world/2019/algoraves-put-live-performance-into-programming/

[6] S. Cass, “Illuminating musical code: Program an electronic music performance in real time,” IEEE Spectrum, vol. 56, no. 9, pp. 14-15, Sept. 2019, doi: 10.1109/MSPEC.2019.8818581.