In ‘Humanities Approaches to Graphical Display’, Johanna Drucker famously argues that ‘data’ are not things given but things captured and transformed: visualization is a creative act. Yet we remain heavily invested in ‘seeing’. Researchers have found that, for some kinds of patterns, humans can make sense of data more readily by ear than through two-dimensional graphics. Taking huge datasets off the screen and sonifying them can make them far more comprehensible: just as a wrong note stands out when you hear music, anomalies stand out when you hear data.
The Oxford English Dictionary defines sonification as ‘the use of non-speech sound to convey quantifiable information or represent data, typically as the output from an electronic device; the conversion of data into sound for this purpose.’ For a technique to count as sonification, it must meet certain conditions, notably reproducibility and intelligibility. Reproducibility means that different researchers, using the same datasets and the same methods of transformation, obtain the same results. Intelligibility means that objective elements of the original data are systematically reflected in the resulting sound.
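These two conditions can be made concrete with a tiny sketch. The function below is a hypothetical illustration, not any standard sonification algorithm: it deterministically maps data values onto a pitch range, so identical inputs always produce identical pitches (reproducibility), and larger values always map to higher pitches (intelligibility). The 220–880 Hz range is an arbitrary assumption.

```python
def pitch_map(values, low_hz=220.0, high_hz=880.0):
    """Deterministically map data values to frequencies in Hz.

    Reproducibility: the same values always yield the same pitches.
    Intelligibility: larger values map to systematically higher pitches.
    """
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # guard against constant data
    return [low_hz + (v - lo) / span * (high_hz - low_hz) for v in values]

data = [2.0, 5.0, 8.0]
print(pitch_map(data))  # smallest value -> 220 Hz, largest -> 880 Hz
```

Because the mapping is a pure function of the data, two researchers running it on the same dataset will hear exactly the same thing.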
Sonification can offer a more intuitive grasp of scientific data than a visual presentation alone. Technical data such as stock-price fluctuations or seismographic readings can simply be listened to.
Testing the Theory
Whether sonification enhances overall comprehension is what one project team is now testing. The team built a 129-loudspeaker array installed in Virginia Tech’s space lab, inside a giant immersive cube. Housing one of the largest multichannel audio systems in the world, this cube renders sound in an immersive 360-degree, three-dimensional format.
The test material in this lab is a set of datasets on Earth’s upper atmosphere, with each portion of the data converted to a unique sound. Variation among the audio pieces is introduced by adjusting parameters such as pitch and amplitude (volume). Each of the Cube’s 129 speakers represents a section of the globe’s atmosphere, arranged so that, like a hemisphere, the audio is projected in a half-dome pattern. Team members wander the Cube while operating an interface that lets them review the audio files by rewinding, slowing them down, and so on. The interface, in turn, captures user-study data that is later analyzed.
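The core idea of converting each portion of data to a distinct sound can be sketched in a few lines. This is a minimal illustration, not the Virginia Tech team’s actual pipeline: it maps each value in a series to a short pure tone whose pitch tracks the value, and writes the result to a WAV file using only the Python standard library. The frequency range, note duration, and sample rate are all illustrative assumptions.

```python
import math
import struct
import wave

def sonify(values, out_path="sonified.wav", rate=44100, note_dur=0.2,
           low_hz=220.0, high_hz=880.0):
    """Render each data value as a short sine tone; pitch scales with value."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0  # avoid divide-by-zero for constant data
    frames = bytearray()
    for v in values:
        # Linear map: data value -> frequency in [low_hz, high_hz]
        freq = low_hz + (v - lo) / span * (high_hz - low_hz)
        for i in range(int(rate * note_dur)):
            sample = math.sin(2 * math.pi * freq * i / rate)
            frames += struct.pack("<h", int(sample * 32767 * 0.8))
    with wave.open(out_path, "wb") as w:
        w.setnchannels(1)   # mono
        w.setsampwidth(2)   # 16-bit samples
        w.setframerate(rate)
        w.writeframes(bytes(frames))
    return out_path

# Example: a rising-then-falling series becomes a rising-then-falling melody
sonify([1, 3, 5, 9, 5, 3, 1], "demo.wav")
```

Playing the file back, an anomaly in the series would be audible as a note that jumps out of the melodic contour, which is exactly the effect the article describes.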
According to the Virginia Tech team, “Users experience spatial sound, which means they can hear everything around them. Sounds are actually placed in specific spots in the room.” In an article on the university’s website, Ivica Ico Bukvic, professor of music composition and multimedia in Virginia Tech’s College of Liberal Arts and Human Sciences, says it makes perfect sense to go beyond the usual 2-D graphical models of information and invent other creative forms of representing data.
Other Works in This Field
Wanda Diaz Merced, an astrophysicist at the South African Astronomical Observatory in Cape Town, lost her vision and had to discontinue her work with visual data. She collaborated with programmers to find a way to listen to data instead, which led her to a discovery about high-energy stellar explosions: she found the presence of electromagnetic waves produced during energy exchanges between particles, something no one had discovered just by looking at graphs.
‘Microbial Bebop’, created by the biologist Peter Larsen, converts microbial-diversity data from different locations into dramatically changing music, which swells to represent the abundance of a particular species.
Humans respond to music, and young children should be encouraged to think of science not just as something that is seen but as something that can be heard. Much like our sight, our hearing is primed to pick up minute changes.