Collection No 5
For all the familiar benefits of visualizing data, there are limitations to relying on a single sense to gain understanding.
Humans are built to process complexity. Indeed, we face unprecedented levels of complexity practically every moment of every day. While this flow can be overwhelming, humans have evolved to make sense of the deluge. Our capacity to process complexity is the unified sum of our senses: each sense, working in harmony with the others, pulls in loads of information that our brain pieces together to help us interpret the endlessly intricate narrative unfolding around us. Now, new types of digital sensing are entering this already busy flow. But while the data we gather through non-human sensing methods, such as digital sensors, is just as complex as the data our bodies process, we typically interpret it through a single sense: vision.
In the world of data representation, visualization rules. We see the occasional sonic rendering of a data set, but visualization dominates, with good reason: the tools we have developed to simplify and render complex data sets visually are highly effective at helping us identify trends. Yet for all the familiar benefits of visualizing data, there are limitations to relying on a single sense to gain understanding. For example, while our brains are wired for high-capacity intake and processing of imagery, that information is cached in a memory bank that decays quickly; moment to moment, this part of our visual memory is constantly overwritten with new inputs. Contrast this with the way we process and remember smells. Our sense of smell is processed and stored in sync with our long-term memory, which is why it's sometimes said that we never forget a smell, and why many of our oldest memories are attached to scents. This diversity of sensory experience raises the question: why not tap into secondary senses to present data more effectively?
Adding sound, smell, taste, or touch to sight would intensify the experience of data and likely add nuance, with more impact than any single mode of representation. Just as sight gives us color, shape, size, brightness, and space to work with, our other senses offer their own arrays of variables with which to represent different aspects of data. With sound, there are pitch, tone, volume, frequency, and rhythm. With touch, there are texture, weight, pressure, temperature, and materiality. Our senses of smell and taste are closely linked, but flavor and scent can still be used independently as well as together. The following examples offer a glimpse of the emerging potential of multi-sensory data representation.
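To make one of these mappings concrete, here is a minimal sonification sketch in Python. It maps a data series onto pitch, the most familiar sonic variable mentioned above: values are normalized into a frequency range and each point becomes a short sine tone in a WAV file. The frequency range, tone length, and other parameters are illustrative choices, not prescriptions.

```python
import math
import struct
import wave

RATE = 44100                     # samples per second
TONE_SECONDS = 0.25              # duration of each data point's tone
LOW_HZ, HIGH_HZ = 220.0, 880.0   # map the data range onto two octaves

def sonify(values, path="series.wav"):
    """Write one short sine tone per data point, pitch tracking the value."""
    lo, hi = min(values), max(values)
    span = (hi - lo) or 1.0      # avoid dividing by zero for flat series
    frames = bytearray()
    for v in values:
        # Linearly map the value into the chosen frequency range.
        freq = LOW_HZ + (v - lo) / span * (HIGH_HZ - LOW_HZ)
        for i in range(int(RATE * TONE_SECONDS)):
            sample = math.sin(2 * math.pi * freq * i / RATE)
            frames += struct.pack("<h", int(sample * 32767 * 0.5))
    with wave.open(path, "wb") as w:
        w.setnchannels(1)
        w.setsampwidth(2)        # 16-bit samples
        w.setframerate(RATE)
        w.writeframes(bytes(frames))
    return path
```

Played back, a rising series sweeps upward in pitch and a noisy series jitters, so trends and outliers become audible rather than visible. Swapping pitch for volume or rhythm in the same loop would exercise the other sonic variables.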
Scientists in Alaska are recording the sound of volcanoes prior to eruption. Typically, volcanic activity is monitored through the subtle physical tremors of day-to-day seismic activity. By listening instead, scientists have found a distinct sonic pattern that precedes a volcanic eruption: a tea-kettle-like scream that follows a rhythmic, drum-like buildup. While the researchers are not entirely sure where the sounds originate, the acceleration and deceleration of the rhythm help identify the different activities leading up to the eruption.
Using geolocation data, Brian House created a vinyl album that audibly represents all of the places he visited over a single year. Each location within a city is represented by a note in a musical scale, while each city is rendered in a related musical key. When the record plays, each revolution recounts the locations visited in a single day, functioning as a 24-hour clock. As the record spins and the sounds play, patterns of behavior and movement emerge: work days, weekends, vacations, and holidays can all be distinguished from one another. By dissociating the locations from a map, we begin to hear patterns that a traditional visual representation might have obscured.
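The essence of this kind of mapping, cities to keys and individual places to notes within a key, can be sketched in a few lines of Python. To be clear, this is an invented toy scheme for illustration, not House's actual method: here each distinct place gets a degree of the city's major scale in order of first appearance, so a day's visits become a short melody.

```python
NOTES = ["C", "C#", "D", "D#", "E", "F", "F#", "G", "G#", "A", "A#", "B"]
MAJOR_STEPS = [0, 2, 4, 5, 7, 9, 11]  # semitone pattern of a major scale

def melody_for_day(city_key, places):
    """Map a day's ordered place IDs to note names in the city's major key."""
    root = NOTES.index(city_key)
    scale = [NOTES[(root + s) % 12] for s in MAJOR_STEPS]
    # Assign scale degrees to places in order of first appearance,
    # so returning to a place repeats its note.
    degree = {}
    melody = []
    for p in places:
        if p not in degree:
            degree[p] = len(degree) % len(scale)
        melody.append(scale[degree[p]])
    return melody
```

With this scheme, a routine day of home, cafe, home in a "C"-keyed city plays as C, D, C; the repetition of familiar places is exactly what makes commutes and holidays sound different from one another.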
GhostFood, a project by artists Miriam Simun and Miriam Songster, consists of a wearable device that emits familiar, food-related scents, complemented by an odorless, "edible textural analogue" that simulates the eating experience. By recreating the smell and the chewing, the device leads a person's mind to construct the perception of flavor, even in the absence of food. Exploiting this interplay of taste, texture, and smell by mapping each sense to a distinct data dimension could generate a powerful multi-modal data experience unreachable by any other combination of senses.
Artist Amy Radcliffe is exploring the relationship between emotion and smell with the Scent-ography device, an analog system designed to capture and reproduce odors. While the device is speculative, the principles it illustrates point to the power of capturing and replicating smells. Given how tightly our emotions are linked to our olfactory sense, capturing and replicating the scent of objects and places could be enormously potent in creating impactful data experiences.

As we move into the extra-visual era of data representation, it is important to remember that the goal is not simply to find the best alternative or complement to visualization. Rather, the ideal is to experience the data more richly. Anyone can take a data set, map its parameters to different sensory modes, and begin exploring the data and uncovering new insights. Experiencing data is what humans have evolved to do. Yet, in terms of our ability to understand and use data in meaningful ways, we have only scratched the surface. Moving past visual representation offers new opportunities to discover and communicate insights from data.