Sonic Plasticity, an introduction

  The Honk-Tweet studio - Newark, New Jersey

Sonic Plasticity proposes the use of sound as a malleable material — one that can be stretched in all dimensions, encompassing height, width, and depth, with curves, edges, and changing geometries. The works produced within this framework serve as the context in which a person is encouraged to (a) contemplate the act of listening as a phenomenon; (b) be actively present in the moment, observing the inner cognitive and emotional challenges involved in receiving the new art form in detail; and ultimately (c) overcome such challenges, extending the person’s ability to sense the world, thus widening the sensing spectrum that encompasses the human experience.

Sound is the result of dynamic actions, periodic vibrations, sudden impacts, or oscillating resonances. Everyday sounds like clapping, dragging a chair, or a barking dog produce a pressure wave that propagates from the source, encountering multiple objects, all made of different materials. At each collision point, the wave bounces and changes direction, forming new, reflected waves. These interactions are encoded into the waves that finally make our eardrums resonate, waves that carry information not only about the source, the event that originated them, but also about the space surrounding it.
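How a reflection encodes the space around a source can be sketched with a little arithmetic: the reflected path is longer than the direct path, so the reflected wave arrives later, and that delay depends on the room's geometry. The sketch below is purely illustrative; the geometry (a clap two meters from the listener, with a single reflecting wall three meters behind the source) and the rounded speed of sound are assumptions, not measurements of any particular space.

```python
import math

SPEED_OF_SOUND = 343.0  # m/s, approximate speed of sound in air at room temperature


def arrival_time(path_length_m: float) -> float:
    """Seconds it takes a pressure wave to travel a given path length."""
    return path_length_m / SPEED_OF_SOUND


# Hypothetical geometry: a clap 2 m from the listener,
# with a wall 3 m behind the source acting as a single reflector.
direct_path = 2.0                      # source -> listener
reflected_path = 3.0 + (3.0 + 2.0)     # source -> wall -> back past the source -> listener

direct_t = arrival_time(direct_path)
reflected_t = arrival_time(reflected_path)

print(f"direct wave arrives after    {direct_t * 1000:.1f} ms")
print(f"reflected wave arrives after {reflected_t * 1000:.1f} ms")
print(f"gap between the two:         {(reflected_t - direct_t) * 1000:.1f} ms")
```

That gap of a few milliseconds, multiplied across every reflecting surface in a room, is part of what our ears use to infer the shape and size of the space.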

Our mind is constantly scanning for sources of sound around us. The simple act of listening allows us to estimate where sound-emitting sources are in relation to ourselves, as well as whether they are moving, which direction they are moving in, and whether they represent a possible threat. Our ability to locate a sound’s source through listening has evolved and developed over thousands of years and has been fundamental to our survival.

Sound localization is very useful in our daily lives but is rarely considered when creating artistic experiences. Audio engineers use studio processes like panning and reverb to recreate the placement of musicians during a performance, recording their individual sonic contributions and placing them within the stereo field formed by two speakers. When this is done correctly, it can make the listener feel closer to the music they are experiencing, almost as if they were in the studio.
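To make the idea of "placing" a recorded sound within the stereo field concrete, here is a minimal sketch of one common panning law, constant-power panning. It illustrates the general technique rather than the specific processing any particular studio uses; the function name and the pan value are invented for the example.

```python
import math


def constant_power_pan(mono_sample: float, pan: float) -> tuple[float, float]:
    """
    Place a mono sample in the stereo field.

    pan ranges from -1.0 (hard left) through 0.0 (center) to +1.0 (hard right).
    Constant-power panning keeps the perceived loudness roughly stable as the
    source moves across the stereo image.
    """
    angle = (pan + 1.0) * math.pi / 4.0   # map [-1, 1] onto [0, pi/2]
    left_gain = math.cos(angle)
    right_gain = math.sin(angle)
    return mono_sample * left_gain, mono_sample * right_gain


# A source panned slightly to the right of center.
left, right = constant_power_pan(1.0, 0.3)
print(f"left gain = {left:.3f}, right gain = {right:.3f}")
```

Applied sample by sample to a recorded instrument, a gain pair like this is what lets an engineer position that instrument anywhere between the two speakers.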

But once a sound has been recorded, it becomes dissociated from the source that produced it. This means that the dynamic event that initiated the sound is no longer responsible for creating the pressure wave described above. Instead, an electronic setup is needed to reproduce and amplify the recorded material. Our ability to identify the physical location of a sound produced through electronic means depends on the dimensions, surfaces, objects, materials, and environmental noise level of the listening space, as well as on speaker placement, panning technique, how sensitive and trained our ears are at decoding sonic spatial cues, and how willing we are to take in the experience with an open mind. The frequency balance of the sound being amplified also plays an important part in our ability to locate it. These factors interact with each other in very intricate ways, and fully describing such interactions requires in-depth interdisciplinary research.

To illustrate simply how intricately these factors interact, let’s consider a minimal setup: an audio player connected to a single speaker in a quiet, midsize, neutral-sounding space, such as a professional sound studio, a low-reverb college classroom, or a serene outdoor spot. Imagine a sound file being reproduced through such a setup. In this scenario we experience a match between where we see the speaker standing and where we perceive the sound to be coming from, and therefore we correctly attribute the source of the sound to the speaker. Some may think of this as an obvious assumption, but the relationship between the location of a speaker and our spatial perception of a sound’s location can quickly become counterintuitive.

Consider a different scenario: in the same space described above, an audio player is directly connected to two speakers. The same audio signal is sent to both speakers at the same volume level. Imagine that a listener is standing in front of the speakers, separated from each of them by roughly the same distance as that between the speakers, forming a triangle (see figure below). In this new scenario, a person would see two possible sound sources but hear one virtual source between the two speakers, as if the sound were coming from a place where there is no speaker!
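This phantom-center effect can be reproduced with any stereo signal whose two channels are identical. Below is a minimal sketch that writes a two-second stereo file containing the same sine tone at the same level in both channels; the sample rate, frequency, amplitude, and filename are arbitrary choices for the illustration. Played over two speakers arranged as described above, a listener at the apex of the triangle will typically hear a single source floating midway between them.

```python
import math
import struct
import wave

SAMPLE_RATE = 44100      # samples per second
DURATION_S = 2.0         # length of the test tone
FREQ_HZ = 440.0          # pitch of the test tone
AMPLITUDE = 0.4          # same level in both channels

# Write a stereo WAV file in which left and right carry the identical signal.
with wave.open("phantom_center.wav", "wb") as wav:
    wav.setnchannels(2)          # stereo
    wav.setsampwidth(2)          # 16-bit samples
    wav.setframerate(SAMPLE_RATE)

    frames = bytearray()
    for n in range(int(SAMPLE_RATE * DURATION_S)):
        sample = AMPLITUDE * math.sin(2.0 * math.pi * FREQ_HZ * n / SAMPLE_RATE)
        value = int(sample * 32767)
        frames += struct.pack("<hh", value, value)   # left channel == right channel

    wav.writeframes(bytes(frames))
```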
