Grand Central Immersion: Thesis Statement

How can I leverage the immersive psychoacoustic properties of 3D audio to build a rich and interactive storytelling experience via a location-specific, psychogeographic audio guide of NYC's iconic Grand Central Station?

Description:

When you put someone at the center of a story, they take more of an interest. By exploring spatialized audio cues and immersive audio design as tools to modify behavior within physical space, my Grand Central story will lead users through a delightful, interactive examination of the urban monument, one that reveals personal stories, historic moments, and psychogeographic critique.

Even audio guides that feature immersive sound are often limited by their static nature. And although sound is the most inherently immersive of our senses, immersive tech overwhelmingly focuses on the visual display. A VR headset experience offers high visual fidelity, but it restricts freedom of movement and detaches the user from physical reality.

Sound is popularly treated as a companion to the prioritized visual display; often it is an afterthought. Yet spatial audio cues powerfully alter our behavior inside VR headsets, helping us wayfind and discover - so why not ditch the HMD and spend longer in a more pleasant mixed reality?

This piece will interest fans of spatial sound design, audio-focused podcasts like Radiolab, NYC tourists and history buffs, and those more generally interested in exploring the perimeter of what augmented reality experiences can be.

My mixed-up reality experience will be headphone-based, developed in Unity for iPhone, and will let users stay inside it far longer than a visually oriented experience would. It will also be more distributable than an immersive theater piece could be, and it will explore the UX tools of mixed reality audio display in a location-aware manner that will be new to most users.

The key features will be a base layer of environmental binaural recordings captured in Grand Central Station and overlaid monaural audio assets placed virtually into the sound field, which users can move toward and away from. Real-time audio processing will realistically localize these sonic moments, allowing users to modify the depth and angle of sounds simply by walking around.
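
To make that layering concrete, here is a minimal Unity (C#) sketch of the two layers described above. The clip names, the anchor position, and the choice of HRTF spatializer plugin are all placeholder assumptions, not final design decisions:

```csharp
using UnityEngine;

// A minimal sketch of the two-layer sound field: a binaural bed plus
// a spatialized mono source. Assumes a spatializer plugin (e.g.
// Resonance Audio or the Oculus spatializer) is enabled in Unity's
// audio settings, and that clips are assigned in the Inspector.
public class GrandCentralSoundField : MonoBehaviour
{
    public AudioClip bedClip;    // binaural environmental recording (placeholder)
    public AudioClip organClip;  // monaural asset placed into the field (placeholder)
    public Vector3 organPosition = new Vector3(0f, 4f, 12f); // placeholder anchor

    void Start()
    {
        // Layer 1: the binaural bed plays back fully 2D, bypassing Unity's
        // 3D processing so the cues baked into the recording pass through intact.
        AudioSource bed = gameObject.AddComponent<AudioSource>();
        bed.clip = bedClip;
        bed.loop = true;
        bed.spatialBlend = 0f; // 0 = 2D: play the recording as captured
        bed.Play();

        // Layer 2: a mono source anchored in world space. As the listener
        // (the AudioListener on the camera) walks, Unity re-renders the
        // source's angle and distance in real time.
        GameObject anchor = new GameObject("OrganMoment");
        anchor.transform.position = organPosition;
        AudioSource organ = anchor.AddComponent<AudioSource>();
        organ.clip = organClip;
        organ.loop = true;
        organ.spatialBlend = 1f;  // 1 = fully 3D
        organ.spatialize = true;  // route through the HRTF spatializer plugin
        organ.rolloffMode = AudioRolloffMode.Logarithmic;
        organ.minDistance = 2f;   // full volume within ~2 m of the anchor
        organ.maxDistance = 40f;  // fades out across the hall
        organ.Play();
    }
}
```

Keeping the bed at a spatial blend of zero preserves the binaural cues already captured in the room, while the overlaid mono sources get their depth and angle computed live from the listener's movement - the behavior-modifying cue described above.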

Historical reenactment will be one poetic component of the experience. For example, Mary Lee Read, Grand Central's organist from 1928 to 1958, played "The Star-Spangled Banner" instead of her usual Christmas carols the day after the Pearl Harbor attack, bringing a hall full of commuters to a somber, dramatic standstill.

I expect the experience to have a 10-15 minute arc designed for a single user. Poetic prompts from the narrator will encourage playful voyeurism and offer historical and architectural exploration, perhaps culminating in an interaction with an unwitting stranger. I also hope to include a Voice UI component that allows the experience to remain entirely in the sonic realm, detached from the visual display after the experience is initiated.

Research Approach:


There is a magic trick that already exists: listening to prerecorded binaural sound on headphones within the space where the audio was captured. It is at once naturalistic and surreal, delightfully blurring the distinction between past and present and challenging our brains to reconcile the psychoacoustic cues we hear with what our eyes perceive. In this moment of blended reality lies a storytelling opportunity that combines a podcast-style narrative with a guided historical walking tour.

Much of the research into soundscapes, acoustic ecology, and psychogeography was already conducted in Marina Zurkow's Temporary Expert class. Magic Windows with Rui Pereira and Project Dev Studio with Danny Rozin familiarized me with Unity and the various SDKs and map APIs that can serve my project. I'm also currently in Dr. Roginska's 3D Audio class in NYU's Music Technology program, which will be a major influence on my thesis work.

Personal Statement:


Let me cite some things I love that drew me to this area of focus: principally, Janet Cardiff's Central Park audio walk "Her Long Black Hair" and her Alter Bahnhof video walk; "The Encounter," the binaural-audio one-man show on Broadway; dérives and the work of the psychogeographers (i.e., Guy Debord's writing, Patrick Keiller's films); and immersive podcasts such as Radiolab.

For the past 10 years I've dedicated my work to one thing: creating amazing experiences with music. As my performances have evolved to include increasingly complex technical choreography, I've become more interested in adding visual layers to the musicians' sound. Mostly this has meant editing video content to project behind the performers, adding meta-layers of meaning to the work.

At ITP my projects have often positioned the user as coauthor within a malleable soundscape, and this thesis project aspires to leap from those mediums of screen and visual display into the realm of the invisible and naturalistic. 

Some experts I've consulted with already:

Marina Zurkow, ITP - my Temporary Expert teacher from last semester and a sound-walk author herself.

Luke DuBois, NYU IDM - I'm currently doing an Independent Study with him; he will advise on the musical and location-specific considerations.

Dr. Agnieszka Roginska, NYU Music Technology - I'm currently taking her 3D Audio course, and she will consult directly on this critical element of my project.

Charlie Mydlarz and Mark Cartwright, post-doc researchers on SONYC, NYU CUSP's machine-learning noise pollution project - I'm in conversation with both of them.

Rui Pereira, ITP - I took his Magic Windows and Mixed-Up Realities course last semester, and I'm in conversation with him about the best ways to implement location-aware Unity applications in my project (see the sketch at the end of this list).

TK Broderick, ITP - I took his Immersive Listening course and continue to converse with him about spatial audio implementation in Unity.

Jean-Luc Cohen, NYU Music Technology - I took his Software Synthesis course last semester, and I will continue to meet with him regarding digital signal processing and procedural audio within Unity.
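
As a concrete starting point for those location-aware conversations, below is a minimal Unity (C#) sketch of one possible trigger using Unity's built-in LocationService. The coordinates, radius, and polling interval are placeholder assumptions, and since GPS is unreliable inside the terminal itself, the final piece may need beacons or manual calibration rather than raw GPS:

```csharp
using System.Collections;
using UnityEngine;

// A minimal sketch of a GPS-based audio trigger: when the listener
// comes within a radius of a target coordinate, a cue is released.
// Target and radius are placeholders, not final design values.
public class LocationTrigger : MonoBehaviour
{
    public AudioSource cueSource;       // the sound to release on arrival
    public double targetLat = 40.7527;  // approx. Grand Central (placeholder)
    public double targetLon = -73.9772;
    public float radiusMeters = 25f;

    IEnumerator Start()
    {
        if (!Input.location.isEnabledByUser) yield break; // user denied location access

        Input.location.Start(5f, 5f); // desired accuracy / update distance, in meters
        int maxWait = 20;
        while (Input.location.status == LocationServiceStatus.Initializing && maxWait-- > 0)
            yield return new WaitForSeconds(1f);
        if (Input.location.status != LocationServiceStatus.Running) yield break;

        while (true)
        {
            var fix = Input.location.lastData;
            if (DistanceMeters(fix.latitude, fix.longitude, targetLat, targetLon) < radiusMeters
                && !cueSource.isPlaying)
            {
                cueSource.Play(); // entered the zone: release the cue
            }
            yield return new WaitForSeconds(1f);
        }
    }

    // Equirectangular distance approximation, adequate at city-block scale.
    static float DistanceMeters(double lat1, double lon1, double lat2, double lon2)
    {
        const double R = 6371000.0; // Earth radius in meters
        double dLat = (lat2 - lat1) * Mathf.Deg2Rad;
        double dLon = (lon2 - lon1) * Mathf.Deg2Rad * System.Math.Cos(lat1 * Mathf.Deg2Rad);
        return (float)(R * System.Math.Sqrt(dLat * dLat + dLon * dLon));
    }
}
```

On iOS the build will also need a location usage description set in Unity's Player Settings before Input.location will start.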