Luke DuBois, professor. Chatting with him on Monday about the project.
Jean-Luc Cohen, my music tech professor for Sound Synthesis. I'm going to speak with him at more length next week about generative musical algorithms that could take my field recordings as input and then overlay the source material with a bit of musical Csound code. This is the way Eno has worked in the past: creating a palette that generates differently each time it runs, but always within the same set of rules. He recommended I look into R. Murray Schafer's The Tuning of the World (purchased), as well as the Csound Book's chapters on generative musical algorithms.
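As a way of thinking through that Eno-style approach before getting into Csound, here is a minimal hypothetical sketch in Python: a fixed palette of rules (pitch set, durations, rest probability) that produces a different, but rule-consistent, sequence on each run. The palette values and function names are my own assumptions for illustration, not anything from the Csound Book.

```python
import random

# Hypothetical generative palette: the rules stay fixed,
# but each run (each random seed) yields a different realization.
PALETTE = {
    "pitches": [60, 62, 64, 67, 69],   # C major pentatonic, as MIDI note numbers
    "durations": [0.5, 1.0, 2.0],      # note lengths in beats
    "rest_prob": 0.3,                  # chance any event is silence
}

def generate(n_events, seed=None):
    """Return a list of (pitch_or_None, duration) events drawn from the palette."""
    rng = random.Random(seed)
    events = []
    for _ in range(n_events):
        if rng.random() < PALETTE["rest_prob"]:
            pitch = None  # a rest
        else:
            pitch = rng.choice(PALETTE["pitches"])
        events.append((pitch, rng.choice(PALETTE["durations"])))
    return events
```

Running `generate()` with different seeds gives different pieces that all obey the same constraints, which is the property I'd want to keep when layering generated material over a field recording.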
Mark Cartwright, post doc researcher working at Steinhardt Music Technology program on SONYC noise pollution project...
SONYC is probably interested in building out some APIs down the road for people to play with their noise pollution machine-listening data.
DEP - Department of Environmental Protection, an NYC agency
Auditory Salience - hard to measure, unlike visual salience, where eye tracking can reveal what people are looking at. Within a sound field it's hard to tell what people are actually listening to, and difficult to tell what they may find annoying, disturbing, stimulating, etc.
www.ibm.com/watson/services/speech-to-text/ (demo: speech-to-text-demo.mybluemix.net/) ... one of many automatic speech-to-text APIs and services.
Peter Ablinger - voice-to-piano pieces
Marina Zurkow - Newtown Creek piece
Rebecca Solnit - writer
Janet Cardiff - sound walk artist, nytimes piece
Holladay brothers - site specific music apps
Chris Watson - field recordings and sonic collage
Pejk Malinowski - audio narrative work
Stephen Vitiello - Bells piece on the High Line
Zach Layton - Issue Project Room
Ryoji Ikeda - visual, site specific electronic music installations
Andy Goldsworthy - site specific land art