For my final project I'd like to return to the uncanny valley aesthetics of the Lyrebird voice synthesizer and combine them with a Max project I made previously in Live Image Processing and Performance.
I also see this as an exploration within my thesis project's area of investigation.
Here's the general scope and UX of the project.
A user plays my xylophone.
The different notes are heard and discerned by a microphone running into Max.
The output is two-fold: 1) a projection mapped to the bars of the xylophone, where each bar passes video when its corresponding note rings out above a certain volume threshold, and 2) additional sound assets triggered from Max with each detected hit.
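The per-bar gating logic could be sketched roughly as follows. This is a hypothetical illustration in Python, not the actual Max patch; the note names, amplitude scale, and threshold value are all assumptions for the sake of the example.

```python
# Sketch of per-bar hit detection: a bar "opens" (passes video and
# triggers a sample) when its note's amplitude crosses a threshold.
THRESHOLD = 0.3  # assumed normalized amplitude threshold

def detect_hits(band_amplitudes, threshold=THRESHOLD):
    """Given {note_name: amplitude} for one analysis frame, return
    the notes whose bars should open this frame."""
    return [note for note, amp in band_amplitudes.items() if amp >= threshold]

frame = {"C4": 0.05, "E4": 0.62, "G4": 0.41}
print(detect_hits(frame))  # -> ['E4', 'G4']
```

In Max this would more likely be a `peakamp~`-style envelope follower per band feeding a threshold comparison, but the decision rule is the same.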
The audio assets are Lyrebird samples of my voice, trained to different notes that are in harmony with their corresponding xylophone trigger notes. Ideally, these are Markov-chained notes, so that each struck note of the xylophone produces one of three or four possible notes in harmony.
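That Markov-style mapping could look something like the sketch below: a transition table keyed by the struck note, with weighted random selection among its three or four harmonizing voice samples. The specific notes and weights here are placeholders, not the project's actual harmonies.

```python
import random

# Hypothetical transition table: each struck xylophone note maps to a
# few harmonizing Lyrebird-voice notes, picked with these weights.
HARMONY = {
    "C4": (["E4", "G4", "C5"], [0.4, 0.4, 0.2]),
    "E4": (["G4", "B4", "E5"], [0.5, 0.3, 0.2]),
    "G4": (["B4", "D5", "G5"], [0.4, 0.3, 0.3]),
}

def pick_harmony(struck_note, rng=random):
    """Return one harmonizing note for the struck xylophone note."""
    notes, weights = HARMONY[struck_note]
    return rng.choices(notes, weights=weights, k=1)[0]
```

In Max this would likely be implemented with a `prob` or `table` object driving sample playback, but the idea is the same: the struck note is the chain's state, and the triggered voice sample is a weighted draw from that state's options.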
The video imagery is shots of me saying each word of the script while looking into the camera (and thus the user's eyes).
Here are examples of the work I've made that serve as foundations for this piece.
Xylophone + video projection:
Lyrebird Voice synthesizer with musical constraints: