In this paper we describe our responsive video performance, Deep Surrender, created using Cycling '74's Max/MSP and Jitter packages. Video parameters are manipulated in real time, using chroma-keying and colour-balance modification techniques to visualize the keyboard playing and vocal timbre of a live performer. We present the musical feature extraction process used to create a control system for the production, describe the mapping between audio and visual parameters, and discuss the artistic motivations behind the piece. © Springer-Verlag Berlin Heidelberg 2006.
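The abstract does not reproduce the authors' Max/MSP/Jitter patch, so as an illustration only, here is a minimal Python sketch of the general kind of audio-to-visual mapping described: extracting a timbre feature from an audio frame (the spectral centroid, a common brightness proxy; the original system's actual features are not specified here) and scaling it into a normalized colour-balance control. All function names and the mapping itself are hypothetical, not taken from the paper.

```python
import numpy as np

def spectral_centroid(frame, sr):
    """Spectral centroid in Hz: magnitude-weighted mean frequency,
    a common rough proxy for perceived brightness of timbre."""
    mags = np.abs(np.fft.rfft(frame))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    total = mags.sum()
    if total == 0.0:
        return 0.0
    return float((freqs * mags).sum() / total)

def centroid_to_colour_balance(centroid_hz, sr):
    """Hypothetical mapping: normalize the centroid by the Nyquist
    frequency to get a [0, 1] colour-balance control value."""
    return min(centroid_hz / (sr / 2.0), 1.0)

# Demo: a brighter (higher-frequency) tone yields a larger control value.
sr = 44100
t = np.linspace(0.0, 0.1, int(sr * 0.1), endpoint=False)
bright = np.sin(2 * np.pi * 4000 * t)   # 4 kHz sine
dark = np.sin(2 * np.pi * 200 * t)      # 200 Hz sine
cb_bright = centroid_to_colour_balance(spectral_centroid(bright, sr), sr)
cb_dark = centroid_to_colour_balance(spectral_centroid(dark, sr), sr)
```

In a real-time system such as the one the paper describes, a mapping like this would run per audio frame and drive a video parameter (e.g. a Jitter colour-balance operator) continuously.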
Taylor, R., & Boulanger, P. (2006). Deep surrender: Musically controlled responsive video. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4073 LNCS, pp. 62–69). Springer Verlag. https://doi.org/10.1007/11795018_6