Deep surrender: Musically controlled responsive video

Abstract

In this paper we describe our responsive video performance, Deep Surrender, created using Cycling '74's Max/MSP and Jitter packages. Video parameters are manipulated in real-time, using chroma-keying and colour balance modification techniques to visualize the keyboard playing and vocal timbre of a live performer. We present the musical feature extraction process used to create a control system for the production, describe the mapping between audio and visual parameters, and discuss the artistic motivations behind the piece.
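
The production itself was built in Max/MSP and Jitter; as a rough illustration of the kind of audio-to-visual mapping the abstract describes, the following Python sketch extracts two simple features from an audio frame (RMS level, and spectral centroid as a crude timbral-brightness proxy) and maps them to hypothetical colour-balance gains. The feature choices, value ranges, and mapping function are assumptions for illustration, not the authors' actual control system.

```python
# Illustrative sketch only, not the authors' Max/MSP/Jitter patch.
# Extract simple audio features from one frame and map them to
# hypothetical RGB colour-balance gains.
import numpy as np

SR = 44100      # sample rate (Hz), assumed
FRAME = 1024    # analysis frame length (samples), assumed

def extract_features(frame: np.ndarray, sr: int = SR):
    """Return (rms, spectral_centroid_hz) for one mono audio frame."""
    rms = float(np.sqrt(np.mean(frame ** 2)))
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    freqs = np.fft.rfftfreq(len(frame), d=1.0 / sr)
    centroid = float(np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12))
    return rms, centroid

def map_to_colour_balance(rms: float, centroid: float):
    """Map features to assumed RGB gain multipliers in [0.5, 1.5]."""
    loudness = np.clip(rms / 0.3, 0.0, 1.0)            # assumed full-scale RMS ~0.3
    brightness = np.clip(centroid / 4000.0, 0.0, 1.0)  # assumed 4 kHz centroid ceiling
    red_gain = 0.5 + loudness                           # louder playing -> warmer image
    blue_gain = 0.5 + brightness                        # brighter timbre -> cooler image
    green_gain = 1.0
    return red_gain, green_gain, blue_gain

if __name__ == "__main__":
    # Synthetic test frame: a 440 Hz tone with a little noise.
    t = np.arange(FRAME) / SR
    frame = 0.2 * np.sin(2 * np.pi * 440 * t) + 0.01 * np.random.randn(FRAME)
    rms, centroid = extract_features(frame)
    print(map_to_colour_balance(rms, centroid))
```

In a real-time setting these gains would be recomputed per analysis frame and fed to the video-processing stage (in the paper's case, Jitter objects performing chroma-keying and colour balance modification).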

Citation (APA)

Taylor, R., & Boulanger, P. (2006). Deep surrender: Musically controlled responsive video. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4073 LNCS, pp. 62–69). Springer Verlag. https://doi.org/10.1007/11795018_6
