Learning as performance: Autoencoding and generating dance movements in real time

Abstract

This paper describes the technology behind a performance in which human dancers interact with an “artificial” performer projected on a screen. The system learns movement patterns from the human dancers in real time. It can also generate novel movement sequences that go beyond what it has been taught, thereby serving as a source of inspiration for the human dancers, challenging their habits and usual boundaries and enabling a mutual exchange of movement ideas. It is central to the performance concept that the system’s learning process be perceptible to the audience. To this end, an autoencoder neural network is trained in real time on motion data captured live on stage. As training proceeds, a “pose map” emerges that the system explores in a kind of improvisational state. The paper shows how this method is applied in the performance and shares observations and lessons learned in the process.
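
The abstract does not specify the network architecture or the exploration mechanism, but the described pipeline (an autoencoder trained online on captured poses, then generating movement by traversing the learned “pose map”) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors’ implementation: the pose dimensionality, layer sizes, and the random-walk exploration of the latent space are all assumptions.

import torch
import torch.nn as nn

POSE_DIM = 60    # assumed: 20 joints x 3D coordinates from motion capture
LATENT_DIM = 8   # assumed size of the learned "pose map" (latent space)

class PoseAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(POSE_DIM, 32), nn.Tanh(),
            nn.Linear(32, LATENT_DIM),
        )
        self.decoder = nn.Sequential(
            nn.Linear(LATENT_DIM, 32), nn.Tanh(),
            nn.Linear(32, POSE_DIM),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = PoseAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(pose_batch):
    """One online training step on the latest poses captured on stage."""
    optimizer.zero_grad()
    loss = loss_fn(model(pose_batch), pose_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

def improvise(start_pose, steps=100, step_size=0.05):
    """Generate a novel movement sequence by a random walk in latent space,
    decoding each latent point back into a full-body pose. The random walk
    is one plausible way to 'explore' the pose map; the paper's actual
    exploration strategy may differ."""
    with torch.no_grad():
        z = model.encoder(start_pose)
        sequence = []
        for _ in range(steps):
            z = z + step_size * torch.randn_like(z)
            sequence.append(model.decoder(z))
        return torch.stack(sequence)

Because the decoder maps nearby latent points to similar poses, small steps in latent space yield smooth, dance-like transitions, while the accumulated drift produces sequences beyond the training material — consistent with the paper’s framing of the system as a source of novel movement ideas.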

Citation (APA)

Berman, A., & James, V. (2018). Learning as performance: Autoencoding and generating dance movements in real time. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10783 LNCS, pp. 256–266). Springer Verlag. https://doi.org/10.1007/978-3-319-77583-8_17
