Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders


Abstract

In many contexts, creating mappings for gestural interactions can form part of an artistic process. Creators seeking a mapping that is expressive, novel, and affords them a sense of authorship may not know how to program such a mapping in a signal-processing patch. Tools like Wekinator [1] and MIMIC [2] allow creators to use supervised machine learning to learn mappings from example input/output pairings. However, a creator may know a good mapping when they encounter it, yet start with little sense of what the inputs or outputs should be. We call this an open-ended mapping process. Addressing this need, we introduce the latent mapping, which leverages the latent space of an unsupervised machine learning algorithm, such as a Variational Autoencoder trained on a corpus of unlabelled gestural data from the creator. We illustrate this approach with Sonified Body, a system mapping full-body movement to sound, which we explore in a residency with three dancers.
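
The core idea of the abstract can be sketched in a few lines of code: train a Variational Autoencoder on unlabelled gesture frames, then use the encoder's latent vector, rather than hand-programmed features, as the control signal for sound synthesis. The sketch below is illustrative only; the pose dimensionality, network sizes, latent dimension, and the routing of latent values to synthesis parameters are assumptions for the example, not details taken from the Sonified Body system.

# Minimal sketch of a "latent mapping": a VAE trained on unlabelled gesture
# frames, whose latent space then drives sound synthesis. The 51-dimensional
# pose input (17 joints x 3 coordinates), layer sizes, and 8-dimensional
# latent space are assumed values for illustration.
import torch
import torch.nn as nn

POSE_DIM = 51   # assumed: 17 body joints x 3 coordinates per frame
LATENT_DIM = 8  # assumed: small latent space acting as expressive control axes

class GestureVAE(nn.Module):
    def __init__(self, pose_dim=POSE_DIM, latent_dim=LATENT_DIM):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(pose_dim, 128), nn.ReLU())
        self.to_mu = nn.Linear(128, latent_dim)
        self.to_logvar = nn.Linear(128, latent_dim)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(), nn.Linear(128, pose_dim)
        )

    def encode(self, x):
        h = self.encoder(x)
        return self.to_mu(h), self.to_logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * epsilon so gradients flow through mu and logvar.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar, beta=1.0):
    # Reconstruction term plus KL divergence to the unit Gaussian prior.
    recon_loss = nn.functional.mse_loss(recon, x, reduction="mean")
    kl = -0.5 * torch.mean(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + beta * kl

# At performance time only the encoder is needed: each incoming pose frame is
# compressed to a latent vector, whose components become continuous controls
# for the sound engine (printed here as placeholder values).
model = GestureVAE()
pose_frame = torch.randn(1, POSE_DIM)  # stand-in for a live motion-capture frame
with torch.no_grad():
    mu, _ = model.encode(pose_frame)
print("synth control values:", mu.squeeze().tolist())

Because the VAE is trained without input/output labels, the resulting mapping is not specified in advance: the creator explores the latent axes by moving, and keeps or retrains the model depending on whether the discovered mapping feels expressive, matching the open-ended process the abstract describes.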

Cite

APA

Murray-Browne, T., & Tigas, P. (2021). Latent Mappings: Generating Open-Ended Expressive Mappings Using Variational Autoencoders. In Proceedings of the International Conference on New Interfaces for Musical Expression. https://doi.org/10.21428/92fbeb44.9d4bcd4b
