Deep music: Towards musical dialogue

13 citations · 29 Mendeley readers

Abstract

Computer dialogue systems are designed to support meaningful interactions with humans, commonly through speech, text, or physical gestures. In this work we explore a communication paradigm in which both the input and output channels consist of music. Specifically, we examine the musical interaction scenario of call and response. We present a system that uses a deep autoencoder to learn semantic embeddings of musical input; the system learns to transform these embeddings so that reconstructing from the transformed vectors produces appropriate musical responses. To generate a response, the system combines generation with unit selection. Selection is based on a nearest-neighbor search within the embedding space, and for real-time use the search space is pruned using vector quantization. In the live demo, a person plays a MIDI keyboard and the computer generates a response that is played through a loudspeaker.
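
The response pipeline lends itself to a compact illustration. The Python/NumPy sketch below mimics the flow described in the abstract: a call is encoded to an embedding, a learned transformation predicts the embedding of a suitable response, and a response unit is selected by a nearest-neighbor search pruned with a small vector-quantization codebook. Every name here (`encode`, the transform `W`, the library and codebook sizes) is a stand-in assumption for illustration, not the paper's actual model or API.

```python
# A minimal sketch of the call-and-response pipeline, assuming:
#  - `ENC` stands in for the paper's trained deep autoencoder encoder,
#  - `W` stands in for its learned embedding transformation,
#  - the unit library and codebook are random placeholders.
import numpy as np

rng = np.random.default_rng(0)

FEAT_DIM = 32     # dimensionality of raw musical features (assumed)
EMB_DIM = 64      # embedding dimensionality (assumed)
N_UNITS = 5_000   # size of the musical unit library (assumed)
N_CELLS = 32      # number of vector-quantization cells (assumed)

# Placeholder encoder: maps a unit's raw features to an embedding.
ENC = rng.standard_normal((FEAT_DIM, EMB_DIM))

def encode(features: np.ndarray) -> np.ndarray:
    return features @ ENC

# Library of candidate response units, stored as embeddings.
unit_embeddings = rng.standard_normal((N_UNITS, EMB_DIM))

# Vector quantization: a small codebook partitions the library so
# that only one cell needs to be searched at query time.
centroids = unit_embeddings[rng.choice(N_UNITS, N_CELLS, replace=False)]
cell_of_unit = np.argmin(
    np.linalg.norm(unit_embeddings[:, None] - centroids[None], axis=-1),
    axis=1,
)

# Placeholder for the learned call-embedding -> response-embedding map.
W = rng.standard_normal((EMB_DIM, EMB_DIM)) / np.sqrt(EMB_DIM)

def respond(call_features: np.ndarray) -> int:
    """Select the library unit whose embedding best matches the
    transformed embedding of the incoming call."""
    target = encode(call_features) @ W
    # Prune: find the nearest codebook cell first...
    cell = int(np.argmin(np.linalg.norm(centroids - target, axis=1)))
    candidates = np.flatnonzero(cell_of_unit == cell)
    # ...then run exact nearest-neighbor search inside that cell only.
    dists = np.linalg.norm(unit_embeddings[candidates] - target, axis=1)
    return int(candidates[np.argmin(dists)])

print(respond(rng.standard_normal(FEAT_DIM)))
```

Restricting the search to the nearest codebook cell trades a little selection accuracy for a large constant-factor speedup, which is presumably what makes the nearest-neighbor step feasible at the interactive latency the demo requires.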

Citation (APA)

Bretan, M., Oore, S., Engel, J., Eck, D., & Heck, L. (2017). Deep music: Towards musical dialogue. In 31st AAAI Conference on Artificial Intelligence, AAAI 2017 (pp. 5081–5082). AAAI Press. https://doi.org/10.1609/aaai.v31i1.10544
