RL-Duet: Online music accompaniment generation using deep reinforcement learning


Abstract

This paper presents a deep reinforcement learning algorithm for online accompaniment generation, with potential for real-time interactive human-machine duet improvisation. Unlike offline music generation and harmonization, online music accompaniment requires the algorithm to respond to human input and generate the machine counterpart sequentially. We cast this as a reinforcement learning problem, where the generation agent learns a policy to generate a musical note (action) based on the previously generated context (state). The key to this algorithm is a well-functioning reward model. Instead of defining it with hand-crafted music composition rules, we learn this model from monophonic and polyphonic training data. This model considers the compatibility of the machine-generated note with both the machine-generated context and the human-generated context. Experiments show that this algorithm is able to respond to the human part and generate a melodic, harmonic, and diverse machine part. Subjective preference evaluations show that the proposed algorithm generates music pieces of higher quality than the baseline method.
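The abstract's framing can be illustrated with a minimal sketch, not the paper's implementation: at each time step the agent picks a machine note (action) given the human part and its own previously generated notes (state), guided by a reward model. Here the reward model is a hypothetical hand-written stand-in that favors consonant intervals and smooth melodic motion; in RL-Duet this model is instead learned from training data, and the policy is trained with deep RL rather than chosen greedily.

```python
# Sketch of online accompaniment as sequential decision-making.
# All names and the reward heuristic below are illustrative assumptions.

PITCHES = list(range(60, 72))  # one octave of MIDI pitches, C4-B4
CONSONANT_INTERVALS = {0, 3, 4, 5, 7, 8, 9}  # pitch-class intervals in semitones

def reward(human_note, machine_note, machine_context):
    """Stand-in reward model: compatibility with the concurrent human
    note (harmonic term) plus smoothness relative to the machine's own
    previous note (melodic term)."""
    harmonic = 1.0 if abs(human_note - machine_note) % 12 in CONSONANT_INTERVALS else -1.0
    melodic = -abs(machine_note - machine_context[-1]) / 12.0 if machine_context else 0.0
    return harmonic + melodic

def greedy_policy(human_note, machine_context):
    """Greedy agent: pick the note (action) maximizing the reward model
    given the current state (human input plus machine context)."""
    return max(PITCHES, key=lambda n: reward(human_note, n, machine_context))

def accompany(human_part):
    """Generate the machine part one note at a time, in sequential order,
    responding to each incoming human note."""
    machine_part = []
    for h in human_part:
        machine_part.append(greedy_policy(h, machine_part))
    return machine_part

print(accompany([60, 62, 64, 65]))
```

The point of the sketch is the online structure: the machine part is committed note by note as human input arrives, rather than harmonized offline over a complete score.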

Citation (APA)

Jiang, N., Jin, S., Duan, Z., & Zhang, C. (2020). RL-Duet: Online music accompaniment generation using deep reinforcement learning. In AAAI 2020 - 34th AAAI Conference on Artificial Intelligence (pp. 710–718). AAAI Press. https://doi.org/10.1609/aaai.v34i01.5413
