Demonstration of PerformanceNet: A convolutional neural network model for score-to-audio music generation


Abstract

In this paper we present PerformanceNet, a neural network model we recently proposed for score-to-audio music generation. The model learns to convert a music piece from the symbolic domain to the audio domain, automatically assigning performance-level attributes such as changes in velocity to the music and then synthesizing the audio. The model is therefore not just a neural audio synthesizer, but an AI performer that learns to interpret a musical score in its own way. The code and sample outputs of the model can be found online at https://github.com/bwang514/PerformanceNet.
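The two-stage idea described above (first assign performance-level attributes to the symbolic score, then synthesize audio) can be illustrated with a toy sketch. Note that this is purely hypothetical pseudocode in spirit: the real PerformanceNet uses convolutional neural networks for both stages, whereas here hand-written rules (an arch-shaped velocity curve and sine-wave synthesis) stand in for the learned components, and all names and parameters are illustrative assumptions.

```python
import math

SR = 8000  # sample rate in Hz, chosen arbitrarily for this sketch


def assign_velocities(score):
    """Stage 1 (stand-in for the learned 'AI performer'):
    map a symbolic score, a list of (midi_pitch, duration_s) pairs,
    to performance-level attributes -- here a simple arch-shaped
    velocity curve over the course of the phrase."""
    n = len(score)
    performance = []
    for i, (pitch, dur) in enumerate(score):
        vel = 0.5 + 0.4 * math.sin(math.pi * i / max(n - 1, 1))
        performance.append((pitch, dur, vel))
    return performance


def synthesize(performance):
    """Stage 2 (stand-in for neural audio synthesis):
    render (pitch, duration, velocity) events as sine waves
    with a short fade-in/out envelope to avoid clicks."""
    samples = []
    for pitch, dur, vel in performance:
        freq = 440.0 * 2 ** ((pitch - 69) / 12)  # MIDI note number -> Hz
        for k in range(int(SR * dur)):
            t = k / SR
            env = min(1.0, 50 * t, 10 * (dur - t))  # attack/release ramps
            samples.append(vel * env * math.sin(2 * math.pi * freq * t))
    return samples


# A three-note score: C4, E4, G4 with note durations in seconds.
score = [(60, 0.25), (64, 0.25), (67, 0.5)]
audio = synthesize(assign_velocities(score))
```

The point of the split is that the same symbolic score can be "performed" differently by changing only stage 1, while stage 2 stays a generic renderer of expressive events.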

Citation (APA)

Chen, Y. H., Wang, B., & Yang, Y. H. (2019). Demonstration of PerformanceNet: A convolutional neural network model for score-to-audio music generation. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 6506–6508). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/938
