In this paper we present PerformanceNet, a neural network model we recently proposed for score-to-audio music generation. The model learns to convert a musical piece from the symbolic domain to the audio domain, automatically assigning performance-level attributes, such as changes in velocity, to the music and then synthesizing the audio. The model is therefore not just a neural audio synthesizer, but an AI performer that learns to interpret a musical score in its own way. The code and sample outputs of the model can be found online at https://github.com/bwang514/PerformanceNet.
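To illustrate the score-to-audio idea, the sketch below shows one plausible way to frame it as a learning problem: a convolutional network maps a symbolic pianoroll (pitch by time) to a spectrogram (frequency by time), which a separate phase-reconstruction or vocoder step would turn into a waveform. All layer sizes, class names, and the overall layout here are illustrative assumptions, not the published PerformanceNet architecture.

```python
# Hypothetical sketch: a 1-D convolutional score-to-spectrogram mapper.
# Not the actual PerformanceNet model; shapes and layers are assumptions.
import torch
import torch.nn as nn

class ScoreToSpectrogram(nn.Module):
    def __init__(self, n_pitches: int = 128, n_freq_bins: int = 1025):
        super().__init__()
        # Encoder: read the score (pianoroll) along the time axis.
        self.encoder = nn.Sequential(
            nn.Conv1d(n_pitches, 256, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(256, 512, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # Decoder: predict performance-level detail (e.g. dynamics)
        # as a spectrogram with the same number of time frames.
        self.decoder = nn.Sequential(
            nn.Conv1d(512, 512, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(512, n_freq_bins, kernel_size=1),
        )

    def forward(self, pianoroll: torch.Tensor) -> torch.Tensor:
        # pianoroll: (batch, n_pitches, time), binary or velocity-valued.
        return self.decoder(self.encoder(pianoroll))

# Usage: a short excerpt of 860 time frames containing one held note.
model = ScoreToSpectrogram()
score = torch.zeros(1, 128, 860)
score[0, 60, 100:300] = 1.0          # a held middle C
spectrogram = model(score)           # (1, 1025, 860), ready for a vocoder
```

Training such a mapper on aligned score/audio pairs is what would let the network decide performance attributes (dynamics, articulation) on its own rather than copying them from the score.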