Residual Recurrent CRNN for End-to-End Optical Music Recognition on Monophonic Scores


Abstract

One of the challenges of the Optical Music Recognition task is to transcribe the symbols in camera-captured images into digital music notation. The previous end-to-end model, developed as a Convolutional Recurrent Neural Network (CRNN), does not exploit sufficient contextual information across full scales, leaving considerable room for improvement. We propose an innovative framework that combines a Residual Recurrent Convolutional Neural Network block with a recurrent Encoder-Decoder network to map an input image to the sequence of monophonic music symbols corresponding to the notation present in the image. The Residual Recurrent Convolutional block strengthens the model's ability to capture and enrich contextual information. Experimental results on the publicly available CAMERA-PRIMUS dataset demonstrate that our approach surpasses the state-of-the-art end-to-end method based on a Convolutional Recurrent Neural Network.
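For readers unfamiliar with the building block named in the abstract, the sketch below shows one common way to implement a Residual Recurrent Convolutional block in PyTorch (in the spirit of R2U-Net-style blocks). The class names, the recurrence count t, and the channel sizes are illustrative assumptions, not the authors' published configuration.

```python
# Illustrative sketch of a Residual Recurrent Convolutional block.
# Names, time steps, and channel choices are assumptions, not the
# exact configuration from the paper.
import torch
import torch.nn as nn


class RecurrentConv(nn.Module):
    """Applies the same convolution recurrently: the previous output is
    added back to the block input before re-convolving with shared weights."""

    def __init__(self, channels, t=2):
        super().__init__()
        self.t = t
        self.conv = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(channels),
            nn.ReLU(inplace=True),
        )

    def forward(self, x):
        out = self.conv(x)
        for _ in range(self.t):
            out = self.conv(x + out)   # recurrence over shared weights
        return out


class ResidualRecurrentBlock(nn.Module):
    """Two recurrent convolutions wrapped in a residual (skip) connection."""

    def __init__(self, in_channels, out_channels, t=2):
        super().__init__()
        self.project = nn.Conv2d(in_channels, out_channels, kernel_size=1)
        self.rcnn = nn.Sequential(
            RecurrentConv(out_channels, t=t),
            RecurrentConv(out_channels, t=t),
        )

    def forward(self, x):
        x = self.project(x)
        return x + self.rcnn(x)        # residual path enriches context


# Example: a batch of grayscale staff images (N, 1, H, W)
if __name__ == "__main__":
    block = ResidualRecurrentBlock(in_channels=1, out_channels=32)
    scores = torch.randn(4, 1, 128, 512)
    features = block(scores)           # -> (4, 32, 128, 512)
    print(features.shape)
```

In an end-to-end pipeline of this kind, such blocks would replace plain convolutional layers in the feature extractor, with the resulting feature maps fed to the recurrent encoder-decoder that emits the symbol sequence.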

Cite (APA)

Liu, A., Zhang, L., Mei, Y., Han, B., Cai, Z., Zhu, Z., & Xiao, J. (2021). Residual Recurrent CRNN for End-to-End Optical Music Recognition on Monophonic Scores. In MMPT 2021 - Proceedings of the 2021 Workshop on Multi-Modal Pre-Training for Multimedia Understanding (pp. 23–27). Association for Computing Machinery, Inc. https://doi.org/10.1145/3463945.3469056
