ReconVAT: A Semi-Supervised Automatic Music Transcription Framework for Low-Resource Real-World Data

Abstract

Most current supervised automatic music transcription (AMT) models lack the ability to generalize: they have trouble transcribing real-world music recordings from diverse musical genres that are not represented in the labelled training data. In this paper, we propose a semi-supervised framework, ReconVAT, which addresses this issue by leveraging the large amount of available unlabelled music recordings. The proposed ReconVAT uses a reconstruction loss together with virtual adversarial training. When combined with existing U-net models for AMT, ReconVAT achieves competitive results on common benchmark datasets such as MAPS and MusicNet. For example, in the few-shot setting for the string part version of MusicNet, ReconVAT achieves F1-scores of 61.0% and 41.6% for the note-wise and note-with-offset-wise metrics respectively, which translates into improvements of 22.2% and 62.5% over the supervised baseline model. Our proposed framework also demonstrates the potential of continual learning on new data, which could be useful in real-world applications where new data becomes available continuously.
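The virtual adversarial training (VAT) component mentioned in the abstract regularizes the model on unlabelled audio by penalizing prediction changes under a worst-case small input perturbation. The sketch below illustrates the idea with a toy NumPy model; the `model`, `xi`, and `epsilon` names and the single power-iteration step approximated by finite differences are illustrative assumptions, not the paper's actual U-net or implementation.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def model(x, w):
    # Toy stand-in for the paper's U-net: a linear layer with sigmoid
    # "frame activation" outputs (one probability per pitch).
    return sigmoid(x @ w)

def kl_bernoulli(p, q, eps=1e-8):
    # KL divergence between elementwise Bernoulli predictions, summed.
    p, q = np.clip(p, eps, 1 - eps), np.clip(q, eps, 1 - eps)
    return float(np.sum(p * np.log(p / q) + (1 - p) * np.log((1 - p) / (1 - q))))

def vat_loss(x, w, xi=1e-3, epsilon=0.5, h=1e-5, rng=None):
    """VAT / local distributional smoothness term on one unlabelled input.

    xi scales the probe perturbation, epsilon the final adversarial one;
    both values here are illustrative, not the paper's hyperparameters.
    """
    rng = np.random.default_rng(0) if rng is None else rng
    p = model(x, w)
    # Random unit direction for the (single) power-iteration step.
    d = rng.normal(size=x.shape)
    d /= np.linalg.norm(d)
    r0 = xi * d
    # Finite-difference estimate of grad_r KL(p || model(x + r)) at r = r0;
    # a deep-learning framework would use autograd here instead.
    g = np.zeros_like(x)
    for i in range(x.size):
        e = np.zeros_like(x)
        e.flat[i] = h
        g.flat[i] = (kl_bernoulli(p, model(x + r0 + e, w))
                     - kl_bernoulli(p, model(x + r0 - e, w))) / (2 * h)
    # Adversarial perturbation of norm epsilon, and the VAT penalty.
    r_adv = epsilon * g / (np.linalg.norm(g) + 1e-12)
    return kl_bernoulli(p, model(x + r_adv, w))
```

In the semi-supervised setting described above, a term like `vat_loss` (plus a reconstruction loss) would be added on unlabelled recordings to the usual supervised transcription loss on labelled ones; the weighting between the terms is a training hyperparameter.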

Citation (APA)

Cheuk, K. W., Herremans, D., & Su, L. (2021). ReconVAT: A Semi-Supervised Automatic Music Transcription Framework for Low-Resource Real-World Data. In MM 2021 - Proceedings of the 29th ACM International Conference on Multimedia (pp. 3918–3926). Association for Computing Machinery, Inc. https://doi.org/10.1145/3474085.3475405
