Abstract
In training a deep learning system to perform audio transcription, two practical problems may arise. First, most datasets are weakly labelled, providing only a list of the events present in each recording, with no temporal information for training. Second, deep neural networks require a very large amount of labelled training data to achieve good performance, yet in practice it is difficult to collect enough samples for most classes of interest. In this paper, we propose factorising the final task of audio transcription into multiple intermediate tasks in order to improve training performance when dealing with low-resource datasets of this kind. We evaluate three data-efficient approaches to training a stacked convolutional and recurrent neural network for the intermediate tasks. Our results show that the different training methods have different advantages and disadvantages.
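As an illustration of the kind of model the abstract refers to, the sketch below is a minimal stacked convolutional and recurrent network (CRNN) for weakly labelled audio tagging in PyTorch. The layer sizes, pooling scheme, and temporal max-pooling of frame predictions are illustrative assumptions, not the authors' exact configuration; the key idea shown is that clip-level (weak) labels can supervise the pooled output while the network still produces frame-level predictions.

```python
# Hypothetical minimal CRNN sketch for weakly labelled audio tagging.
# Layer sizes and pooling choices are assumptions for illustration only.
import torch
import torch.nn as nn

class CRNNTagger(nn.Module):
    def __init__(self, n_mels=40, n_classes=10, rnn_hidden=64):
        super().__init__()
        # Convolutional front end: learns local time-frequency patterns.
        self.conv = nn.Sequential(
            nn.Conv2d(1, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((1, 4)),   # pool over frequency only, keep time resolution
            nn.Conv2d(32, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d((1, 4)),
        )
        # Recurrent layer aggregates context across time frames.
        self.rnn = nn.GRU(32 * (n_mels // 16), rnn_hidden,
                          batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * rnn_hidden, n_classes)

    def forward(self, x):
        # x: (batch, 1, time, n_mels) log-mel spectrogram
        h = self.conv(x)                                  # (batch, ch, time, mels')
        h = h.permute(0, 2, 1, 3).flatten(2)              # (batch, time, ch * mels')
        h, _ = self.rnn(h)                                 # (batch, time, 2 * hidden)
        frame_probs = torch.sigmoid(self.classifier(h))    # per-frame event probabilities
        # Weak labels give only clip-level tags, so pool frame predictions over time.
        clip_probs = frame_probs.max(dim=1).values
        return clip_probs, frame_probs

# Example usage with a dummy batch of 8 one-second clips (100 frames, 40 mel bands).
model = CRNNTagger()
clips = torch.randn(8, 1, 100, 40)
clip_probs, frame_probs = model(clips)
print(clip_probs.shape, frame_probs.shape)  # torch.Size([8, 10]) torch.Size([8, 100, 10])
```

Training such a model against clip-level tags only (e.g. with binary cross-entropy on `clip_probs`) is one common way to handle weakly labelled data; the frame-level outputs can then be read off for event detection.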
Citation
Morfi, V., & Stowell, D. (2018). Deep learning for audio event detection and tagging on low-resource datasets. Applied Sciences (Switzerland), 8(8). https://doi.org/10.3390/app8081397