Pre-Training Audio Representations with Self-Supervision

Abstract

We explore self-supervision as a way to learn general-purpose audio representations. Specifically, we propose two self-supervised tasks: Audio2Vec, which aims at reconstructing a spectrogram slice from past and future slices, and TemporalGap, which estimates the distance in time between two short audio segments extracted at random from the same audio clip. We evaluate how the representations learned via self-supervision transfer to different downstream tasks, either by training a task-specific linear classifier on top of the pretrained embeddings or by fine-tuning a model end-to-end for each downstream task. Our results show that the representations learned with Audio2Vec transfer better than those learned by fully supervised training on AudioSet. In addition, by fine-tuning the Audio2Vec representations it is possible to outperform fully supervised models trained from scratch on each task when limited data is available, thus improving label efficiency.
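To make the two pretext tasks concrete, the following is a minimal PyTorch sketch of how they could be set up. The encoder architecture, slice dimensions, embedding size, and the choice of mean squared error for both the slice reconstruction and the gap regression are illustrative assumptions, not the exact configuration used in the paper.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class Encoder(nn.Module):
        """Maps a (batch, 1, freq, time) spectrogram slice to an embedding."""
        def __init__(self, embed_dim=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, embed_dim),
            )

        def forward(self, x):
            return self.net(x)

    def audio2vec_loss(encoder, decoder, past, future, target):
        # Audio2Vec (CBoW-style): encode the context slices, concatenate
        # their embeddings, and reconstruct the held-out middle slice.
        context = torch.cat([encoder(s) for s in past + future], dim=1)
        reconstruction = decoder(context).view_as(target)
        return F.mse_loss(reconstruction, target)

    def temporal_gap_loss(encoder, regressor, seg_a, seg_b, gap):
        # TemporalGap: encode two segments sampled from the same clip and
        # regress the (normalized) time gap separating them.
        pair = torch.cat([encoder(seg_a), encoder(seg_b)], dim=1)
        return F.mse_loss(regressor(pair).squeeze(1), gap)

    # Hypothetical shapes: 64 mel bins x 96 frames per slice, two context
    # slices on each side of the target, batch of 8.
    encoder = Encoder()
    decoder = nn.Linear(4 * 128, 64 * 96)   # flat reconstruction head
    regressor = nn.Sequential(nn.Linear(2 * 128, 64), nn.ReLU(), nn.Linear(64, 1))

    past = [torch.randn(8, 1, 64, 96) for _ in range(2)]
    future = [torch.randn(8, 1, 64, 96) for _ in range(2)]
    middle = torch.randn(8, 1, 64, 96)
    loss_a2v = audio2vec_loss(encoder, decoder, past, future, middle)

    seg_a, seg_b = torch.randn(8, 1, 64, 96), torch.randn(8, 1, 64, 96)
    gap = torch.rand(8)                     # gap normalized to [0, 1]
    loss_gap = temporal_gap_loss(encoder, regressor, seg_a, seg_b, gap)

After pre-training with either objective, the same encoder is what gets reused downstream, either frozen under a task-specific linear classifier or fine-tuned end-to-end, as described in the abstract.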

Citation (APA)

Tagliasacchi, M., Gfeller, B., Quitry, F. D. C., & Roblek, D. (2020). Pre-Training Audio Representations with Self-Supervision. IEEE Signal Processing Letters, 27, 600–604. https://doi.org/10.1109/LSP.2020.2985586
