Large scale audiovisual learning of sounds with weakly labeled data


Abstract

Recognizing sounds is a key aspect of computational audio scene analysis and machine perception. In this paper, we advocate that sound recognition is inherently a multi-modal audiovisual task, in that it is easier to differentiate sounds using both the audio and visual modalities than either one alone. We present an audiovisual fusion model that learns to recognize sounds from weakly labeled video recordings. The proposed fusion model uses an attention mechanism to dynamically combine the outputs of the individual audio and visual models. Experiments on the large-scale sound event dataset AudioSet demonstrate the efficacy of the proposed model, which outperforms single-modal models as well as state-of-the-art fusion and multi-modal models. We achieve a mean Average Precision (mAP) of 46.16 on AudioSet, outperforming the prior state of the art by approximately 4.35 mAP (a relative improvement of 10.4%).
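The central mechanism described above is attention-based fusion: rather than averaging the audio and visual predictions with fixed weights, a learned gate decides, per class and per example, how much to trust each modality. The paper's exact architecture is not reproduced here, so the following PyTorch sketch is only illustrative; the class name `AttentionFusion`, the gating network, the 512-dimensional embeddings, and the 527-class output (AudioSet's label count) are all assumptions for the example, not the authors' implementation.

```python
# Minimal sketch of gated (attention-based) late fusion, assuming
# per-class logits and pooled embeddings from pretrained audio and
# visual sub-models. Names and shapes are illustrative only.
import torch
import torch.nn as nn


class AttentionFusion(nn.Module):
    def __init__(self, num_classes: int, embed_dim: int):
        super().__init__()
        # Hypothetical gating network: maps the concatenated modality
        # embeddings to a per-class weight for the audio branch.
        self.gate = nn.Sequential(
            nn.Linear(2 * embed_dim, num_classes),
            nn.Sigmoid(),
        )

    def forward(self, audio_emb, visual_emb, audio_logits, visual_logits):
        # alpha in (0, 1): per-class trust in the audio prediction;
        # (1 - alpha) is assigned to the visual prediction.
        alpha = self.gate(torch.cat([audio_emb, visual_emb], dim=-1))
        return alpha * audio_logits + (1.0 - alpha) * visual_logits


# Usage with dummy tensors: batch of 4 clips, 512-d embeddings,
# 527 AudioSet classes.
fusion = AttentionFusion(num_classes=527, embed_dim=512)
a_emb, v_emb = torch.randn(4, 512), torch.randn(4, 512)
a_log, v_log = torch.randn(4, 527), torch.randn(4, 527)
fused = fusion(a_emb, v_emb, a_log, v_log)  # shape: (4, 527)
```

Because the sigmoid gate produces a convex combination, each fused score stays within the range spanned by the two modality scores, so a confidently wrong modality can be down-weighted per class rather than dragging down the joint prediction.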

Citation (APA)

Fayek, H. M., & Kumar, A. (2020). Large scale audiovisual learning of sounds with weakly labeled data. In Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) (pp. 558–565). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2020/78
