Audio-Visual Activity Guided Cross-Modal Identity Association for Active Speaker Detection

Citations: 3 · Mendeley readers: 6

This article is free to access.

Abstract

Active speaker detection in videos addresses the task of associating a source face, visible in the video frames, with the underlying speech in the audio modality. The two primary sources of information for deriving such a speech-face relationship are i) visual activity and its interaction with the speech signal and ii) co-occurrences of speakers' identities across modalities in the form of face and speech. Each approach has its limitations: the audio-visual activity models are confused by other frequently occurring vocal activities, such as laughing and chewing, while the speakers' identity-based methods are limited to videos containing enough disambiguating information to establish a speech-face association. Since the two approaches are independent, we investigate their complementary nature in this work. We propose a novel unsupervised framework to guide the speakers' cross-modal identity association with the audio-visual activity for active speaker detection. Through experiments on entertainment media videos from two benchmark datasets, the AVA Active Speaker dataset (movies) and the Visual Person Clustering Dataset (TV shows), we show that a simple late fusion of the two approaches enhances active speaker detection performance.
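The late fusion mentioned in the abstract can be pictured as a weighted combination of per-face-track scores produced by the two independent models. The sketch below is illustrative only: the function name, the weighting parameter `alpha`, and the decision threshold are assumptions, not the formulation used in the paper.

```python
# Minimal sketch of late fusion of two active speaker detection score streams.
# The weighting scheme and threshold here are illustrative assumptions.
import numpy as np

def late_fusion_asd(av_activity_scores: np.ndarray,
                    identity_scores: np.ndarray,
                    alpha: float = 0.5,
                    threshold: float = 0.5) -> np.ndarray:
    """Combine per-face-track scores from two independent models.

    av_activity_scores : scores from an audio-visual activity model,
                         one per face track, in [0, 1].
    identity_scores    : scores from the cross-modal (speech-face)
                         identity-association model, same shape.
    Returns a boolean array marking each track as an active speaker or not.
    """
    fused = alpha * av_activity_scores + (1.0 - alpha) * identity_scores
    return fused >= threshold

# Example: three candidate face tracks in one video segment.
av = np.array([0.9, 0.2, 0.6])   # audio-visual activity model scores
idn = np.array([0.8, 0.1, 0.3])  # identity-association model scores
print(late_fusion_asd(av, idn))  # -> [ True False False]
```

In practice, the fusion weight would be chosen on held-out data; the point of the sketch is simply that the two complementary cues are combined at the score level rather than inside either model.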

Cite (APA style)

Sharma, R., & Narayanan, S. (2023). Audio-Visual Activity Guided Cross-Modal Identity Association for Active Speaker Detection. IEEE Open Journal of Signal Processing, 4, 225–232. https://doi.org/10.1109/OJSP.2023.3267269
