Not made for each other- Audio-Visual Dissonance-based Deepfake Detection and Localization

Abstract

We propose detection of deepfake videos based on the dissimilarity between the audio and visual modalities, termed the Modality Dissonance Score (MDS). We hypothesize that manipulating either modality leads to disharmony between the two, e.g., loss of lip-sync or unnatural facial and lip movements. MDS is computed as the mean aggregate of dissimilarity scores between audio and visual segments in a video. Discriminative features are learnt for the audio and visual channels in a chunk-wise manner, employing a cross-entropy loss for the individual modalities and a contrastive loss that models inter-modality similarity. Extensive experiments on the DFDC and DeepFake-TIMIT datasets show that our approach outperforms the state-of-the-art by up to 7%. We also demonstrate temporal forgery localization, showing how our technique identifies the manipulated video segments.
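As a rough illustration of the ideas summarized above, the sketch below computes a per-video MDS as the mean of chunk-wise audio-visual distances and an inter-modality contrastive loss of the standard margin form. The use of Euclidean distance, the margin value, and the function names are assumptions made for illustration; they are not details taken from the paper.

    import torch
    import torch.nn.functional as F

    def modality_dissonance_score(audio_feats, visual_feats):
        # audio_feats, visual_feats: (num_chunks, feat_dim) chunk-wise
        # embeddings from the audio and visual streams.
        # Per-chunk dissimilarity between the two modalities
        # (Euclidean distance assumed here).
        per_chunk = F.pairwise_distance(audio_feats, visual_feats)
        # MDS: mean aggregate of the chunk-wise dissimilarity scores.
        return per_chunk.mean()

    def contrastive_loss(audio_feats, visual_feats, is_real, margin=1.0):
        # Hedged sketch of the contrastive loss: pull the modalities
        # together for real videos (is_real = 1.0) and push them apart,
        # up to a margin, for fakes (is_real = 0.0). The margin value is
        # illustrative only.
        d = F.pairwise_distance(audio_feats, visual_feats)
        loss_real = is_real * d.pow(2)
        loss_fake = (1.0 - is_real) * torch.clamp(margin - d, min=0).pow(2)
        return (loss_real + loss_fake).mean()

In such a setup, a video would be flagged as fake when its MDS exceeds a threshold, and the chunks with the largest dissonance would point to where the manipulation occurs, consistent with the temporal forgery localization described in the abstract.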

Citation (APA)

Chugh, K., Gupta, P., Dhall, A., & Subramanian, R. (2020). Not made for each other- Audio-Visual Dissonance-based Deepfake Detection and Localization. In MM 2020 - Proceedings of the 28th ACM International Conference on Multimedia (pp. 439–447). Association for Computing Machinery, Inc. https://doi.org/10.1145/3394171.3413700
