Emotions Don't Lie: An Audio-Visual Deepfake Detection Method using Affective Cues


Abstract

We present a learning-based method for detecting deepfake multimedia content, i.e., distinguishing real videos from fakes. To maximize the information available for learning, we extract and analyze the similarity between the audio and visual modalities within the same video. Additionally, we extract and compare affective cues corresponding to perceived emotion from the two modalities to infer whether the input video is "real" or "fake". We propose a deep learning network inspired by the Siamese architecture and the triplet loss. To validate our model, we report the Area Under the Curve (AUC) metric on two large-scale deepfake detection datasets, DeepFake-TIMIT (DF-TIMIT) and DFDC. We compare our approach with several state-of-the-art deepfake detection methods and report a per-video AUC of 84.4% on DFDC and 96.6% on DF-TIMIT. To the best of our knowledge, ours is the first approach that simultaneously exploits the audio and video modalities, together with the perceived emotions extracted from both, for deepfake detection.
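To make the Siamese/triplet idea in the abstract concrete, below is a minimal PyTorch sketch of a two-branch audio-visual similarity model trained with a triplet-style margin objective: embeddings of the audio and visual streams of a real video are pulled together, while the visual embedding of a fake is pushed away. The encoder sizes, feature dimensions, margin, and pairing scheme are illustrative assumptions, not the authors' implementation, and the affective-cue branch is omitted for brevity.

```python
# Hedged sketch (not the authors' code): Siamese-style audio-visual similarity
# with a triplet-style loss, as described in the abstract. All dimensions,
# layer sizes, and the margin below are assumed for illustration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ModalityEncoder(nn.Module):
    """Maps a per-video modality feature (audio or visual) into a shared embedding space."""
    def __init__(self, in_dim: int, emb_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 256), nn.ReLU(),
            nn.Linear(256, emb_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # L2-normalize so cosine similarity is well behaved.
        return F.normalize(self.net(x), dim=-1)


class AudioVisualSiamese(nn.Module):
    """Two branches, one per modality. For real videos the branches should agree;
    for fakes the audio-visual agreement should break down."""
    def __init__(self, audio_dim: int = 40, visual_dim: int = 512, emb_dim: int = 128):
        super().__init__()
        self.audio_enc = ModalityEncoder(audio_dim, emb_dim)
        self.visual_enc = ModalityEncoder(visual_dim, emb_dim)

    def forward(self, audio_feat: torch.Tensor, visual_feat: torch.Tensor):
        return self.audio_enc(audio_feat), self.visual_enc(visual_feat)


def triplet_style_loss(a_real, v_real, v_fake, margin: float = 0.2):
    """Pull real-audio/real-visual embeddings together, push real-audio/fake-visual
    embeddings apart by at least `margin` (hypothetical pairing of real and fake clips)."""
    pos = 1.0 - F.cosine_similarity(a_real, v_real)  # distance for the matched (real) pair
    neg = 1.0 - F.cosine_similarity(a_real, v_fake)  # distance for the mismatched (fake) pair
    return F.relu(pos - neg + margin).mean()


if __name__ == "__main__":
    model = AudioVisualSiamese()
    audio = torch.randn(8, 40)       # e.g. MFCC-style audio features (assumed)
    vis_real = torch.randn(8, 512)   # e.g. face-crop visual features (assumed)
    vis_fake = torch.randn(8, 512)

    a_emb, v_emb = model(audio, vis_real)
    _, vf_emb = model(audio, vis_fake)
    loss = triplet_style_loss(a_emb, v_emb, vf_emb)
    loss.backward()
    print(f"toy triplet-style loss: {loss.item():.4f}")
```

At test time, such a model would score a video by the disagreement between its audio and visual embeddings (here, one branch per modality), with large disagreement indicating a likely fake; the paper additionally compares perceived-emotion cues across the two modalities.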

Citation (APA)

Mittal, T., Bhattacharya, U., Chandra, R., Bera, A., & Manocha, D. (2020). Emotions Don’t Lie: An Audio-Visual Deepfake Detection Method using Affective Cues. In MM 2020 - Proceedings of the 28th ACM International Conference on Multimedia (pp. 2823–2832). Association for Computing Machinery, Inc. https://doi.org/10.1145/3394171.3413570
