Efficient Temporally-Aware DeepFake Detection using H.264 Motion Vectors


Abstract

Video DeepFakes are fake media created with Deep Learning (DL) that manipulate a person's expression or identity. Most current DeepFake detection methods analyze each frame independently, ignoring inconsistencies and unnatural movements between frames. Some newer methods employ optical flow models to capture this temporal aspect, but they are computationally expensive. In contrast, we propose using the related but often ignored Motion Vectors (MVs) and Information Masks (IMs) from the H.264 video codec to detect temporal inconsistencies in DeepFakes. Our experiments show that this approach is effective and adds minimal computational cost compared with per-frame RGB-only methods. This could enable new, real-time, temporally-aware DeepFake detection methods for video calls and streaming.
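To make the idea concrete: H.264 encoders already compute per-block motion vectors during compression, so a detector can reuse them instead of running an optical-flow network. The following is a rough, pure-Python sketch (not the authors' method) of one simple temporal feature one could derive from MVs that have already been extracted from the bitstream; the input format and function names here are hypothetical.

```python
import math

def mean_mv_magnitude(frame_mvs):
    """Mean motion-vector magnitude for one frame.

    frame_mvs: list of (dx, dy) motion vectors, e.g. as exported by an
    H.264 decoder (hypothetical input format for this sketch).
    """
    if not frame_mvs:
        return 0.0
    return sum(math.hypot(dx, dy) for dx, dy in frame_mvs) / len(frame_mvs)

def temporal_inconsistency(video_mvs):
    """Absolute change in mean MV magnitude between consecutive frames.

    Large spikes can hint at unnatural motion; an actual detector would
    feed such features (or the raw MV fields) to a learned classifier.
    """
    mags = [mean_mv_magnitude(f) for f in video_mvs]
    return [abs(b - a) for a, b in zip(mags, mags[1:])]

# Toy clip: smooth motion for three frames, then an abrupt jump.
clip = [
    [(1, 0), (1, 1)],    # frame 0
    [(1, 0), (2, 1)],    # frame 1
    [(2, 1), (2, 2)],    # frame 2
    [(9, 9), (10, 8)],   # frame 3: sudden, unnatural change
]
scores = temporal_inconsistency(clip)
```

Because the MVs come for free from the decoder, the cost of such features is negligible next to running a dense optical-flow model per frame pair. In practice, MV fields can be exported from FFmpeg-based decoders with the `+export_mvs` flag.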

Citation (APA)

Grönquist, P., Ren, Y., He, Q., Verardo, A., & Süsstrunk, S. (2024). Efficient Temporally-Aware DeepFake Detection using H.264 Motion Vectors. In IS and T International Symposium on Electronic Imaging Science and Technology (Vol. 36). Society for Imaging Science and Technology. https://doi.org/10.2352/EI.2024.36.4.MWSF-335
