Passive Video Forgery Detection Considering Spatio-Temporal Consistency

Abstract

This paper proposes a method for detecting forged objects in videos containing dynamic scenes, such as those with dynamic backgrounds or non-stationary content. To adapt to dynamic scenes, the method combines a Convolutional Neural Network (CNN) with a Recurrent Neural Network (RNN), which makes it possible to consider the spatio-temporal consistency of videos. New video forgery databases are also constructed for object modification as well as object removal. The proposed method, using Convolutional Long Short-Term Memory (ConvLSTM), achieved an Area Under the Curve (AUC) of 0.977 and an Equal Error Rate (EER) of 0.061 on the object removal database, and an AUC of 0.872 and an EER of 0.219 on the object modification database.
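The AUC and EER figures reported above can be computed directly from a detector's per-frame forgery scores and ground-truth labels. The following is a minimal, dependency-free sketch of that computation (the function names are illustrative, not code from the paper):

```python
def roc_auc(labels, scores):
    """AUC via the Mann-Whitney statistic: the probability that a randomly
    chosen forged sample (label 1) scores higher than a pristine one (label 0)."""
    order = sorted(range(len(scores)), key=lambda i: scores[i])
    ranks = [0.0] * len(scores)
    i = 0
    while i < len(order):
        # group tied scores and assign them their average 1-based rank
        j = i
        while j + 1 < len(order) and scores[order[j + 1]] == scores[order[i]]:
            j += 1
        avg_rank = (i + j) / 2 + 1
        for k in range(i, j + 1):
            ranks[order[k]] = avg_rank
        i = j + 1
    pos_ranks = [r for r, y in zip(ranks, labels) if y == 1]
    n_pos, n_neg = len(pos_ranks), len(labels) - len(pos_ranks)
    return (sum(pos_ranks) - n_pos * (n_pos + 1) / 2) / (n_pos * n_neg)

def equal_error_rate(labels, scores):
    """EER: the error rate at the threshold where the false-positive rate
    and false-negative rate are (as nearly as possible) equal."""
    n_pos = sum(labels)
    n_neg = len(labels) - n_pos
    best_gap, best_eer = float("inf"), 1.0
    for t in sorted(set(scores)):
        tp = sum(1 for y, s in zip(labels, scores) if y == 1 and s >= t)
        fp = sum(1 for y, s in zip(labels, scores) if y == 0 and s >= t)
        fpr = fp / n_neg
        fnr = 1 - tp / n_pos
        if abs(fpr - fnr) < best_gap:
            best_gap, best_eer = abs(fpr - fnr), (fpr + fnr) / 2
    return best_eer
```

With perfectly separated scores, `roc_auc` returns 1.0 and `equal_error_rate` returns 0.0; overlapping score distributions pull the AUC toward 0.5 and the EER toward 0.5.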

Citation (APA)

Kono, K., Yoshida, T., Ohshiro, S., & Babaguchi, N. (2020). Passive Video Forgery Detection Considering Spatio-Temporal Consistency. In Advances in Intelligent Systems and Computing (Vol. 942, pp. 381–391). Springer Verlag. https://doi.org/10.1007/978-3-030-17065-3_38
