Deep forgery discriminator via image degradation analysis


Abstract

Generative adversarial network (GAN)-based deep generative models are widely used to create hyper-realistic face-swapping images and videos. However, their malicious use poses a great threat to online content, making it difficult to verify the authenticity of images and videos. Most existing detection methods are suited to only one type of forgery and work only on low-quality tampered images, which restricts their applications. This paper concerns the construction of a novel discriminator with better comprehensive capability. Analysing the visual characteristics of manipulated images from the perspective of image quality reveals that the synthesized face exhibits varying degrees of quality degradation relative to the source content. Therefore, several kinds of image quality-related handcrafted features are extracted, including texture, sharpness, frequency-domain features, and deep features, to unveil the inconsistencies and modification traces in fake faces. Multi-feature fusion yields a 1065-dimensional vector for each image, which is then fed into a random forest (RF) to train a targeted binary classification detector. Extensive experiments show that the proposed scheme outperforms previous methods in recognition accuracy on multiple manipulation databases, including the higher-visual-quality Celeb-DF database.
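The pipeline described in the abstract (extract quality-related features, fuse them into a single vector, classify with a random forest) can be sketched as follows. This is a minimal illustration under our own assumptions, not the paper's implementation: only two of the named feature families are shown (a Laplacian-variance sharpness proxy and radial frequency-band energies), the function names are ours, and the fused vector here is far shorter than the paper's 1065 dimensions, which also include texture and deep features.

```python
import numpy as np

def laplacian_variance(img):
    # Sharpness proxy (hypothetical stand-in for the paper's sharpness feature):
    # variance of a discrete 4-neighbour Laplacian response.
    lap = (-4.0 * img
           + np.roll(img, 1, axis=0) + np.roll(img, -1, axis=0)
           + np.roll(img, 1, axis=1) + np.roll(img, -1, axis=1))
    return lap.var()

def radial_freq_energy(img, n_bands=8):
    # Frequency-domain features: normalized spectral energy in radial bands
    # of the 2-D FFT magnitude (an assumed, simple descriptor).
    mag = np.abs(np.fft.fftshift(np.fft.fft2(img)))
    h, w = img.shape
    y, x = np.indices((h, w))
    r = np.hypot(y - h / 2.0, x - w / 2.0)
    r_max = r.max()
    bands = np.array([
        mag[(r >= i * r_max / n_bands) & (r < (i + 1) * r_max / n_bands)].sum()
        for i in range(n_bands)
    ])
    return bands / bands.sum()

def fuse_features(img):
    # Multi-feature fusion: concatenate the per-image descriptors into
    # one vector, which would then be fed to a classifier such as an RF.
    return np.concatenate([[laplacian_variance(img)], radial_freq_energy(img)])

rng = np.random.default_rng(0)
face = rng.random((64, 64))          # stand-in for a cropped face region
vec = fuse_features(face)
print(vec.shape)                     # (9,): 1 sharpness value + 8 band energies
```

In the full scheme, such fused vectors for real and forged faces would be used to fit a binary random-forest detector (e.g. scikit-learn's `RandomForestClassifier`); the sketch stops at the feature vector to stay self-contained.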

Cite

Yu, M., Zhang, J., Li, S., Lei, J., Wang, F., & Zhou, H. (2021). Deep forgery discriminator via image degradation analysis. IET Image Processing, 15(11), 2478–2493. https://doi.org/10.1049/ipr2.12234
