What Makes Fake Images Detectable? Understanding Properties that Generalize


Abstract

The quality of image generation and manipulation is reaching impressive levels, making it increasingly difficult for a human to distinguish between what is real and what is fake. However, deep networks can still pick up on the subtle artifacts in these doctored images. We seek to understand what properties of fake images make them detectable and identify what generalizes across different model architectures, datasets, and variations in training. We use a patch-based classifier with limited receptive fields to visualize which regions of fake images are more easily detectable. We further show a technique to exaggerate these detectable properties and demonstrate that, even when the image generator is adversarially finetuned against a fake image classifier, it is still imperfect and leaves detectable artifacts in certain image patches. Code is available at https://github.com/chail/patch-forensics.
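The key tool described in the abstract is a classifier whose receptive field covers only a small patch, so its output is a spatial map of per-patch real/fake scores rather than a single image-level label. As a rough illustration only (a minimal PyTorch sketch with assumed layer sizes, not the authors' released model; see the linked repository for the actual code):

```python
import torch
import torch.nn as nn

class PatchClassifier(nn.Module):
    """Hypothetical patch-based real/fake classifier with a limited receptive field.

    A few small convolutions (and no global pooling) keep each output logit
    dependent on only a 9x9 pixel patch, so the logit map shows which image
    regions look fake. Layer sizes here are illustrative assumptions.
    """
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.Conv2d(64, 128, kernel_size=3, stride=2), nn.ReLU(),
        )
        # 1x1 conv head: per-patch logits for {real, fake}; spatial
        # structure is preserved, so detectability can be localized.
        self.head = nn.Conv2d(128, 2, kernel_size=1)

    def forward(self, x):
        return self.head(self.features(x))  # (B, 2, H', W') patch logits

model = PatchClassifier()
img = torch.randn(1, 3, 128, 128)            # stand-in for an input image
patch_logits = model(img)                    # per-patch real/fake scores
image_logit = patch_logits.mean(dim=(2, 3))  # aggregate to an image-level score
print(patch_logits.shape, image_logit.shape)
```

Averaging the patch logits gives an image-level decision, while the raw logit map can be upsampled and overlaid on the input to visualize which regions give a fake away.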

Citation (APA)

Chai, L., Bau, D., Lim, S. N., & Isola, P. (2020). What makes fake images detectable? Understanding properties that generalize. In Lecture Notes in Computer Science (Vol. 12371, pp. 103–120). Springer. https://doi.org/10.1007/978-3-030-58574-7_7
