An Unsupervised Way to Understand Artifact Generating Internal Units in Generative Neural Networks


Abstract

Despite significant improvements in the image generation performance of Generative Adversarial Networks (GANs), generations with low visual fidelity are still observed. Because widely used metrics for GANs focus on the overall performance of the model, evaluating the quality of individual generations or detecting defective generations is challenging. While recent studies attempt to detect feature-map units that cause artifacts and to evaluate individual samples, these approaches require additional resources such as external networks or a large amount of training data to approximate the real data manifold. In this work, we propose the concept of local activation and devise a metric on the local activation to detect artifact generations without additional supervision. We empirically verify that our approach can detect and correct artifact generations from GANs trained on various datasets. Finally, we present a geometrical analysis that partially reveals the relation between the proposed concept and low visual fidelity.
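The abstract's pipeline (score internal units by a local-activation statistic, flag outlier units, and ablate them to correct artifacts) can be illustrated with a small sketch. Note that the paper's actual metric is not specified in this abstract, so the score below (the strongest windowed mean activation of a unit relative to its global mean) and the outlier threshold are hypothetical stand-ins, not the authors' method:

```python
import numpy as np

def local_activation_scores(fmap, window=4):
    """Score each unit of a (C, H, W) feature map by how concentrated
    its activation is: strongest local (windowed) mean divided by the
    unit's global mean. This scoring rule is an illustrative assumption."""
    C, H, W = fmap.shape
    scores = np.zeros(C)
    for c in range(C):
        m = fmap[c]
        global_mean = m.mean() + 1e-8  # avoid division by zero
        best_local = -np.inf
        for i in range(0, H - window + 1, window):
            for j in range(0, W - window + 1, window):
                best_local = max(best_local, m[i:i+window, j:j+window].mean())
        scores[c] = best_local / global_mean
    return scores

def flag_and_ablate(fmap, scores, z=2.0):
    """Flag units whose score exceeds mean + z * std across units,
    then zero those units out to suppress the artifacts they may cause."""
    thresh = scores.mean() + z * scores.std()
    flagged = np.where(scores > thresh)[0]
    corrected = fmap.copy()
    corrected[flagged] = 0.0
    return flagged, corrected

# Toy demonstration: unit 3 carries an abnormally concentrated activation.
rng = np.random.default_rng(0)
fmap = rng.random((8, 16, 16))
fmap[3, :4, :4] += 10.0
scores = local_activation_scores(fmap)
flagged, corrected = flag_and_ablate(fmap, scores)
```

In this toy run, only the anomalous unit is flagged and zeroed; in a real GAN the feature map would be taken from an internal generator layer, and the corrected map would be passed through the remaining layers to regenerate the image.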

Citation (APA)

Jeong, H., Han, J., & Choi, J. (2022). An Unsupervised Way to Understand Artifact Generating Internal Units in Generative Neural Networks. In Proceedings of the 36th AAAI Conference on Artificial Intelligence, AAAI 2022 (Vol. 36, pp. 879–887). Association for the Advancement of Artificial Intelligence. https://doi.org/10.1609/aaai.v36i1.19989
