Abstract
Generative Adversarial Networks (GANs) have driven significant progress in image generation. However, how the deep representations of a GAN transform a random vector into a realistic image remains poorly understood. This chapter summarizes recent work on interpreting deep generative models, categorizing the methods into supervised, unsupervised, and embedding-guided approaches. We will see how human-understandable concepts that emerge in the learned representation can be identified and used for interactive image generation and editing.
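The interactive editing described above typically works by shifting a latent vector along a direction associated with a concept. The following is a minimal sketch of that idea, not the chapter's method: the generator here is a stand-in random linear map, and the concept direction `d` is a placeholder for one that would be discovered by a supervised, unsupervised, or embedding-guided approach.

```python
import numpy as np

# Toy stand-in for a trained GAN generator: a fixed random linear
# layer mapping a 16-dim latent vector to a 64-dim "image".
rng = np.random.default_rng(0)
W = rng.standard_normal((64, 16))

def generate(z):
    """Map a latent vector z to a synthetic 'image' vector."""
    return np.tanh(W @ z)

def edit(z, d, alpha):
    """Move z by alpha along the unit-normalized concept direction d."""
    return z + alpha * (d / np.linalg.norm(d))

z = rng.standard_normal(16)       # random latent code
d = rng.standard_normal(16)       # placeholder concept direction
img_before = generate(z)
img_after = generate(edit(z, d, 3.0))
```

Interactive editing then amounts to letting a user adjust `alpha` and re-running the generator, which is cheap because only the latent code changes.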
Zhou, B. (2022). Interpreting Generative Adversarial Networks for Interactive Image Generation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13200 LNAI, pp. 167–175). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-04083-2_9