Multi-adversarial variational autoencoder nets for simultaneous image generation and classification

Abstract

Discriminative deep-learning models are often reliant on copious labeled training data. By contrast, deep generative models can learn from relatively small corpora of training data to generate realistic images that approximate real-world distributions. In particular, properly trained Generative Adversarial Networks (GANs) and Variational AutoEncoders (VAEs) can perform semi-supervised image classification. Combining the strengths of these two models, we introduce Multi-Adversarial Variational autoEncoder Networks (MAVENs), a novel deep generative model that incorporates an ensemble of discriminators in a VAE-GAN network in order to perform simultaneous adversarial learning and variational inference. We apply MAVENs to the generation of synthetic images and propose a new distribution measure to quantify the quality of these images. Our experimental results, obtained with only 10% labeled training data from the computer vision and medical imaging domains, demonstrate performance competitive with state-of-the-art semi-supervised models on simultaneous image generation and classification tasks.
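The abstract's central idea, passing the VAE decoder's samples to an ensemble of discriminators rather than a single one, can be sketched in outline. The NumPy snippet below is purely illustrative: the single-linear-layer networks, all layer dimensions, and the rule of averaging the ensemble's scores are assumptions for demonstration, not the architecture or training objective from the paper.

```python
import numpy as np

# Hypothetical dimensions chosen for illustration only.
D_IN, D_LATENT, N_DISC = 64, 8, 3  # input dim, latent dim, ensemble size
rng = np.random.default_rng(0)

def linear(x, w, b):
    return x @ w + b

# Encoder: maps an input to the mean and log-variance of q(z|x).
w_mu = rng.normal(size=(D_IN, D_LATENT)); b_mu = np.zeros(D_LATENT)
w_lv = rng.normal(size=(D_IN, D_LATENT)); b_lv = np.zeros(D_LATENT)

# Decoder/generator: maps a latent sample back to image space.
w_dec = rng.normal(size=(D_LATENT, D_IN)); b_dec = np.zeros(D_IN)

# Ensemble of discriminators, each producing a real/fake score.
disc_params = [(rng.normal(size=(D_IN, 1)), np.zeros(1)) for _ in range(N_DISC)]

def forward(x):
    mu = linear(x, w_mu, b_mu)
    logvar = linear(x, w_lv, b_lv)
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I).
    z = mu + np.exp(0.5 * logvar) * rng.normal(size=mu.shape)
    x_fake = np.tanh(linear(z, w_dec, b_dec))
    # Each ensemble member scores the generated sample independently;
    # here the scores are averaged to give one adversarial signal.
    scores = [1.0 / (1.0 + np.exp(-linear(x_fake, w, b)))
              for w, b in disc_params]
    return x_fake, np.mean(scores, axis=0)

x = rng.normal(size=(4, D_IN))       # a batch of 4 flattened "images"
x_fake, avg_score = forward(x)
```

In this sketch the generated batch `x_fake` has the same shape as the input, and `avg_score` is the ensemble's mean probability that each sample is real; in the actual model, each discriminator would also be trained on its own real/fake batches and the averaged feedback would drive the generator's adversarial loss.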

Citation (APA)

Imran, A. A. Z., & Terzopoulos, D. (2021). Multi-adversarial variational autoencoder nets for simultaneous image generation and classification. In Advances in Intelligent Systems and Computing (Vol. 1232, pp. 249–271). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-15-6759-9_11
