Deep Generative Models for Image Generation: A Practical Comparison Between Variational Autoencoders and Generative Adversarial Networks

14 citations · 25 Mendeley readers

Abstract

Deep learning models can achieve impressive performance in supervised learning, but less so in unsupervised settings. In image generation, for example, there is no concrete target vector. Generative models have proven useful for this kind of problem. In this paper, we compare two types of generative models: Generative Adversarial Networks (GANs) and Variational Autoencoders (VAEs). We apply both methods to different data sets to highlight their differences and to assess their capabilities and limits. We find that, while VAEs are easier and faster to train, their outputs are generally blurrier than the images generated by GANs; the latter are more realistic but noisier.
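To make the comparison in the abstract concrete, the two model families optimize quite different objectives: a VAE minimizes a reconstruction error plus a KL-divergence regularizer (the negative ELBO), while a GAN pits a generator against a discriminator in an adversarial game. The sketch below is illustrative only and is not taken from the paper; the function names and toy inputs are our own, and the losses are written in plain NumPy under the usual simplifying assumptions (Gaussian posterior with diagonal covariance for the VAE, sigmoid-output discriminator for the GAN).

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """Negative ELBO for a VAE: reconstruction error plus the KL
    divergence of the approximate posterior N(mu, diag(exp(log_var)))
    from the standard normal prior N(0, I)."""
    recon = np.sum((x - x_recon) ** 2)
    kl = -0.5 * np.sum(1.0 + log_var - mu ** 2 - np.exp(log_var))
    return recon + kl

def gan_losses(d_real, d_fake):
    """Standard GAN losses (non-saturating generator variant), given
    discriminator probabilities d_real = D(x) and d_fake = D(G(z))."""
    eps = 1e-12  # numerical guard against log(0)
    d_loss = -np.mean(np.log(d_real + eps) + np.log(1.0 - d_fake + eps))
    g_loss = -np.mean(np.log(d_fake + eps))
    return d_loss, g_loss

# Toy values: a near-perfect reconstruction, posterior equal to the prior
# (so the KL term vanishes), and a discriminator that favors real samples.
x, x_recon = np.array([1.0, 0.0]), np.array([0.9, 0.1])
print(vae_loss(x, x_recon, np.zeros(2), np.zeros(2)))
print(gan_losses(np.array([0.9]), np.array([0.2])))
```

With the posterior matching the prior, only the reconstruction term remains, which mirrors the paper's observation that the VAE objective directly rewards pixel-wise fidelity (hence blurriness), whereas the GAN generator is rewarded only for fooling the discriminator.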

Citation (APA)

El-Kaddoury, M., Mahmoudi, A., & Himmi, M. M. (2019). Deep Generative Models for Image Generation: A Practical Comparison Between Variational Autoencoders and Generative Adversarial Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11557 LNCS, pp. 1–8). Springer Verlag. https://doi.org/10.1007/978-3-030-22885-9_1
