Enhanced Unsupervised Image Generation using GAN based Convolutional Nets

  • Bai, D. M. R., Sreedevi, J., & Pragna, B.

Abstract

Generative Adversarial Networks (GANs) use deep learning methods such as neural networks for generative modeling. Neural style transfer and facial character generation for anime images have previously been implemented with GAN methods but did not yield promising output. In this work, image processing is applied to the datasets alongside the training of the GAN system. The problem of using a GAN to generate specific images is addressed with a clean, problem-specific dataset for anime facial character generation. Modeling is done empirically with convolutional neural networks and GANs. Neural style transfer is performed and anime characters are generated automatically at high resolution; the model tackles earlier limitations by progressively increasing the resolution of both the generated images and the structural conditions during training. The model can be used to create unique anime characters, its output can serve as inspiration for artists and graphic designers, and it can power style-transfer filters in popular apps such as Snapchat. Evaluation and result analysis show that the model is stable and produces high-quality images.
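The abstract does not include an implementation, so the following is a minimal sketch of the kind of GAN-based convolutional setup it describes: a DCGAN-style generator and discriminator trained adversarially on anime face crops. The layer widths, the 100-dimensional noise vector, and the fixed 64x64 output resolution are illustrative assumptions, not details taken from the paper (which describes progressively increasing resolution during training rather than a fixed size).

```python
# Illustrative DCGAN-style sketch in PyTorch; architecture details are assumptions,
# not taken from the paper. Generates 64x64 RGB images from a 100-dim noise vector.
import torch
import torch.nn as nn

class Generator(nn.Module):
    def __init__(self, z_dim=100, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(z_dim, ch * 8, 4, 1, 0, bias=False),   # 4x4
            nn.BatchNorm2d(ch * 8), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 8, ch * 4, 4, 2, 1, bias=False),  # 8x8
            nn.BatchNorm2d(ch * 4), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 4, ch * 2, 4, 2, 1, bias=False),  # 16x16
            nn.BatchNorm2d(ch * 2), nn.ReLU(True),
            nn.ConvTranspose2d(ch * 2, ch, 4, 2, 1, bias=False),      # 32x32
            nn.BatchNorm2d(ch), nn.ReLU(True),
            nn.ConvTranspose2d(ch, 3, 4, 2, 1, bias=False),           # 64x64
            nn.Tanh(),
        )

    def forward(self, z):
        # Reshape the noise vector to a 1x1 spatial map and upsample to an image.
        return self.net(z.view(z.size(0), -1, 1, 1))

class Discriminator(nn.Module):
    def __init__(self, ch=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, ch, 4, 2, 1, bias=False), nn.LeakyReLU(0.2, True),  # 32x32
            nn.Conv2d(ch, ch * 2, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch * 2), nn.LeakyReLU(0.2, True),                 # 16x16
            nn.Conv2d(ch * 2, ch * 4, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch * 4), nn.LeakyReLU(0.2, True),                 # 8x8
            nn.Conv2d(ch * 4, ch * 8, 4, 2, 1, bias=False),
            nn.BatchNorm2d(ch * 8), nn.LeakyReLU(0.2, True),                 # 4x4
            nn.Conv2d(ch * 8, 1, 4, 1, 0, bias=False),                       # 1x1 logit
        )

    def forward(self, x):
        return self.net(x).view(-1)

def train_step(G, D, real, opt_g, opt_d, z_dim=100):
    """One adversarial update: D learns real vs. fake, then G learns to fool D."""
    bce = nn.BCEWithLogitsLoss()
    b = real.size(0)
    fake = G(torch.randn(b, z_dim, device=real.device))

    # Discriminator step (fake images detached so G is not updated here).
    opt_d.zero_grad()
    d_loss = bce(D(real), torch.ones(b, device=real.device)) + \
             bce(D(fake.detach()), torch.zeros(b, device=real.device))
    d_loss.backward()
    opt_d.step()

    # Generator step: push D's output on fakes toward the "real" label.
    opt_g.zero_grad()
    g_loss = bce(D(fake), torch.ones(b, device=real.device))
    g_loss.backward()
    opt_g.step()
    return d_loss.item(), g_loss.item()
```

In this sketch the discriminator outputs raw logits and `BCEWithLogitsLoss` supplies the sigmoid, and the fake batch is detached during the discriminator update so only the generator step backpropagates through G; a progressive-growing variant, as the abstract suggests, would instead add higher-resolution layers to both networks as training advances.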

Cite

CITATION STYLE

APA

Bai, D. M. R., Sreedevi, Mrs. J., & Pragna, Ms. B. (2020). Enhanced Unsupervised Image Generation using GAN based Convolutional Nets. International Journal of Recent Technology and Engineering (IJRTE), 8(6), 5312–5316. https://doi.org/10.35940/ijrte.f9856.038620
