Improved artistic images generation using transfer learning


Abstract

Existing methods for photographic image generation have defects in both content preservation and style transformation, which limit the quality of the generated images. This paper attempts to improve the quality of photographic images generated from input content images and style images. Transfer learning with the VGG-19 model, a convolutional neural network (CNN), was adopted to extract features from the input style image and apply them to the content image. Then, a loss function was defined based on the ImageNet model and used to capture the difference between the images generated from the content image and the style image. In addition, the VGG-19 model was trained on the very large ImageNet database, aiming to improve its ability to identify image features of any dimension. Finally, several experiments were conducted to compare our method with several existing methods. The results show that the photographic images generated by our method retain the features of the input content and style images, and minimize the discrepancy between content and style.
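The loss described above follows the general pattern of CNN-based style transfer: a content term compares feature maps of the generated and content images, while a style term compares Gram matrices of feature maps from the generated and style images. A minimal NumPy sketch of these two loss terms is given below; the feature-map shapes and the weighting constants `alpha` and `beta` are illustrative assumptions, not values taken from the paper:

```python
import numpy as np

def gram_matrix(features):
    # features: (channels, height, width) feature map from one CNN layer.
    # The Gram matrix captures channel-wise correlations, which encode style.
    c, h, w = features.shape
    f = features.reshape(c, h * w)
    return f @ f.T / (c * h * w)

def content_loss(gen_feat, content_feat):
    # Mean squared difference between raw feature maps preserves content.
    return float(np.mean((gen_feat - content_feat) ** 2))

def style_loss(gen_feat, style_feat):
    # Mean squared difference between Gram matrices matches style statistics.
    g_gen, g_style = gram_matrix(gen_feat), gram_matrix(style_feat)
    return float(np.mean((g_gen - g_style) ** 2))

def total_loss(gen_feat, content_feat, style_feat, alpha=1.0, beta=1e3):
    # Weighted sum; alpha and beta trade off content fidelity against style.
    return (alpha * content_loss(gen_feat, content_feat)
            + beta * style_loss(gen_feat, style_feat))
```

In a full pipeline, `gen_feat`, `content_feat`, and `style_feat` would be feature maps extracted from intermediate VGG-19 layers, and the generated image would be optimized to minimize `total_loss`.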

Citation (APA)

Premamayudu, B., Subbarao, P., & Rao, K. V. (2019). Improved artistic images generation using transfer learning. Revue d’Intelligence Artificielle, 33(4), 299–304. https://doi.org/10.18280/ria.330406
