Generating automobile images dynamically from text description


Abstract

Synthesizing a realistic image from a matching textual description is a challenge that has recently attracted attention in the artificial intelligence research community. Text-to-image generation is the problem of producing, for a given text input, an image that matches the description. A relatively new class of deep neural networks, generative adversarial networks (GANs), has produced compelling results in learning textual features and generating high-resolution images. The main aim of this work is to generate an automobile image from a given text input using a GAN and to manipulate the automobile's colour using a text-adaptive discriminator. The work involves creating a detailed text description of each car image to train the GAN model to produce images.
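The data flow the abstract describes, a generator conditioned on a text embedding and a text-adaptive discriminator that scores an (image, text) pair jointly, can be sketched in miniature. This is a toy illustration with made-up dimensions and fixed random weights, not the paper's actual architecture; all names (`generate`, `discriminate`, `W_text`, etc.) are hypothetical.

```python
import random

random.seed(0)

def rand_matrix(rows, cols):
    return [[random.uniform(-1, 1) for _ in range(cols)] for _ in range(rows)]

def matvec(m, v):
    return [sum(w * x for w, x in zip(row, v)) for row in m]

# Hypothetical toy sizes; the paper does not specify its dimensions here.
TEXT_DIM, NOISE_DIM, IMG_DIM = 8, 4, 16

# Stand-in "text encoder": a fixed random projection instead of a learned embedding.
W_text = rand_matrix(TEXT_DIM, TEXT_DIM)

# Generator: maps [text embedding ; noise] -> flattened "image" vector.
W_gen = rand_matrix(IMG_DIM, TEXT_DIM + NOISE_DIM)

# Text-adaptive discriminator: scores the image TOGETHER with the text,
# so it can penalize realistic images that mismatch the caption (e.g. wrong colour).
W_disc = rand_matrix(1, IMG_DIM + TEXT_DIM)

def generate(text_vec):
    emb = matvec(W_text, text_vec)
    noise = [random.gauss(0, 1) for _ in range(NOISE_DIM)]
    return matvec(W_gen, emb + noise)  # list concatenation = conditioning on text

def discriminate(image_vec, text_vec):
    emb = matvec(W_text, text_vec)
    return matvec(W_disc, image_vec + emb)[0]  # joint (image, text) score

caption = [random.uniform(0, 1) for _ in range(TEXT_DIM)]  # stand-in for "a red sedan"
fake_image = generate(caption)
score = discriminate(fake_image, caption)
print(len(fake_image), type(score).__name__)
```

In a real GAN these matrices would be trained adversarially: the discriminator learns to separate real matched pairs from fakes and mismatches, while the generator learns to fool it, which is what lets text edits (such as changing the colour word) steer the output image.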


CITATION STYLE

APA

Sindhu, N., & Mamatha, H. R. (2021). Generating automobile images dynamically from text description. In Lecture Notes on Data Engineering and Communications Technologies (Vol. 53, pp. 197–211). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-15-5258-8_21
