A Novel Image Captioning Method Based on Generative Adversarial Networks

Abstract

Although image captioning methods based on RNNs have made great progress in recent years, their outputs often lack variability and overlook minor details in the image. In this paper, a novel image captioning method based on Generative Adversarial Networks is proposed, which improves the naturalness and diversity of image descriptions. In this method, a matcher is added to the generator to capture image features that do not appear in the reference descriptions, the generator produces descriptions conditioned on the image, and a discriminator assesses how well a description fits the visual content. It is noteworthy that training a sequence generator is nontrivial, since sampling discrete words is non-differentiable. Experiments on MSCOCO and Flickr30k show that the method performed competitively against real people in a user study and outperformed other methods on various tasks.
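
To make the generator/discriminator setup concrete, below is a minimal sketch of GAN-based caption generation with a policy-gradient update, written in PyTorch. The module names, layer sizes, and the REINFORCE-style reward are illustrative assumptions, not the authors' exact architecture; the sketch only shows why training the sequence generator is nontrivial (sampled tokens block ordinary backpropagation, so the discriminator's score is used as a reward instead).

```python
# Sketch only: a conditional GAN for captioning, NOT the paper's exact model.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """LSTM caption generator conditioned on a CNN image feature."""
    def __init__(self, vocab_size, img_dim=2048, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hidden_dim)
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTMCell(embed_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, img_feat, max_len=16, bos_id=1):
        B = img_feat.size(0)
        h = torch.tanh(self.img_proj(img_feat))  # init hidden state from image
        c = torch.zeros_like(h)
        tok = torch.full((B,), bos_id, dtype=torch.long, device=img_feat.device)
        tokens, log_probs = [], []
        for _ in range(max_len):
            h, c = self.lstm(self.embed(tok), (h, c))
            dist = torch.distributions.Categorical(logits=self.out(h))
            tok = dist.sample()  # discrete sampling: non-differentiable step
            tokens.append(tok)
            log_probs.append(dist.log_prob(tok))
        return torch.stack(tokens, 1), torch.stack(log_probs, 1)

class Discriminator(nn.Module):
    """Scores how well a caption fits the image (1 = good fit)."""
    def __init__(self, vocab_size, img_dim=2048, embed_dim=256, hidden_dim=512):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.score = nn.Linear(hidden_dim + img_dim, 1)

    def forward(self, img_feat, captions):
        _, (h, _) = self.lstm(self.embed(captions))
        joint = torch.cat([h[-1], img_feat], dim=1)  # fuse caption and image
        return torch.sigmoid(self.score(joint)).squeeze(1)

def generator_step(gen, disc, img_feat, opt):
    """One REINFORCE update: discriminator score acts as the reward."""
    captions, log_probs = gen(img_feat)
    reward = disc(img_feat, captions).detach()       # no grad through reward
    loss = -(log_probs.sum(1) * reward).mean()       # policy-gradient loss
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

In this sketch the matcher described in the abstract would supply additional image features to the generator; it is omitted here because its exact form is specific to the paper.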

Citation (APA)

Fan, Y., Xu, J., Sun, Y., & Wang, Y. (2019). A Novel Image Captioning Method Based on Generative Adversarial Networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11730 LNCS, pp. 281–292). Springer Verlag. https://doi.org/10.1007/978-3-030-30490-4_23