Unsupervised Style Control for Image Captioning

Abstract

We propose a novel unsupervised image captioning method. Image captioning draws on two fields of deep learning: natural language processing and computer vision. The excessive pursuit of evaluation scores has made the captions generated by existing models stylistically monotonous, which makes it difficult to meet people's demand for vivid, stylized image captions. We therefore propose an image captioning model that combines text style transfer with image emotion recognition, so that the model can better understand images and generate controllable stylized captions. The image emotion recognition module automatically judges the emotion conveyed by an image, helping the model better understand the image content, while the text style transfer method controls the description, producing captions that meet people's expectations. To our knowledge, this is the first work to use both image emotion recognition and text style control.
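
The abstract outlines a pipeline in which an emotion recognized from the image conditions the style of the generated caption. Below is a minimal sketch of such a design, assuming an emotion classifier over precomputed image features and a style-conditioned LSTM decoder; all module names, dimensions, the emotion label set, and the <bos> token id are illustrative assumptions, since the paper's actual architecture is not given here.

```python
# Minimal sketch of an emotion-controlled captioning pipeline.
# Everything below (label set, dimensions, module structure) is an
# illustrative assumption, not the paper's actual implementation.
import torch
import torch.nn as nn

EMOTIONS = ["positive", "neutral", "negative"]  # assumed label set
VOCAB_SIZE, FEAT_DIM, HIDDEN = 10_000, 2048, 512


class EmotionRecognizer(nn.Module):
    """Classifies precomputed image features into an emotion class."""

    def __init__(self):
        super().__init__()
        self.head = nn.Linear(FEAT_DIM, len(EMOTIONS))

    def forward(self, feats):                    # feats: (B, FEAT_DIM)
        return self.head(feats).argmax(dim=-1)   # (B,) emotion ids


class StyledCaptioner(nn.Module):
    """LSTM decoder conditioned on image features plus a style
    embedding, standing in for the text style control component."""

    def __init__(self):
        super().__init__()
        self.img_proj = nn.Linear(FEAT_DIM, HIDDEN)
        self.style_emb = nn.Embedding(len(EMOTIONS), HIDDEN)
        self.word_emb = nn.Embedding(VOCAB_SIZE, HIDDEN)
        self.lstm = nn.LSTM(HIDDEN, HIDDEN, batch_first=True)
        self.out = nn.Linear(HIDDEN, VOCAB_SIZE)

    def forward(self, feats, style_id, max_len=20):
        B = feats.size(0)
        # Initialize the decoder state with image content + style.
        h = (self.img_proj(feats) + self.style_emb(style_id)).unsqueeze(0)
        c = torch.zeros_like(h)
        tok = torch.zeros(B, 1, dtype=torch.long)  # assumed <bos> id = 0
        out_ids = []
        for _ in range(max_len):                   # greedy decoding
            emb = self.word_emb(tok)
            y, (h, c) = self.lstm(emb, (h, c))
            tok = self.out(y).argmax(dim=-1)
            out_ids.append(tok)
        return torch.cat(out_ids, dim=1)           # (B, max_len) token ids


# The predicted emotion id doubles as the style id fed to the decoder.
feats = torch.randn(2, FEAT_DIM)
style = EmotionRecognizer()(feats)
caps = StyledCaptioner()(feats, style)
print(caps.shape)  # torch.Size([2, 20])
```

Because the style id is just an input to the decoder, a user can also override the predicted emotion to steer the caption toward a desired style, which is the sense in which the generated captions are controllable.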

Citation (APA)

Tian, J., Yang, Z., & Shi, S. (2022). Unsupervised Style Control for Image Captioning. In Communications in Computer and Information Science (Vol. 1628 CCIS, pp. 413–424). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-981-19-5194-7_31
