Image captioning with sentiment terms via weakly-supervised sentiment dataset

11 citations · 30 Mendeley readers

Abstract

The image captioning task has become a highly competitive research area thanks to the successful application of convolutional and recurrent neural networks, particularly the long short-term memory (LSTM) architecture. Its primary focus, however, has been the factual description of images: the objects, their movements, and their relations. While this focus has proven effective, describing images with nonfactual elements, namely the sentiments of the images expressed via adjectives, has been largely neglected. We address this issue by fine-tuning an additional convolutional neural network devoted solely to sentiment, where the sentiment dataset is built with a data-driven, multi-label approach. Our experimental results show that our method generates image captions with sentiment terms that are more compatible with the images than captions relying solely on features trained for object classification, while still preserving the semantics.
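To make the weakly-supervised, multi-label setup concrete, below is a minimal PyTorch sketch of fine-tuning a CNN to predict sentiment terms for an image. Because each image may carry several sentiment adjectives at once, the head uses a sigmoid per term with binary cross-entropy rather than a softmax. The backbone choice, vocabulary size, and training loop here are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch: fine-tune a CNN head for multi-label sentiment-term
# prediction. All model names and sizes are assumptions, not the paper's setup.
import torch
import torch.nn as nn
from torchvision import models

NUM_SENTIMENT_TERMS = 1024  # assumed vocabulary of sentiment adjectives

# Start from an ImageNet-pretrained backbone and swap the classifier
# for a multi-label sentiment head.
backbone = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, NUM_SENTIMENT_TERMS)

# One independent sigmoid per sentiment term, trained with BCE, so the
# network can mark several terms as present for the same image.
criterion = nn.BCEWithLogitsLoss()
optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)

def train_step(images: torch.Tensor, term_labels: torch.Tensor) -> float:
    """One fine-tuning step. `term_labels` is a {0,1} matrix of shape
    (batch, NUM_SENTIMENT_TERMS) from the weakly-supervised dataset."""
    optimizer.zero_grad()
    logits = backbone(images)
    loss = criterion(logits, term_labels)
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    # Dummy batch just to show the shapes involved.
    images = torch.randn(4, 3, 224, 224)
    labels = torch.zeros(4, NUM_SENTIMENT_TERMS)
    labels[:, :3] = 1.0  # pretend the first three terms apply
    print(f"loss: {train_step(images, labels):.4f}")
```

At caption-generation time, the predicted sentiment features would complement the object-classification features feeding the LSTM; how the two streams are combined is described in the paper itself.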

Citation (APA)

Shin, A., Ushiku, Y., & Harada, T. (2016). Image captioning with sentiment terms via weakly-supervised sentiment dataset. In British Machine Vision Conference 2016, BMVC 2016 (Vol. 2016-September, pp. 53.1-53.12). British Machine Vision Conference, BMVC. https://doi.org/10.5244/C.30.53
