Phi-LSTM: A phrase-based hierarchical LSTM model for image captioning

Abstract

A picture is worth a thousand words. Only recently, however, have we seen success stories in the understanding of visual scenes: models that can detect and name objects, describe their attributes, and recognize their relationships and interactions. In this paper, we propose a phrase-based hierarchical Long Short-Term Memory (phi-LSTM) model to generate image descriptions. The proposed model encodes a sentence as a sequence of phrases and words combined, rather than a sequence of words alone as in conventional solutions. The two levels of this model are dedicated to (i) learning to generate image-relevant noun phrases, and (ii) producing an appropriate image description from those phrases and the other words in the corpus. Adopting a convolutional neural network to learn image features and the LSTM to learn word sequences within a sentence, the proposed model achieves better or competitive results compared with state-of-the-art models on the Flickr8k and Flickr30k datasets.
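
To make the two-level idea concrete, below is a minimal sketch in PyTorch of how such a model could be wired together: a lower-level LSTM that encodes or generates noun phrases conditioned on CNN image features, and an upper-level LSTM that consumes a mixed sequence of word embeddings and phrase encodings. All class names, dimensions, and the interleaving scheme are illustrative assumptions, not the authors' implementation; consult the paper for the actual phi-LSTM formulation.

    import torch
    import torch.nn as nn

    class PhraseLevelLSTM(nn.Module):
        """Lower level: generates image-relevant noun phrases (illustrative)."""
        def __init__(self, vocab_size, embed_dim=256, hidden_dim=512, img_dim=4096):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.img_proj = nn.Linear(img_dim, embed_dim)  # project CNN features
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, img_feats, phrase_tokens):
            # Prepend the projected image feature as a pseudo first token,
            # so phrase generation is conditioned on the image.
            img = self.img_proj(img_feats).unsqueeze(1)        # (B, 1, E)
            words = self.embed(phrase_tokens)                  # (B, T, E)
            h, (hn, _) = self.lstm(torch.cat([img, words], dim=1))
            logits = self.out(h[:, 1:, :])   # next-word logits for the phrase
            return logits, hn.squeeze(0)     # final hidden state = phrase encoding

    class SentenceLevelLSTM(nn.Module):
        """Upper level: composes phrase encodings and ordinary words into a caption."""
        def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, embed_dim)
            self.phrase_proj = nn.Linear(hidden_dim, embed_dim)  # phrase state -> input space
            self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
            self.out = nn.Linear(hidden_dim, vocab_size)

        def forward(self, mixed_inputs):
            # mixed_inputs: (B, T, E), each step either a word embedding or a
            # projected phrase encoding, already interleaved by the caller.
            h, _ = self.lstm(mixed_inputs)
            return self.out(h)

    if __name__ == "__main__":
        V, B = 1000, 2
        img = torch.randn(B, 4096)                    # e.g. VGG fc7 features
        phrase = torch.randint(0, V, (B, 3))          # e.g. "a brown dog"
        plstm, slstm = PhraseLevelLSTM(V), SentenceLevelLSTM(V)
        _, phrase_enc = plstm(img, phrase)            # (B, 512)
        words = slstm.embed(torch.randint(0, V, (B, 4)))
        seq = torch.cat([slstm.phrase_proj(phrase_enc).unsqueeze(1), words], dim=1)
        logits = slstm(seq)                           # (B, 5, V)

The key design point this sketch tries to capture is that the sentence-level sequence is shorter than a plain word sequence: each noun phrase is compressed into a single input step via its phrase encoding, while non-phrase words enter as ordinary embeddings.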

Citation (APA)

Tan, Y. H., & Chan, C. S. (2017). Phi-LSTM: A phrase-based hierarchical LSTM model for image captioning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 10115 LNCS, pp. 101–117). Springer Verlag. https://doi.org/10.1007/978-3-319-54193-8_7
