Abstract
In this paper we introduce the task of abstractive caption or scene description compression. We describe a parallel dataset derived from the FLICKR30K and MSCOCO datasets. With this data we train an attention-based bidirectional LSTM recurrent neural network and compare the quality of its output to that of a Phrase-based Machine Translation (PBMT) model and a human-generated short description. We perform an extensive evaluation using automatic measures and human judgements. We show that the neural model outperforms the PBMT model. Additionally, we show that automatic measures are not well suited for evaluating this text-to-text generation task.
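To make the abstract's architecture concrete, the sketch below shows a generic attention-based bidirectional LSTM encoder-decoder for compressing a long caption into a shorter one. It is a minimal illustration written in PyTorch; it is not the authors' implementation, and all module names, layer sizes, and the toy usage at the bottom are hypothetical.

```python
# Illustrative sketch only: a bidirectional LSTM encoder plus an attentive LSTM
# decoder, the general architecture named in the abstract. Hyperparameters and
# names are assumptions, not taken from the paper.
import torch
import torch.nn as nn
import torch.nn.functional as F


class BiLSTMEncoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        # The bidirectional LSTM reads the full source caption in both directions.
        self.lstm = nn.LSTM(emb_dim, hidden_dim, bidirectional=True, batch_first=True)

    def forward(self, src):                      # src: (batch, src_len)
        outputs, _ = self.lstm(self.embed(src))  # (batch, src_len, 2*hidden_dim)
        return outputs


class AttentiveDecoder(nn.Module):
    def __init__(self, vocab_size, emb_dim=128, hidden_dim=256):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.cell = nn.LSTMCell(emb_dim + 2 * hidden_dim, hidden_dim)
        self.attn = nn.Linear(hidden_dim + 2 * hidden_dim, 1)
        self.out = nn.Linear(hidden_dim, vocab_size)
        self.hidden_dim = hidden_dim

    def forward(self, tgt, enc_outputs):         # tgt: (batch, tgt_len)
        batch, src_len, _ = enc_outputs.size()
        h = enc_outputs.new_zeros(batch, self.hidden_dim)
        c = enc_outputs.new_zeros(batch, self.hidden_dim)
        logits = []
        for t in range(tgt.size(1)):
            # Score every encoder state against the current decoder state,
            # then take the softmax-weighted context vector.
            scores = self.attn(torch.cat(
                [h.unsqueeze(1).expand(-1, src_len, -1), enc_outputs],
                dim=-1)).squeeze(-1)
            context = (F.softmax(scores, dim=-1).unsqueeze(-1) * enc_outputs).sum(dim=1)
            h, c = self.cell(
                torch.cat([self.embed(tgt[:, t]), context], dim=-1), (h, c))
            logits.append(self.out(h))
        return torch.stack(logits, dim=1)        # (batch, tgt_len, vocab_size)


# Toy usage: teacher-forced training step mapping a longer caption to a shorter one.
if __name__ == "__main__":
    vocab = 1000
    enc, dec = BiLSTMEncoder(vocab), AttentiveDecoder(vocab)
    long_caption = torch.randint(0, vocab, (2, 20))   # two source captions (token ids)
    short_caption = torch.randint(0, vocab, (2, 8))   # two compressed targets
    logits = dec(short_caption, enc(long_caption))
    loss = F.cross_entropy(logits.reshape(-1, vocab), short_caption.reshape(-1))
    print(loss.item())
```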
Wubben, S., Krahmer, E., Van Den Bosch, A., & Verberne, S. (2016). Abstractive compression of captions with attentive recurrent neural networks. In INLG 2016 - 9th International Natural Language Generation Conference, Proceedings of the Conference (pp. 41–50). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w16-6608