We address the problem of end-to-end visual storytelling: given a photo album, our model first selects the most representative (summary) photos and then composes a natural language story for the album. For this task, we use the Visual Storytelling dataset and a model composed of three hierarchically-attentive Recurrent Neural Networks (RNNs) that encode the album photos, select summary photos, and compose the story. Automatic and human evaluations show that our model outperforms baselines on selection, generation, and retrieval.
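The selection step in the pipeline above (attend over encoded photos, then keep the most-attended ones as the album summary) can be sketched with simple dot-product attention pooling. This is a toy NumPy illustration, not the paper's implementation: the random features stand in for CNN+RNN photo encodings, and `summary_query`, `attend`, and the top-k cutoff are hypothetical names and choices for exposition only.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-d score vector
    e = np.exp(x - x.max())
    return e / e.sum()

def attend(query, keys):
    # Dot-product attention: a weight per key (photo) given a query
    return softmax(keys @ query)

# Toy album: 5 photos, each a 4-d feature vector
# (stand-in for the learned photo encodings)
rng = np.random.default_rng(0)
photos = rng.standard_normal((5, 4))

# A learned query would score each photo; here it is random for illustration
summary_query = rng.standard_normal(4)
weights = attend(summary_query, photos)

# Keep the 2 most-attended photos as the album "summary"
summary_idx = np.argsort(weights)[::-1][:2]
print(sorted(summary_idx.tolist()))
```

In the actual model the attention is learned end-to-end and interacts with the story decoder, but the core operation, scoring photos against a query and normalizing with a softmax, has this shape.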
Yu, L., Bansal, M., & Berg, T. L. (2017). Hierarchically-attentive RNN for album summarization and storytelling. In EMNLP 2017 - Conference on Empirical Methods in Natural Language Processing, Proceedings (pp. 966–971). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/d17-1101