Previous work on visual storytelling has mainly explored the image sequence as the sole evidence for story generation, neglecting textual evidence that could guide the process. Motivated by the human storytelling process, in which familiar images recall previously heard or told stories, we exploit textual evidence from similar images to help generate coherent and meaningful stories. To select images that are likely to provide such textual experience, we propose a two-step ranking method based on image object recognition. To make use of the retrieved textual information, we design an extended Seq2Seq model with a two-channel encoder and attention. Experiments on the VIST dataset show that our method outperforms state-of-the-art baseline models without heavy engineering.
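As a rough illustration of the "two-channel encoder and attention" idea described above, the sketch below encodes the image sequence features and the retrieved textual evidence in separate channels and lets a decoder attend over both at each step. All module names, dimensions, and the channel split are assumptions made for illustration; this is a minimal sketch, not the authors' released implementation.

```python
# Minimal sketch (assumed architecture) of a two-channel encoder with attention
# for visual storytelling: one channel for image-sequence features, one for
# textual evidence retrieved from similar images.
import torch
import torch.nn as nn


class TwoChannelEncoder(nn.Module):
    """Encodes (a) CNN features of the image sequence and (b) retrieved
    textual evidence into two separate memory banks."""

    def __init__(self, img_dim=2048, vocab_size=10000, emb_dim=256, hid_dim=512):
        super().__init__()
        self.img_proj = nn.Linear(img_dim, hid_dim)
        self.img_rnn = nn.GRU(hid_dim, hid_dim, batch_first=True)
        self.txt_emb = nn.Embedding(vocab_size, emb_dim)
        self.txt_rnn = nn.GRU(emb_dim, hid_dim, batch_first=True)

    def forward(self, img_feats, evidence_tokens):
        # img_feats: (B, num_images, img_dim); evidence_tokens: (B, T_evidence)
        img_mem, img_state = self.img_rnn(self.img_proj(img_feats))
        txt_mem, _ = self.txt_rnn(self.txt_emb(evidence_tokens))
        return img_mem, txt_mem, img_state


class AttentiveDecoder(nn.Module):
    """GRU decoder that attends over both channels at every generation step."""

    def __init__(self, vocab_size=10000, emb_dim=256, hid_dim=512):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim)
        self.attn_img = nn.MultiheadAttention(hid_dim, num_heads=1, batch_first=True)
        self.attn_txt = nn.MultiheadAttention(hid_dim, num_heads=1, batch_first=True)
        self.rnn = nn.GRU(emb_dim + 2 * hid_dim, hid_dim, batch_first=True)
        self.out = nn.Linear(hid_dim, vocab_size)

    def forward(self, prev_tokens, state, img_mem, txt_mem):
        # prev_tokens: (B, 1) previous word ids; state: (1, B, hid_dim)
        query = state.transpose(0, 1)                      # (B, 1, hid_dim)
        img_ctx, _ = self.attn_img(query, img_mem, img_mem)
        txt_ctx, _ = self.attn_txt(query, txt_mem, txt_mem)
        rnn_in = torch.cat([self.emb(prev_tokens), img_ctx, txt_ctx], dim=-1)
        output, state = self.rnn(rnn_in, state)
        return self.out(output.squeeze(1)), state


if __name__ == "__main__":
    enc, dec = TwoChannelEncoder(), AttentiveDecoder()
    img_feats = torch.randn(2, 5, 2048)                    # 5 images per story
    evidence = torch.randint(0, 10000, (2, 40))            # retrieved text tokens
    img_mem, txt_mem, state = enc(img_feats, evidence)
    logits, state = dec(torch.zeros(2, 1, dtype=torch.long), state, img_mem, txt_mem)
    print(logits.shape)                                    # (2, vocab_size)
```

In this sketch the textual channel simply concatenates the retrieved sentences into one token sequence; how the paper actually fuses the retrieved evidence (and how the two-step ranking selects it) is described in the full text rather than the abstract.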
CITATION STYLE
Li, T., & Li, S. (2019). Incorporating textual evidence in visual storytelling. In DSNNLG 2019 - 1st Workshop on Discourse Structure in Neural NLG, Proceedings of the Workshop (pp. 13–17). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/w19-8102