Stories for Images-in-Sequence by Using Visual and Narrative Components

Abstract

Recent research in AI has been moving towards generating narrative stories about visual scenes, which has the potential to achieve a more human-like understanding than basic description generation for images-in-sequence. In this work, we propose a solution for generating stories for images-in-sequence based on the Sequence-to-Sequence model. As a novelty, our encoder is composed of two separate encoders: one that models the behaviour of the image sequence, and another that models the sentence-story generated for the previous image in the sequence. The image-sequence encoder captures the temporal dependencies between the image sequence and the sentence-story, while the previous-sentence-story encoder yields a better story flow. Our solution generates long, human-like stories that not only describe the visual context of the image sequence but also contain narrative and evaluative language. The obtained results were confirmed by manual human evaluation.
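The abstract describes a dual-encoder Sequence-to-Sequence architecture: one encoder over the image sequence, one over the previously generated sentence-story, with both states conditioning the decoder that produces the next sentence-story. Below is a minimal sketch of that idea, assuming PyTorch, GRU encoders, and precomputed per-image CNN features; all module names, dimensions, and the state-fusion scheme are illustrative assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn


class DualEncoderStoryteller(nn.Module):
    """Sketch of a two-encoder seq2seq story generator (assumed details)."""

    def __init__(self, vocab_size, img_feat_dim=2048, embed_dim=256, hidden_dim=512):
        super().__init__()
        # Encoder 1: models the behaviour of the image sequence
        # (one precomputed feature vector per image).
        self.image_encoder = nn.GRU(img_feat_dim, hidden_dim, batch_first=True)
        # Encoder 2: models the sentence-story generated for the previous image.
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        self.prev_sentence_encoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        # Decoder: generates the current sentence-story, initialised from
        # a fusion of both encoder states.
        self.decoder = nn.GRU(embed_dim, hidden_dim, batch_first=True)
        self.fuse = nn.Linear(2 * hidden_dim, hidden_dim)
        self.output = nn.Linear(hidden_dim, vocab_size)

    def forward(self, image_feats, prev_sentence, target_sentence):
        # image_feats:     (batch, num_images, img_feat_dim)
        # prev_sentence:   (batch, prev_len)  token ids of the previous sentence-story
        # target_sentence: (batch, tgt_len)   token ids for teacher forcing
        _, img_state = self.image_encoder(image_feats)             # (1, batch, hidden)
        _, prev_state = self.prev_sentence_encoder(
            self.embedding(prev_sentence))                         # (1, batch, hidden)
        # Combine the two encoder states into the decoder's initial state.
        init_state = torch.tanh(
            self.fuse(torch.cat([img_state, prev_state], dim=-1)))
        dec_out, _ = self.decoder(self.embedding(target_sentence), init_state)
        return self.output(dec_out)                                # (batch, tgt_len, vocab)


# Illustrative usage with random tensors (5 images per story).
model = DualEncoderStoryteller(vocab_size=10000)
logits = model(
    torch.randn(2, 5, 2048),             # image-sequence features
    torch.randint(0, 10000, (2, 12)),    # previous sentence-story
    torch.randint(0, 10000, (2, 15)),    # current sentence-story (teacher forcing)
)
print(logits.shape)  # torch.Size([2, 15, 10000])
```

In this sketch the previous-sentence encoder is what carries story context from one image to the next; the paper's actual feature extractor, attention mechanism, and decoding strategy are not specified in the abstract and are therefore omitted here.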

Citation (APA)

Smilevski, M., Lalkovski, I., & Madjarov, G. (2018). Stories for Images-in-Sequence by Using Visual and Narrative Components. In Communications in Computer and Information Science (Vol. 940, pp. 148–159). Springer Verlag. https://doi.org/10.1007/978-3-030-00825-3_13
