RoViST: Learning Robust Metrics for Visual Storytelling


Abstract

Visual storytelling (VST) is the task of generating a story paragraph that describes a given image sequence. Most existing storytelling approaches have evaluated their models using traditional natural language generation metrics like BLEU or CIDEr. However, such n-gram-matching metrics tend to correlate poorly with human evaluation scores and do not explicitly consider other criteria necessary for storytelling, such as sentence structure or topic coherence. Moreover, a single score is not enough to assess a story, as it does not tell us what specific errors the model made. In this paper, we propose three evaluation metric sets that analyse the aspects we look for in a good story: 1) visual grounding, 2) coherence, and 3) non-redundancy. We measure the reliability of our metric sets by analysing their correlation with human judgement scores on a sample of machine stories obtained from four state-of-the-art models trained on the Visual Storytelling Dataset (VIST). Our metric sets outperform other metrics on human correlation and can serve as a learning-based evaluation metric set that is complementary to existing rule-based metrics.
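The reliability check described above, correlating a metric's scores with human judgement scores, can be sketched as follows. This is not the authors' code, and the story scores are made-up illustrative numbers; it computes Spearman rank correlation from scratch so no external libraries are needed.

```python
def rank(values):
    """Average 1-based ranks, handling ties."""
    order = sorted(range(len(values)), key=lambda i: values[i])
    ranks = [0.0] * len(values)
    i = 0
    while i < len(order):
        j = i
        # extend j over a group of tied values
        while j + 1 < len(order) and values[order[j + 1]] == values[order[i]]:
            j += 1
        avg = (i + j) / 2 + 1  # average 1-based rank for the tie group
        for k in range(i, j + 1):
            ranks[order[k]] = avg
        i = j + 1
    return ranks

def spearman(xs, ys):
    """Spearman rank correlation = Pearson correlation of the ranks."""
    rx, ry = rank(xs), rank(ys)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)

# Hypothetical metric scores vs. human ratings for five stories.
metric_scores = [0.62, 0.35, 0.80, 0.51, 0.44]
human_scores = [3.8, 2.1, 4.5, 3.2, 2.9]
print(round(spearman(metric_scores, human_scores), 3))  # 1.0: identical orderings
```

A higher Spearman correlation means the metric ranks stories closer to the way humans do, which is the criterion the paper uses to compare its metric sets against BLEU- and CIDEr-style baselines.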

Citation (APA)

Wang, E., Han, S. C., & Poon, J. (2022). RoViST: Learning Robust Metrics for Visual Storytelling. In Findings of the Association for Computational Linguistics: NAACL 2022 - Findings (pp. 2691–2702). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2022.findings-naacl.206
