Abstract
Automatic evaluation metrics for natural language generation (NLG) conventionally rely on token-level or embedding-level comparisons with text references. This differs from human language processing, in which visual imagination often improves comprehension. In this work, we propose IMAGINE, an imagination-based automatic evaluation metric for natural language generation. With the help of Stable Diffusion (Rombach et al., 2022), a state-of-the-art text-to-image generator, we automatically generate an image as the embodied imagination of a text snippet and compute the imagination similarity using contextual embeddings. Experiments spanning several text generation tasks demonstrate that adding machine-generated images via IMAGINE shows great potential for introducing multi-modal information into NLG evaluation, and improves existing automatic metrics' correlations with human similarity judgments in both reference-based and reference-free evaluation scenarios.
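To make the pipeline concrete, here is a minimal sketch of the core idea: render an image ("imagination") for each text snippet with a text-to-image model, embed both images, and score their similarity. This is not the authors' implementation; it assumes the diffusers and transformers libraries, uses CLIP image embeddings with cosine similarity as a stand-in for the paper's contextual-embedding comparison, and the model checkpoints named below are illustrative choices.

```python
import torch
from diffusers import StableDiffusionPipeline
from transformers import CLIPModel, CLIPProcessor

# Text-to-image generator for producing "imaginations" (illustrative checkpoint).
pipe = StableDiffusionPipeline.from_pretrained("runwayml/stable-diffusion-v1-5")
# CLIP vision encoder used here to embed the generated images.
clip = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

def imagine_similarity(candidate: str, reference: str) -> float:
    """Sketch of an imagination-based score: generate an image per text,
    then compare the two image embeddings with cosine similarity."""
    cand_img = pipe(candidate).images[0]   # imagination for the candidate text
    ref_img = pipe(reference).images[0]    # imagination for the reference text
    inputs = processor(images=[cand_img, ref_img], return_tensors="pt")
    with torch.no_grad():
        feats = clip.get_image_features(**inputs)
    feats = feats / feats.norm(dim=-1, keepdim=True)  # L2-normalize embeddings
    return float(feats[0] @ feats[1])                 # cosine similarity

# Reference-free use would instead compare the candidate's imagination
# against the source input's imagination, following the same recipe.
```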
Citation
Zhu, W., Wang, X. E., Yan, A., Eckstein, M., & Wang, W. Y. (2023). IMAGINE: An Imagination-Based Automatic Evaluation Metric for Natural Language Generation. In EACL 2023 - 17th Conference of the European Chapter of the Association for Computational Linguistics, Findings of EACL 2023 (pp. 93–105). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2023.findings-eacl.6