On the evaluation of vision-and-language navigation instructions


Abstract

Vision-and-Language Navigation wayfinding agents can be enhanced by exploiting automatically generated navigation instructions. However, existing instruction generators have not been comprehensively evaluated, and the automatic evaluation metrics used to develop them have not been validated. Using human wayfinders, we show that these generators perform on par with or only slightly better than a template-based generator and far worse than human instructors. Furthermore, we discover that BLEU, ROUGE, METEOR and CIDEr are ineffective for evaluating grounded navigation instructions. To improve instruction evaluation, we propose an instruction-trajectory compatibility model that operates without reference instructions. Our model shows the highest correlation with human wayfinding outcomes when scoring individual instructions. For ranking instruction generation systems, if reference instructions are available we recommend using SPICE.
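To make concrete why surface-overlap metrics can fail here, below is a minimal sketch (not the paper's code) of a simplified BLEU score: two instructions that describe the same route with different wording share almost no n-grams, so the metric scores them near zero. The example sentences and the simplified scoring (single reference, no smoothing) are illustrative assumptions.

```python
# Simplified BLEU: geometric mean of clipped n-gram precisions times a
# brevity penalty. Illustrates how n-gram overlap metrics can mis-score
# grounded navigation instructions that paraphrase the same route.
from collections import Counter
import math

def ngrams(tokens, n):
    return [tuple(tokens[i:i + n]) for i in range(len(tokens) - n + 1)]

def bleu(candidate, reference, max_n=2):
    """Single-reference, unsmoothed BLEU up to max_n-grams."""
    precisions = []
    for n in range(1, max_n + 1):
        cand, ref = Counter(ngrams(candidate, n)), Counter(ngrams(reference, n))
        overlap = sum((cand & ref).values())   # clipped n-gram matches
        precisions.append(overlap / max(sum(cand.values()), 1))
    if min(precisions) == 0:
        return 0.0
    brevity = min(1.0, math.exp(1 - len(reference) / len(candidate)))
    return brevity * math.exp(sum(math.log(p) for p in precisions) / max_n)

# Hypothetical instructions describing the same trajectory:
route_a = "walk past the sofa and stop at the top of the stairs".split()
route_b = "go beyond the couch then wait near the staircase".split()
print(bleu(route_b, route_a))  # 0.0 despite describing one route
```

A reference-free compatibility model like the one proposed here sidesteps this failure mode by scoring the instruction directly against the trajectory rather than against reference wording.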

Citation (APA)

Zhao, M., Anderson, P., Jain, V., Wang, S., Ku, A., Baldridge, J., & Ie, E. (2021). On the evaluation of vision-and-language navigation instructions. In EACL 2021 - 16th Conference of the European Chapter of the Association for Computational Linguistics, Proceedings of the Conference (pp. 1302–1316). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.eacl-main.111
