Improving Vision-and-Language Navigation with Image-Text Pairs from the Web


Abstract

Following a navigation instruction such as ‘Walk down the stairs and stop at the brown sofa’ requires embodied AI agents to ground the scene elements referenced in the instruction (e.g. ‘stairs’) to visual content in the environment (pixels corresponding to ‘stairs’). We ask the following question – can we leverage abundant ‘disembodied’ web-scraped vision-and-language corpora (e.g. Conceptual Captions) to learn the visual groundings that improve performance on a relatively data-starved embodied perception task (Vision-and-Language Navigation)? Specifically, we develop VLN-BERT, a visiolinguistic transformer-based model for scoring the compatibility between an instruction (‘…stop at the brown sofa’) and a trajectory of panoramic RGB images captured by the agent. We demonstrate that pretraining VLN-BERT on image-text pairs from the web before fine-tuning on embodied path-instruction data significantly improves performance on VLN – outperforming the prior state of the art in the fully-observed setting by 4 absolute percentage points on success rate. Ablations of our pretraining curriculum show each stage to be impactful, with their combination resulting in further gains.
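
The core idea described above is a model that assigns a compatibility score to an (instruction, trajectory) pair. The sketch below is a minimal illustration of that interface, not the authors' implementation: VLN-BERT builds on a ViLBERT-style two-stream transformer pretrained on Conceptual Captions, whereas this toy version uses a single PyTorch transformer encoder, arbitrary dimensions, mean pooling, and hypothetical names throughout.

```python
# Illustrative sketch of an instruction-trajectory compatibility scorer.
# All module names, dimensions, and the pooling/scoring head are assumptions,
# not the VLN-BERT architecture from the paper.
import torch
import torch.nn as nn


class CompatibilityScorer(nn.Module):
    def __init__(self, vocab_size=30522, d_model=256, img_feat_dim=2048,
                 num_layers=4, num_heads=8):
        super().__init__()
        self.token_embed = nn.Embedding(vocab_size, d_model)
        self.image_proj = nn.Linear(img_feat_dim, d_model)   # project panorama features
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=num_layers)
        self.score_head = nn.Linear(d_model, 1)               # pooled features -> scalar score

    def forward(self, instr_tokens, image_feats):
        # instr_tokens: (B, L) token ids; image_feats: (B, T, img_feat_dim)
        text = self.token_embed(instr_tokens)
        vision = self.image_proj(image_feats)
        fused = torch.cat([text, vision], dim=1)              # joint sequence: words + views
        encoded = self.encoder(fused)
        pooled = encoded.mean(dim=1)                          # simple mean pooling (assumption)
        return self.score_head(pooled).squeeze(-1)            # higher = more compatible


if __name__ == "__main__":
    model = CompatibilityScorer()
    instr = torch.randint(0, 30522, (2, 12))                  # two toy instructions
    panos = torch.randn(2, 6, 2048)                           # two toy 6-step trajectories
    print(model(instr, panos).shape)                          # torch.Size([2])
```

In the fully-observed setting reported in the abstract, a scorer of this kind would be applied to a set of candidate trajectories for a given instruction, with the agent following the highest-scoring one; the details of candidate generation and training objectives are specified in the paper itself.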

Citation (APA)

Majumdar, A., Shrivastava, A., Lee, S., Anderson, P., Parikh, D., & Batra, D. (2020). Improving Vision-and-Language Navigation with Image-Text Pairs from the Web. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12351 LNCS, pp. 259–274). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-58539-6_16
