Core to the vision-and-language navigation (VLN) challenge is building robust instruction representations and action decoding schemes that generalize well to previously unseen instructions and environments. In this paper, we report two simple but highly effective methods that address these challenges and lead to a new state-of-the-art performance. First, we adapt large-scale pretrained language models to learn text representations that generalize better to previously unseen instructions. Second, we propose a stochastic sampling scheme to reduce the considerable gap between the expert actions seen in training and the sampled actions taken at test time, so that the agent can learn to correct its own mistakes during long sequential action decoding. Combining the two techniques, we achieve a new state of the art on the Room-to-Room benchmark with a 6% absolute gain over the previous best result (47% → 53%) on the Success Rate weighted by Path Length (SPL) metric.
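The first technique plugs a pretrained language model in as the instruction encoder. Below is a minimal sketch assuming BERT via the Hugging Face `transformers` library; the abstract only says "large-scale pretrained language models", so the specific model, library calls, and variable names here are illustrative, not the paper's exact setup:

```python
# Minimal sketch: encode a navigation instruction with a pretrained LM.
# BERT and the Hugging Face `transformers` API are assumptions; the
# abstract only specifies "large-scale pretrained language models".
import torch
from transformers import BertModel, BertTokenizer

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
encoder = BertModel.from_pretrained("bert-base-uncased")

instruction = "Walk past the sofa, turn left at the kitchen, and stop by the door."
inputs = tokenizer(instruction, return_tensors="pt")

with torch.no_grad():
    outputs = encoder(**inputs)

# Per-token contextual features the agent can attend over while decoding
# actions; shape (1, num_tokens, 768) for bert-base.
text_features = outputs.last_hidden_state
```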
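The second technique interleaves expert and self-sampled actions during training, in the spirit of scheduled sampling. The sketch below is one plausible reading of the abstract, not the paper's exact recipe; the mixing probability `expert_prob` and the function name are assumptions:

```python
# Minimal sketch of stochastic action sampling during training. All names
# (`rollout_step`, `expert_prob`) are illustrative; the paper's exact
# schedule and sampling distribution may differ.
import torch

def rollout_step(logits: torch.Tensor, expert_action: int,
                 expert_prob: float = 0.5) -> int:
    """Choose the action the agent executes at one decoding step.

    logits: (num_actions,) unnormalized policy scores for this step.
    expert_action: the ground-truth (e.g., shortest-path) action.
    """
    if torch.rand(()).item() < expert_prob:
        # Follow the expert (teacher forcing), as in standard training.
        return expert_action
    # Otherwise sample from the agent's own policy, exposing it during
    # training to the kinds of states it will reach at test time.
    probs = torch.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples=1).item()
```

One common design consistent with "learn to correct its own mistakes" is to compute the training loss against the expert action at every step regardless of which action is actually executed, so the agent is supervised even on states reached by its own sampled actions.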
Li, X., Li, C., Xia, Q., Bisk, Y., Celikyilmaz, A., Gao, J., … Choi, Y. (2019). Robust navigation with language pretraining and stochastic sampling. In Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP) (pp. 1494–1499). Association for Computational Linguistics. https://doi.org/10.18653/v1/d19-1159