Where are you? Localization from embodied dialog


Abstract

We present WHERE ARE YOU? (WAY), a dataset of ∼6k dialogs in which two humans - an Observer and a Locator - complete a cooperative localization task. The Observer is spawned at random in a 3D environment and can navigate from first-person views while answering questions from the Locator. The Locator must localize the Observer in a detailed top-down map by asking questions and giving instructions. Based on this dataset, we define three challenging tasks: Localization from Embodied Dialog or LED (localizing the Observer from dialog history), Embodied Visual Dialog (modeling the Observer), and Cooperative Localization (modeling both agents). In this paper, we focus on the LED task - providing a strong baseline model with detailed ablations characterizing both dataset biases and the importance of various modeling choices. Our best model achieves 32.7% success at identifying the Observer's location within 3m in unseen buildings, vs. 70.4% for human Locators.
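The reported success metric (localizing the Observer within 3m) is a simple Euclidean distance threshold over predicted vs. true map coordinates. A minimal sketch of how such a success rate could be computed is below; the function name and coordinate format are illustrative assumptions, not from the paper.

```python
import math

def led_success_rate(predicted, actual, threshold_m=3.0):
    """Fraction of episodes where the predicted Observer location falls
    within `threshold_m` meters (Euclidean distance) of the true location.

    `predicted` and `actual` are equal-length lists of (x, y) map
    coordinates in meters. Names/format are illustrative assumptions.
    """
    hits = sum(
        1
        for (px, py), (ax, ay) in zip(predicted, actual)
        if math.hypot(px - ax, py - ay) <= threshold_m
    )
    return hits / len(predicted)

# Example: 2 of 3 predictions land within 3 m of the true location.
preds = [(0.0, 0.0), (5.0, 0.0), (1.0, 1.0)]
truth = [(1.0, 1.0), (0.0, 0.0), (1.5, 1.5)]
print(led_success_rate(preds, truth))  # prints 0.666...
```

Under this metric, the paper's best model scores 0.327 in unseen buildings versus 0.704 for human Locators.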

Citation (APA)

Hahn, M., Krantz, J., Batra, D., Parikh, D., Rehg, J. M., Lee, S., & Anderson, P. (2020). Where are you? Localization from embodied dialog. In EMNLP 2020 - 2020 Conference on Empirical Methods in Natural Language Processing, Proceedings of the Conference (pp. 806–822). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2020.emnlp-main.59
