Generating landmark navigation instructions from maps as a graph-to-text problem

Abstract

Car-focused navigation services are based on turns and distances of named streets, whereas navigation instructions naturally used by humans are centered around physical objects called landmarks. We present a neural model that takes OpenStreetMap representations as input and learns to generate navigation instructions that contain visible and salient landmarks, trained on human natural language instructions. Routes on the map are encoded in a location- and rotation-invariant graph representation that is decoded into natural language instructions. Our work is based on a novel dataset of 7,672 crowd-sourced instances that have been verified by human navigation in Street View. Our evaluation shows that the navigation instructions generated by our system have properties similar to those of human-generated instructions, and lead to successful human navigation in Street View.
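The abstract's key representational idea is that a route can be encoded so that the encoding does not change when the whole map is translated or rotated. As a hedged illustration of that property (a toy encoding, not the paper's actual graph representation), one can describe a route polyline purely by segment lengths and relative turn angles, both of which are invariant under translation and rotation:

```python
import math

def route_features(points):
    """Encode a route polyline as (segment lengths, turn angles).

    Lengths and turn angles between consecutive segments depend only on
    the route's shape, not on where it sits on the map or how it is
    oriented -- a simple example of location- and rotation-invariance.
    """
    lengths, headings = [], []
    for (x1, y1), (x2, y2) in zip(points, points[1:]):
        lengths.append(math.hypot(x2 - x1, y2 - y1))
        headings.append(math.atan2(y2 - y1, x2 - x1))
    # Turn angle between consecutive segments, wrapped to (-pi, pi].
    turns = [(h2 - h1 + math.pi) % (2 * math.pi) - math.pi
             for h1, h2 in zip(headings, headings[1:])]
    return lengths, turns

def transform(points, angle, dx, dy):
    """Rotate the route by `angle` radians, then translate by (dx, dy)."""
    c, s = math.cos(angle), math.sin(angle)
    return [(c * x - s * y + dx, s * x + c * y + dy) for x, y in points]

route = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0), (2.0, 1.0)]
moved = transform(route, 0.7, 5.0, -3.0)

l1, t1 = route_features(route)
l2, t2 = route_features(moved)
# l1 == l2 and t1 == t2 (up to floating-point error): the features
# are unchanged by moving and rotating the route.
```

The paper's model operates on a richer OpenStreetMap-derived graph with landmark nodes, but the invariance principle it relies on is the same: encode geometry relative to the route itself rather than in absolute map coordinates.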

Citation (APA)

Schumann, R., & Riezler, S. (2021). Generating landmark navigation instructions from maps as a graph-to-text problem. In ACL-IJCNLP 2021 - 59th Annual Meeting of the Association for Computational Linguistics and the 11th International Joint Conference on Natural Language Processing, Proceedings of the Conference (pp. 489–502). Association for Computational Linguistics (ACL). https://doi.org/10.18653/v1/2021.acl-long.41
