Translating aerial images into street-map-like representations for visual self-localization of UAVs

Abstract

Unmanned aerial vehicles (UAVs) rely on global navigation satellite systems (GNSS) such as the Global Positioning System (GPS) for navigation, but GNSS signals can be easily jammed. We therefore propose a visual localization method that uses a camera and data from OpenStreetMap as a replacement for GNSS. First, aerial imagery from the onboard camera is translated into a map-like representation. This representation is then matched against a reference map to infer the vehicle's position. An experiment over a typically sized mission area shows localization accuracy close to that of commercial GPS. Compared to previous methods, ours is applicable to a broader range of scenarios: it can incorporate multiple types of landmarks, such as roads and buildings; it outputs absolute positions with higher frequency and confidence; and it can be used at altitudes typical for commercial UAVs. Our results show that the proposed method can serve as a backup to GNSS wherever suitable landmarks are available.
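The matching step described above (a translated, map-like image correlated against a reference map to recover position) can be sketched as follows. This is a simplified stand-in, not the paper's actual matching procedure: the image-to-map translation network is omitted, and a brute-force zero-mean cross-correlation search replaces whatever matching the authors use. The `locate` function and its arguments are hypothetical names for illustration.

```python
import numpy as np

def locate(patch, ref_map):
    """Slide `patch` over `ref_map` and return the (row, col) offset of the
    window with the highest zero-mean cross-correlation score.

    `patch` plays the role of the translated onboard image; `ref_map` plays
    the role of the OpenStreetMap-derived reference map.
    """
    ph, pw = patch.shape
    rh, rw = ref_map.shape
    p = patch - patch.mean()  # zero-mean so constant regions score ~0
    best_score, best_pos = -np.inf, (0, 0)
    for r in range(rh - ph + 1):
        for c in range(rw - pw + 1):
            window = ref_map[r:r + ph, c:c + pw]
            score = float(np.sum(p * (window - window.mean())))
            if score > best_score:
                best_score, best_pos = score, (r, c)
    return best_pos

# Toy check: cut a patch out of a synthetic "map" and recover its offset.
rng = np.random.default_rng(0)
ref = rng.random((40, 40))
patch = ref[12:20, 5:13].copy()
print(locate(patch, ref))  # recovers the offset (12, 5)
```

In a real system the recovered pixel offset would be converted to world coordinates via the known map scale and georeferencing, and an FFT-based correlation (or the paper's own matcher) would replace the brute-force loop for speed.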

Citation (APA):
Schleiss, M. (2019). Translating aerial images into street-map-like representations for visual self-localization of UAVs. In International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences - ISPRS Archives (Vol. 42, pp. 575–580). International Society for Photogrammetry and Remote Sensing. https://doi.org/10.5194/isprs-archives-XLII-2-W13-575-2019
