Semantically Guided Geo-location and Modeling in Urban Environments

Abstract

The problem of localization and geo-location estimation of an image has a long-standing history in both robotics and computer vision. With the availability of large amounts of geo-referenced image data, several image retrieval approaches have been deployed to tackle this problem. In this work, we show how semantic labeling of both query views and the reference dataset by means of semantic segmentation can aid (1) the retrieval of views similar to, and possibly overlapping with, the query and (2) the recognition and discovery of commonly occurring scene layouts in the reference dataset. We demonstrate the effectiveness of these semantic representations on examples of localization, semantic concept discovery, and intersection recognition in images of urban scenes.
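The abstract describes using semantic segmentation to guide retrieval of similar views. One simple way this idea can be realized (a sketch, not the chapter's actual method; the label set and the histogram-intersection score are illustrative assumptions) is to summarize each segmented view by the fraction of pixels per semantic class, then rank reference views by histogram similarity to the query:

```python
import numpy as np

NUM_CLASSES = 8  # hypothetical label set, e.g. building, road, sky, vegetation, ...

def label_histogram(seg, num_classes=NUM_CLASSES):
    """Normalized histogram of semantic labels over all pixels of a segmentation map."""
    counts = np.bincount(seg.ravel(), minlength=num_classes).astype(float)
    return counts / counts.sum()

def rank_by_semantic_similarity(query_seg, ref_segs):
    """Rank reference views by histogram intersection with the query (most similar first)."""
    q = label_histogram(query_seg)
    scores = [np.minimum(q, label_histogram(r)).sum() for r in ref_segs]
    return np.argsort(scores)[::-1]
```

Such a global semantic descriptor is coarse on its own, but it can prune the reference set cheaply before more expensive appearance-based matching.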

Citation (APA)

Singh, G., & Košecká, J. (2016). Semantically Guided Geo-location and Modeling in Urban Environments. In Advances in Computer Vision and Pattern Recognition (pp. 101–120). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-319-25781-5_6
