Ultrawide Baseline Facade Matching for Geo-localization


Abstract

Matching street-level images to a database of airborne images is hard because of extreme viewpoint and illumination differences. Color/gradient distributions and local descriptors fail to match, forcing us to rely instead on the structure of self-similarity of patterns on facades. We propose to capture this structure with a novel "scale-selective self-similarity" (S4) descriptor, which is computed at each point on the facade at its inherent scale. To achieve this, we introduce a new method for scale selection that also enables the extraction and segmentation of facades. We further introduce a novel geometric method that aligns satellite and bird's-eye-view imagery to extract building facade regions in a stereo graph-cuts framework. Matching of the query facade to the database facade regions is done with a Bayesian classification of the street-view query S4 descriptors given all labeled descriptors in the bird's-eye-view database. We also discuss geometric techniques for camera pose estimation using correspondences between building corners in the query and the matched aerial imagery. We show experimental results on retrieval accuracy on a challenging set of publicly available imagery and compare with standard SIFT-based techniques.
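To make the descriptor idea concrete, here is a minimal sketch of a local self-similarity descriptor of the kind the S4 descriptor builds on: the patch at a point is correlated against every patch in a surrounding region, and the resulting correlation surface is pooled into log-polar bins. This is an illustrative simplification only — the function name, patch/region sizes, and binning scheme are assumptions, and the chapter's actual S4 descriptor additionally selects an inherent scale per point, which is omitted here.

```python
import numpy as np

def self_similarity_descriptor(img, y, x, patch=5, region=21, bins_r=3, bins_a=8):
    """Toy local self-similarity descriptor (hypothetical simplification of S4).

    Correlates the patch centered at (y, x) with every patch in a
    surrounding region, then max-pools the correlation surface into
    log-polar (radius x angle) bins and L2-normalizes the result.
    """
    hp, hr = patch // 2, region // 2
    center = img[y - hp:y + hp + 1, x - hp:x + hp + 1].astype(float)

    # Correlation surface: similarity of each shifted patch to the center patch.
    corr = np.zeros((region, region))
    for dy in range(-hr, hr + 1):
        for dx in range(-hr, hr + 1):
            p = img[y + dy - hp:y + dy + hp + 1,
                    x + dx - hp:x + dx + hp + 1].astype(float)
            ssd = np.sum((p - center) ** 2)
            corr[dy + hr, dx + hr] = np.exp(-ssd / (patch * patch * 255.0))

    # Max-pool the surface into log-polar bins.
    desc = np.zeros((bins_r, bins_a))
    for dy in range(-hr, hr + 1):
        for dx in range(-hr, hr + 1):
            r = np.hypot(dy, dx)
            if r == 0 or r > hr:
                continue
            rb = min(int(np.log1p(r) / np.log1p(hr) * bins_r), bins_r - 1)
            ab = int((np.arctan2(dy, dx) + np.pi) / (2 * np.pi) * bins_a) % bins_a
            desc[rb, ab] = max(desc[rb, ab], corr[dy + hr, dx + hr])

    d = desc.ravel()
    n = np.linalg.norm(d)
    return d / n if n > 0 else d
```

Because the descriptor encodes only the spatial layout of self-similar structure (not raw color or gradients), it is comparatively robust to the illumination and appearance changes between street-level and aerial views that defeat SIFT-style matching.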

Citation (APA)

Bansal, M., Daniilidis, K., & Sawhney, H. (2016). Ultrawide Baseline Facade Matching for Geo-localization. In Advances in Computer Vision and Pattern Recognition (pp. 77–98). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-319-25781-5_5
