Multi-view instance matching with learned geometric soft-constraints


Abstract

We present a new approach for matching urban object instances across multiple ground-level images, with the ultimate goal of city-scale object mapping with high positioning accuracy. What makes this task challenging are the strong viewpoint changes, differing lighting conditions, high similarity of neighboring objects, and variability in scale. We propose to cast object instance matching as a learning task in which image appearance and geometric relationships between views fruitfully interact. Our approach constructs a Siamese convolutional neural network that learns to match two views of the same object given many candidate image cut-outs. In addition to image features, we propose using location information about the camera and the object to support the image evidence via soft geometric constraints. We compare our method to existing patch-matching methods and show that it outperforms the state of the art. This takes us one step closer to the ultimate goal of city-wide object mapping from street-level imagery to benefit city administration.
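The core idea — a shared embedding applied to both candidate cut-outs, with camera/object location information acting as a soft geometric penalty rather than a hard filter — can be illustrated with a deliberately simplified sketch. The function names, the linear "embedding", and the weighting parameter `alpha` below are all hypothetical stand-ins; the paper uses a learned Siamese CNN, not a fixed projection.

```python
import numpy as np

def embed(patch, W):
    # Shared "tower": the SAME weights W are applied to both views,
    # which is the defining property of a Siamese architecture.
    # (Stand-in for the learned CNN branch in the paper.)
    return W @ patch.flatten()

def match_score(patch_a, patch_b, pos_a, pos_b, W, alpha=0.5):
    """Lower score = more likely the two cut-outs show the same instance.

    alpha (hypothetical) weights the geometric soft-constraint against
    the appearance term instead of hard-rejecting distant candidates.
    """
    # Appearance distance between the shared embeddings of both views.
    d_app = np.linalg.norm(embed(patch_a, W) - embed(patch_b, W))
    # Geometric soft-constraint: distance between rough object position
    # estimates derived from camera location/heading for each view.
    d_geo = np.linalg.norm(np.asarray(pos_a, float) - np.asarray(pos_b, float))
    return d_app + alpha * d_geo

# Usage: rank many candidate cut-outs from a second image against a
# query cut-out and keep the candidate with the lowest combined score.
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 16))          # toy shared embedding weights
query = rng.standard_normal((4, 4))       # query cut-out from view 1
candidates = [query.copy(), rng.standard_normal((4, 4))]
positions = [(0.0, 0.0), (5.0, 0.0)]      # rough geolocation per candidate
scores = [match_score(query, c, (0.0, 0.0), p, W)
          for c, p in zip(candidates, positions)]
best = int(np.argmin(scores))             # index of the best-matching candidate
```

Because both branches share weights, a cut-out compared against itself at the same location scores exactly zero, and mismatched appearance or distant geolocation each push the score up.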

Citation (APA)

Nassar, A. S., Lefèvre, S., & Wegner, J. D. (2020). Multi-view instance matching with learned geometric soft-constraints. ISPRS International Journal of Geo-Information, 9(11). https://doi.org/10.3390/ijgi9110687
