VocMatch: Efficient multiview correspondence for structure from motion

Abstract

Feature matching between pairs of images is the main bottleneck of structure-from-motion computation from large, unordered image sets. We propose an efficient way to establish point correspondences between all pairs of images in a dataset, without having to test each individual pair. The principal message of this paper is that, given a sufficiently large visual vocabulary, feature matching can be cast as image indexing, subject to the additional constraints that index words must be rare in the database and unique in each image. We demonstrate that the proposed matching method, in conjunction with a standard inverted file, is 2-3 orders of magnitude faster than conventional pairwise matching. The proposed vocabulary-based matching has been integrated into a standard SfM pipeline, and delivers results similar to those of the conventional method in much less time. © 2014 Springer International Publishing.
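The abstract describes matching via a large visual vocabulary and an inverted file, keeping only words that are unique within an image and rare across the database. The following Python sketch illustrates that idea under stated assumptions; the data layout (`visual_words`) and the rarity threshold (`max_occurrences`) are illustrative choices, not the authors' implementation.

```python
# Minimal sketch of vocabulary-based matching with an inverted file.
# Assumption: visual_words[i] is a list of (feature_id, word_id) pairs for
# image i, obtained by quantizing descriptors against a large vocabulary.
from collections import defaultdict
from itertools import combinations


def vocabulary_correspondences(visual_words, max_occurrences=5):
    """Return putative correspondences per image pair without testing pairs individually."""
    # 1. Keep only words that are unique within each image.
    unique_per_image = []
    for feats in visual_words:
        counts = defaultdict(int)
        for _, w in feats:
            counts[w] += 1
        unique_per_image.append([(f, w) for f, w in feats if counts[w] == 1])

    # 2. Build an inverted file: word_id -> list of (image_id, feature_id).
    inverted = defaultdict(list)
    for img_id, feats in enumerate(unique_per_image):
        for feat_id, w in feats:
            inverted[w].append((img_id, feat_id))

    # 3. Discard words that are too common in the database (not "rare"),
    #    then let every cross-image pair of occurrences of a surviving word
    #    yield a putative match.
    matches = defaultdict(list)  # (img_a, img_b) -> [(feat_a, feat_b), ...]
    for occurrences in inverted.values():
        if len(occurrences) > max_occurrences:
            continue
        for (img_a, feat_a), (img_b, feat_b) in combinations(occurrences, 2):
            if img_a != img_b:
                matches[(img_a, img_b)].append((feat_a, feat_b))
    return matches
```

Because correspondences are read directly off the inverted file, the cost grows with the number of indexed features rather than with the number of image pairs, which is what makes the approach fast on large, unordered collections.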

Citation (APA)

Havlena, M., & Schindler, K. (2014). VocMatch: Efficient multiview correspondence for structure from motion. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 8691 LNCS, pp. 46–60). Springer Verlag. https://doi.org/10.1007/978-3-319-10578-9_4
