Match-time covariance for descriptors

Abstract

Local descriptor methods are widely used in computer vision to compare local regions of images. These descriptors are often extracted relative to an estimated scale and rotation to provide invariance up to similarity transformations. The estimation of rotation and scale in local neighborhoods (also known as steering) is an imperfect process, however, and its errors propagate downstream. In this paper, we propose an alternative to steering that we refer to as match-time covariance (MTC). MTC is a general strategy for descriptor design that simultaneously provides invariant local neighborhood matches and the associated aligning transformations. We also provide a general framework for endowing existing descriptors with similarity invariance through MTC. The framework, Similarity-MTC, is simple and dramatically improves accuracy. Finally, we propose NCC-S, a highly effective descriptor based on classic normalized cross-correlation, designed for fast execution in the Similarity-MTC framework. The surprising effectiveness of this very simple descriptor suggests that MTC offers fruitful research directions for image matching previously not accessible in the steering-based paradigm.
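To make the contrast with steering concrete, the following minimal sketch (not the authors' implementation) illustrates the general idea: rather than rotating and scaling each patch to a canonical frame at extraction time, the matcher searches over candidate similarity transformations at match time and returns both the best normalized cross-correlation score and the aligning transformation. The patch size, candidate angle/scale grids, and the NCC formulation below are illustrative assumptions, not details taken from the paper.

import numpy as np
from scipy.ndimage import rotate, zoom

def ncc(a, b):
    """Normalized cross-correlation of two equal-sized, zero-mean patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def center_crop(img, size):
    """Crop a size x size window from the center of img."""
    cy, cx = np.array(img.shape[:2]) // 2
    h = size // 2
    return img[cy - h:cy - h + size, cx - h:cx - h + size]

def match_time_similarity(patch_a, patch_b, size=32,
                          angles=range(0, 360, 15),
                          scales=(0.7, 0.85, 1.0, 1.2, 1.4)):
    """Brute-force match-time search over rotation and scale.

    Returns (best NCC score, (angle_deg, scale)), i.e. the match score
    together with the similarity that aligns patch_b to patch_a.
    """
    ref = center_crop(patch_a, size)
    best_score, best_t = -np.inf, None
    for angle in angles:
        for s in scales:
            # Warp the second patch by the candidate similarity transform.
            warped = rotate(zoom(patch_b, s, order=1), angle,
                            reshape=False, order=1)
            if min(warped.shape[:2]) < size:
                continue
            score = ncc(ref, center_crop(warped, size))
            if score > best_score:
                best_score, best_t = score, (angle, s)
    return best_score, best_t

In this toy version the invariance comes entirely from the match-time search, so no scale or orientation needs to be estimated per keypoint; the cost is a larger comparison, which is why a cheap per-comparison descriptor such as plain NCC is attractive in this setting.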

Citation (APA)

Christiansen, E., Rabaud, V., Ziegler, A., Kriegman, D., & Belongie, S. (2013). Match-time covariance for descriptors. In BMVC 2013 - Electronic Proceedings of the British Machine Vision Conference 2013. British Machine Vision Association, BMVA. https://doi.org/10.5244/C.27.12
