Local descriptor methods are widely used in computer vision to compare local regions of images. These descriptors are often extracted relative to an estimated scale and rotation to provide invariance up to similarity transformations. The estimation of rotation and scale in local neighborhoods (also known as steering) is an imperfect process, however, and its errors propagate downstream. In this paper, we propose an alternative to steering that we refer to as match-time covariance (MTC). MTC is a general strategy for descriptor design that simultaneously provides invariant local neighborhood matches and the associated aligning transformations. We also provide a general framework, Similarity-MTC, for endowing existing descriptors with similarity invariance through MTC; the framework is simple and dramatically improves accuracy. Finally, we propose NCC-S, a highly effective descriptor based on classic normalized cross-correlation, designed for fast execution in the Similarity-MTC framework. The surprising effectiveness of this very simple descriptor suggests that MTC offers fruitful research directions for image matching previously not accessible in the steering-based paradigm.
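As background for NCC-S, the classic normalized cross-correlation it builds on can be sketched as follows. This is a minimal illustrative implementation of standard NCC between two equal-size patches, not the paper's NCC-S descriptor or its Similarity-MTC matching procedure; the function name and interface are assumptions for illustration.

```python
import numpy as np

def ncc(patch_a, patch_b):
    """Classic normalized cross-correlation of two equal-size patches.

    Subtract each patch's mean, then take the cosine similarity of the
    zero-mean vectors. The result lies in [-1, 1] and is invariant to
    affine changes in intensity (gain and bias).
    """
    a = patch_a.astype(float).ravel()
    b = patch_b.astype(float).ravel()
    a -= a.mean()
    b -= b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0.0:  # a constant patch has no structure to correlate
        return 0.0
    return float(np.dot(a, b) / denom)
```

The intensity invariance is why NCC is a natural base descriptor: comparing a patch to a gain-and-bias-shifted copy of itself (e.g. `2 * patch + 5`) still yields a correlation of 1.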
Christiansen, E., Rabaud, V., Ziegler, A., Kriegman, D., & Belongie, S. (2013). Match-time covariance for descriptors. In BMVC 2013 - Electronic Proceedings of the British Machine Vision Conference 2013. British Machine Vision Association, BMVA. https://doi.org/10.5244/C.27.12