Multiple view feature descriptors from image sequences via kernel principal component analysis


Abstract

We present a method for learning feature descriptors using multiple images, motivated by the problems of mobile robot navigation and localization. The technique uses the relative simplicity of small baseline tracking in image sequences to develop descriptors suitable for the more challenging task of wide baseline matching across significant viewpoint changes. The variations in the appearance of each feature are learned using kernel principal component analysis (KPCA) over the course of image sequences. An approximate version of KPCA is applied to reduce the computational complexity of the algorithms and yield a compact representation. Our experiments demonstrate robustness to wide appearance variations on non-planar surfaces, including changes in illumination, viewpoint, scale, and geometry of the scene.
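
The per-feature learning step described in the abstract can be sketched in plain NumPy. This is a minimal, illustrative KPCA sketch, not the authors' implementation: the RBF kernel, the gamma value, and the helper names (rbf_kernel, fit_kpca, project) are assumptions for illustration, and the approximate KPCA variant the paper uses to reduce computational complexity is omitted.

```python
import numpy as np

def rbf_kernel(A, B, gamma=1e-3):
    # Pairwise RBF kernel between rows of A (m x d) and B (n x d).
    d2 = (np.sum(A**2, axis=1)[:, None]
          + np.sum(B**2, axis=1)[None, :]
          - 2.0 * A @ B.T)
    return np.exp(-gamma * d2)

def fit_kpca(patches, n_components=5, gamma=1e-3):
    """Learn a KPCA subspace for one tracked feature.

    `patches` is an (n_views, n_pixels) array of vectorized image
    patches observed for that feature along the sequence."""
    K = rbf_kernel(patches, patches, gamma)
    n = K.shape[0]
    # Center the kernel matrix in feature space.
    one_n = np.full((n, n), 1.0 / n)
    Kc = K - one_n @ K - K @ one_n + one_n @ K @ one_n
    # Eigendecomposition; keep the leading components.
    eigvals, eigvecs = np.linalg.eigh(Kc)
    order = np.argsort(eigvals)[::-1][:n_components]
    lam, alpha = eigvals[order], eigvecs[:, order]
    # Scale coefficients so the feature-space directions have unit norm.
    alpha = alpha / np.sqrt(np.maximum(lam, 1e-12))
    return {"patches": patches, "alpha": alpha, "gamma": gamma,
            "K_row_mean": K.mean(axis=0), "K_mean": K.mean()}

def project(model, query_patches):
    """Project new patches onto the learned KPCA descriptor space."""
    Kq = rbf_kernel(query_patches, model["patches"], model["gamma"])
    # Center the query kernel rows consistently with the training data.
    Kq_c = (Kq - Kq.mean(axis=1, keepdims=True)
            - model["K_row_mean"][None, :] + model["K_mean"])
    return Kq_c @ model["alpha"]
```

In use, fit_kpca would be run once per tracked feature on the patches gathered along the small-baseline sequence, and project would map a candidate patch from a new, wide-baseline view into the same low-dimensional space for nearest-neighbour matching.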

Citation (APA)

Meltzer, J., Yang, M. H., Gupta, R., & Soatto, S. (2004). Multiple view feature descriptors from image sequences via kernel principal component analysis. Lecture Notes in Computer Science (Including Subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics), 3021, 215–227. https://doi.org/10.1007/978-3-540-24670-1_17
