Inertial sensor-aligned visual feature descriptors

Abstract

We propose to align the orientation of local feature descriptors with the gravitational force measured by inertial sensors. In contrast to standard approaches, which derive a reproducible feature orientation from the intensities of neighboring pixels in order to remain invariant against rotation, this approach results in clearly distinguishable descriptors for congruent features in different orientations. Gravity-aligned feature descriptors (GAFD) are suitable for any application relying on corresponding points in multiple images of static scenes and are particularly beneficial in the presence of differently oriented repetitive features, which are widespread in urban scenes and on man-made objects. In this paper, we show with different examples that aligning the descriptors with gravity both speeds up feature description and matching and yields better matches than traditional techniques. © 2011 IEEE.
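
As a rough illustration of the idea (not the authors' implementation), the sketch below assigns each keypoint an orientation derived from the gravity direction projected into the image, instead of the intensity-based dominant orientation, and then computes standard descriptors with that orientation. Because the projected gravity direction varies with pixel location under perspective, the angle is computed per keypoint. OpenCV's SIFT is used here only as a stand-in descriptor; the helper gravity_angle_at_pixel, the intrinsics K, and the gravity vector g_cam are illustrative assumptions, not values from the paper.

import numpy as np
import cv2

def gravity_angle_at_pixel(u, v, K, g_cam, depth=1.0, eps=1e-3):
    # Back-project the keypoint to a point on its viewing ray, nudge it along
    # the measured gravity direction, and project both points back into the
    # image. The 2D displacement is the local image-space gravity direction.
    K_inv = np.linalg.inv(K)
    p3d = depth * (K_inv @ np.array([u, v, 1.0]))
    q3d = p3d + eps * g_cam

    def project(p):
        x = K @ p
        return x[:2] / x[2]

    d = project(q3d) - project(p3d)
    return float(np.degrees(np.arctan2(d[1], d[0]))) % 360.0

# Example intrinsics and a gravity direction in camera coordinates
# (unit vector reported by the inertial sensor); values are made up.
K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])
g_cam = np.array([0.05, 0.98, 0.19])
g_cam /= np.linalg.norm(g_cam)

# Stand-in image; in practice this is a camera frame of a static scene.
img = np.random.randint(0, 256, (480, 640), dtype=np.uint8)

sift = cv2.SIFT_create()
keypoints = sift.detect(img, None)

# Overwrite the intensity-based dominant orientation with the
# gravity-aligned one before computing the descriptors.
for kp in keypoints:
    kp.angle = gravity_angle_at_pixel(kp.pt[0], kp.pt[1], K, g_cam)

keypoints, descriptors = sift.compute(img, keypoints)

With this change, congruent features that appear in different orientations receive different descriptors (since their orientation is tied to gravity rather than to local image content), which is the property the abstract highlights for repetitive structures in urban scenes.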

CITATION STYLE

APA

Kurz, D., & Ben Himane, S. (2011). Inertial sensor-aligned visual feature descriptors. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (pp. 161–166). IEEE Computer Society. https://doi.org/10.1109/CVPR.2011.5995339
