Abstract
This paper approaches the problem of finding correspondences between images in which there are large changes in viewpoint, scale and illumination. Recent work has shown that scale-space 'interest points' may be found with good repeatability in spite of such changes. Furthermore, the high entropy of the surrounding image regions means that local descriptors are highly discriminative for matching. For descriptors at interest points to be robustly matched between images, they must be as far as possible invariant to the imaging process. In this work we introduce a family of features which use groups of interest points to form geometrically invariant descriptors of image regions. Feature descriptors are formed by resampling the image relative to canonical frames defined by the points. In addition to robust matching, a key advantage of this approach is that each match implies a hypothesis of the local 2D (projective) transformation. This allows us to immediately reject most of the false matches using a Hough transform. We reject remaining outliers using RANSAC and the epipolar constraint. Results show that dense feature matching can be achieved in a few seconds of computation on 1GHz Pentium III machines.
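The abstract's Hough-transform step can be illustrated with a small sketch: each candidate match votes with its implied local transformation, and matches whose transformation bin fails to accumulate enough votes are rejected as outliers. The code below is an illustrative approximation, not the paper's implementation; the bin sizes, the similarity-transform parameterisation (rotation, log-scale, translation), and the `min_votes` threshold are all assumptions made for the example.

```python
from collections import defaultdict
import math

def hough_filter_matches(matches, rot_bin=math.radians(30),
                         scale_bin=0.5, trans_bin=40.0, min_votes=3):
    """Coarse Hough voting over per-match transform hypotheses.

    Each match is a dict carrying a hypothesised similarity transform
    (rotation in radians, log-scale, tx, ty). Matches are quantised
    into coarse bins; any match whose bin collects fewer than
    `min_votes` votes is discarded as a likely false match.
    """
    bins = defaultdict(list)
    for m in matches:
        rot, log_s, tx, ty = m["transform"]
        # Quantise the 4-D transform hypothesis into a coarse bin key.
        key = (round(rot / rot_bin), round(log_s / scale_bin),
               round(tx / trans_bin), round(ty / trans_bin))
        bins[key].append(m)
    # Keep only matches from bins with sufficient supporting votes.
    kept = []
    for members in bins.values():
        if len(members) >= min_votes:
            kept.extend(members)
    return kept

# Five matches agreeing on one transform survive; two isolated
# hypotheses fall in under-populated bins and are rejected.
inliers = [{"transform": (0.1, 0.0, 10.0, 12.0)} for _ in range(5)]
outliers = [{"transform": (2.0, 1.0, 200.0, -150.0)},
            {"transform": (-1.5, -1.2, -90.0, 60.0)}]
kept = hough_filter_matches(inliers + outliers)
```

In the full pipeline described above, the matches surviving this coarse vote would then be passed to RANSAC with the epipolar constraint to remove the remaining outliers.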
Citation
Brown, M., & Lowe, D. (2013). Invariant Features from Interest Point Groups (pp. 23.1-23.10). British Machine Vision Association and Society for Pattern Recognition. https://doi.org/10.5244/c.16.23