An adaptive local descriptor embedding Zernike moments for image matching


Abstract

Image matching is an important problem in computer vision, and many techniques based on local descriptors have been developed for it. In this paper, we propose a novel local feature descriptor based on an adaptive neighborhood and embedded Zernike moments. Instead of a fixed-size neighborhood, a size-adaptive neighborhood is used to detect key-points and describe features within the Gaussian scale-space framework. The neighborhood radius is determined by the scale parameter of the key-point, and the dominant orientation is computed by fitting a skew distribution rather than by the traditional eight-direction histogram. A 72-dimensional feature vector is then built on a 3×3 grid. A 19-dimensional vector of Zernike moments is appended to improve rotation invariance, yielding a 91-dimensional descriptor. The accuracy and efficiency of the proposed descriptor for image matching are verified by several numerical experiments.
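The rotation-invariant part of such a descriptor rests on the magnitudes of Zernike moments, since rotating a patch only changes their phase. The abstract does not specify which 19 moments the authors use, so the sketch below is a generic illustration: it computes the magnitudes |Z_nm| of Zernike moments over the unit disk inscribed in a square patch, for all valid (n, m) pairs up to a chosen order (the function names and the order cutoff are assumptions, not the paper's implementation).

```python
import math
import numpy as np

def radial_poly(n, m, rho):
    """Zernike radial polynomial R_nm(rho); requires |m| <= n and n - |m| even."""
    m = abs(m)
    out = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * math.factorial(n - s)
             / (math.factorial(s)
                * math.factorial((n + m) // 2 - s)
                * math.factorial((n - m) // 2 - s)))
        out += c * rho ** (n - 2 * s)
    return out

def zernike_magnitudes(patch, max_order):
    """Rotation-invariant |Z_nm| over the unit disk inscribed in a square patch."""
    h, w = patch.shape
    y, x = np.mgrid[:h, :w]
    # Map pixel coordinates onto the unit disk centered in the patch.
    cx, cy = (w - 1) / 2.0, (h - 1) / 2.0
    xn = (x - cx) / cx
    yn = (y - cy) / cy
    rho = np.sqrt(xn ** 2 + yn ** 2)
    theta = np.arctan2(yn, xn)
    mask = rho <= 1.0  # only pixels inside the unit disk contribute
    feats = []
    for n in range(max_order + 1):
        for m in range(0, n + 1):
            if (n - m) % 2:  # Z_nm is defined only when n - m is even
                continue
            V = radial_poly(n, m, rho) * np.exp(-1j * m * theta)
            Z = (n + 1) / np.pi * np.sum(patch[mask] * V[mask])
            feats.append(abs(Z))  # magnitude is invariant to patch rotation
    return np.array(feats)
```

Because rotating the patch multiplies each Z_nm by a unit-modulus phase factor, the returned magnitudes are (up to sampling error) unchanged under rotation; concatenating such a vector with grid-based gradient features is one way to arrive at a combined descriptor of the kind described above.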

Citation (APA)

Zhou, B., Duan, X. M., Wei, W., Ye, D. J., Wozniak, M., & Damasevicius, R. (2019). An adaptive local descriptor embedding Zernike moments for image matching. IEEE Access, 7, 183971–183984. https://doi.org/10.1109/ACCESS.2019.2960203
