Calculation of Complex Zernike Moments with Geodesic Correction for Pose Recognition in Omni-directional Images

3 Citations · 7 Mendeley Readers

This article is free to access.

Abstract

Many Computer Vision and Artificial Intelligence applications rely on descriptors extracted from imaged objects. One widely used class of such descriptors is the invariant moments, with Zernike moments reported as being among the most effective. Calculating image moments requires the distance and angle of every pixel from the centroid pixel. While this is straightforward for images acquired by projective cameras, it is complicated and time-consuming for omni-directional images obtained by fish-eye cameras. In this work, we provide an efficient way of calculating moment invariants in the spatial domain from omni-directional images, using the calibration of the acquiring camera. The proposed implementation of the descriptors is assessed on indoor video in terms of the classification accuracy of segmented human silhouettes. Numerical results are presented for different poses of human silhouettes, together with comparisons between the traditional and the proposed implementation of the Zernike moments. The computational complexity of the proposed implementation is also provided.
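For context, the traditional (planar) computation that the abstract contrasts against maps pixel coordinates onto the unit disk and accumulates the complex Zernike basis. The sketch below is a minimal NumPy illustration of that standard formulation only, not the paper's geodesic-corrected variant; the function names `radial_poly` and `zernike_moment` are my own, and the normalization (`(n+1)/π`, mask at ρ ≤ 1) follows the common textbook definition.

```python
import numpy as np
from math import factorial

def radial_poly(rho, n, m):
    """Zernike radial polynomial R_nm(rho); requires n - |m| even, |m| <= n."""
    m = abs(m)
    R = np.zeros_like(rho)
    for s in range((n - m) // 2 + 1):
        c = ((-1) ** s * factorial(n - s)
             / (factorial(s)
                * factorial((n + m) // 2 - s)
                * factorial((n - m) // 2 - s)))
        R += c * rho ** (n - 2 * s)
    return R

def zernike_moment(img, n, m):
    """Complex Zernike moment Z_nm of a 2-D array mapped onto the unit disk."""
    h, w = img.shape
    y, x = np.mgrid[0:h, 0:w].astype(float)
    # Map pixel coordinates to [-1, 1] about the image centre; for the
    # fish-eye case the paper replaces these Euclidean rho/theta with
    # geodesically corrected quantities from the camera calibration.
    xs = (2 * x - w + 1) / (w - 1)
    ys = (2 * y - h + 1) / (h - 1)
    rho = np.sqrt(xs ** 2 + ys ** 2)
    theta = np.arctan2(ys, xs)
    mask = rho <= 1.0          # Zernike basis is defined on the unit disk only
    V = radial_poly(rho, n, m) * np.exp(-1j * m * theta)
    return (n + 1) / np.pi * np.sum(img[mask] * V[mask])
```

A quick sanity check of the invariance the abstract exploits: rotating a square image by 90° leaves the moment magnitude `|Z_nm|` unchanged (only the phase shifts by `m·π/2`).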

Citation (APA)

Delibasis, K. K., Georgakopoulos, S., Plagianakos, V., & Maglogiannis, I. (2014). Calculation of Complex Zernike Moments with Geodesic Correction for Pose Recognition in Omni-directional Images. IFIP Advances in Information and Communication Technology, 436, 375–384. https://doi.org/10.1007/978-3-662-44654-6_37
