Convolutional neural networks for pose recognition in binary omni-directional images


Abstract

In this work, we present a methodology for pose classification of silhouettes using convolutional neural networks. The training set consists exclusively of synthetic images generated from three-dimensional (3D) human models, using the calibration of an omni-directional (fish-eye) camera. In this way, we are able to generate the large training set that Convolutional Neural Networks (CNNs) usually require. Testing is performed on synthetically generated silhouettes, as well as on real silhouettes. This work is in the same realm as our previous work, which utilized Zernike image descriptors designed specifically for a calibrated fish-eye camera. Results show that the proposed method improves pose classification accuracy for synthetic images, but it is outperformed by our previously proposed Zernike descriptors on real silhouettes. The computational complexity of the proposed methodology is also examined and the corresponding results are provided.
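The abstract describes classifying binary silhouette images with a CNN. As an illustrative sketch only (the layer sizes, kernel counts, and number of pose classes below are assumptions, not the architecture from the paper), the core forward pass of such a classifier can be written with plain NumPy:

```python
import numpy as np

def conv2d(img, kernel):
    """Valid 2-D cross-correlation of a single-channel image with one kernel."""
    kh, kw = kernel.shape
    h, w = img.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def max_pool(x, size=2):
    """Non-overlapping max pooling; trailing rows/cols that don't fit are dropped."""
    h2, w2 = x.shape[0] // size, x.shape[1] // size
    return x[:h2 * size, :w2 * size].reshape(h2, size, w2, size).max(axis=(1, 3))

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def forward(silhouette, kernels, W, b):
    """One conv layer -> ReLU -> max-pool -> dense softmax over pose classes."""
    feats = [max_pool(np.maximum(0, conv2d(silhouette, k))) for k in kernels]
    flat = np.concatenate([f.ravel() for f in feats])
    return softmax(W @ flat + b)

# Toy 16x16 binary silhouette and randomly initialized weights (hypothetical sizes).
rng = np.random.default_rng(0)
sil = (rng.random((16, 16)) > 0.5).astype(float)
kernels = [rng.standard_normal((3, 3)) for _ in range(4)]
n_classes = 4                      # e.g. four coarse body poses
flat_dim = 4 * 7 * 7               # 4 feature maps, each (16-3+1)//2 = 7 per side
W = rng.standard_normal((n_classes, flat_dim)) * 0.01
b = np.zeros(n_classes)
probs = forward(sil, kernels, W, b)  # probability per pose class, sums to 1
```

In practice training would be done with a deep-learning framework on the rendered silhouettes; this sketch only shows how a binary image flows through convolution, pooling, and a softmax classifier.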

Citation (APA)
Georgakopoulos, S. V., Kottari, K., Delibasis, K., Plagianakos, V. P., & Maglogiannis, I. (2016). Convolutional neural networks for pose recognition in binary omni-directional images. In IFIP Advances in Information and Communication Technology (Vol. 475, pp. 106–116). Springer New York LLC. https://doi.org/10.1007/978-3-319-44944-9_10
