Deep Multi View Spatio Temporal Spectral Feature Embedding on Skeletal Sign Language Videos for Recognition


Abstract

The primary objective of this work is to build, from multiple views, a competitive global view that represents all the views within a class label. In the first phase, spatio-temporal features are extracted from skeletal sign language videos using a 3D convolutional neural network. In the second phase, the extracted spatio-temporal features are ensembled into a latent low-dimensional subspace for embedding in the global view; this is achieved by learning the weights of a linear combination of the Laplacian eigenmaps of the multiple views. Finally, the constructed global view is used as training data for sign language recognition.
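The weighted combination of per-view Laplacian eigenmaps described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: it assumes Gaussian affinities, a normalized graph Laplacian per view, and fixed (rather than learned) combination weights, with the global embedding taken from the smallest nontrivial eigenvectors of the combined Laplacian.

```python
import numpy as np

def view_laplacian(X, sigma=1.0):
    """Normalized graph Laplacian for one view (rows of X are samples)."""
    # Gaussian affinity matrix from pairwise squared distances
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    W = np.exp(-sq / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)
    d = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    # L = I - D^{-1/2} W D^{-1/2}
    return np.eye(len(X)) - D_inv_sqrt @ W @ D_inv_sqrt

def global_embedding(views, weights, dim=2):
    """Embed samples via a weighted linear combination of per-view Laplacians.

    `views` is a list of (n_samples, n_features) arrays, one per view;
    `weights` are the (here hand-set, in the paper learned) view weights.
    """
    L = sum(w * view_laplacian(X) for w, X in zip(weights, views))
    vals, vecs = np.linalg.eigh(L)  # eigenvalues in ascending order
    # Skip the trivial (near-constant) eigenvector; keep the next `dim`
    return vecs[:, 1:dim + 1]
```

A usage sketch: with three views of ten samples each and weights summing to one, `global_embedding(views, [0.5, 0.3, 0.2], dim=2)` yields a single (10, 2) low-dimensional representation that can serve as training data for a downstream classifier.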

Citation (APA)

Ali, S. A., Prasad, M. V. D., Kumar, P. P., & Kishore, P. V. V. (2022). Deep Multi View Spatio Temporal Spectral Feature Embedding on Skeletal Sign Language Videos for Recognition. International Journal of Advanced Computer Science and Applications, 13(4), 810–819. https://doi.org/10.14569/IJACSA.2022.0130494
