Deep bilinear learning for RGB-D action recognition

22 citations · 140 Mendeley readers. This article is free to access.

Abstract

In this paper, we focus on exploring modality-temporal mutual information for RGB-D action recognition. To learn time-varying information and multi-modal features jointly, we propose a novel deep bilinear learning framework. Within this framework, we introduce bilinear blocks consisting of two linear pooling layers that separately pool the input cube features along the modality and temporal directions. To capture rich modality-temporal information and facilitate deep bilinear learning, we present a new action feature, the modality-temporal cube, structured as a tensor that characterizes RGB-D actions from a comprehensive perspective. Our method is extensively evaluated on two public datasets under four evaluation settings, and the results show that it outperforms state-of-the-art approaches.
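To make the two-direction pooling concrete, the following is a minimal sketch (not the paper's implementation) of how a bilinear block might pool a modality-temporal cube along the modality and temporal directions separately. The shapes, variable names, and the use of random weight vectors are illustrative assumptions; in the paper these pooling layers would be learned.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical modality-temporal cube: M modalities x T time steps x D-dim features.
M, T, D = 3, 5, 8
cube = rng.standard_normal((M, T, D))

# Two linear pooling layers (learnable in the framework; random weights here):
# one pools across the modality axis, the other across the temporal axis.
w_mod = rng.standard_normal(M)  # modality-direction pooling weights
w_tmp = rng.standard_normal(T)  # temporal-direction pooling weights

# Pool along the modality direction: (M, T, D) -> (T, D)
pooled_modality = np.tensordot(w_mod, cube, axes=([0], [0]))

# Pool the result along the temporal direction: (T, D) -> (D,)
feature = np.tensordot(w_tmp, pooled_modality, axes=([0], [0]))

print(feature.shape)  # (8,)
```

Applying the two linear poolings in sequence is what makes the operation bilinear in the modality and temporal weights: the output is linear in each weight vector when the other is held fixed.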

Citation (APA)

Hu, J. F., Zheng, W. S., Pan, J., Lai, J., & Zhang, J. (2018). Deep bilinear learning for RGB-D action recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11211 LNCS, pp. 346–362). Springer Verlag. https://doi.org/10.1007/978-3-030-01234-2_21
