Siamese CNN-BILSTM architecture for 3D shape representation learning

Abstract

Learning a 3D shape representation from a collection of its rendered 2D images has been extensively studied. However, existing view-based techniques have not yet fully exploited the information shared among all of the projected views. In this paper, we propose a Siamese CNN-BiLSTM network for 3D shape representation learning, employing a recurrent neural network to efficiently capture features across different views. The proposed method minimizes a discriminative loss function to learn a deep nonlinear transformation that maps 3D shapes from the original space into a feature space in which the distance between 3D shapes with the same label is minimized, while the distance between shapes with different labels is pushed beyond a large margin. Specifically, each 3D shape is first projected into a group of 2D images from different views. A convolutional neural network (CNN) then extracts features from each view image, and a bidirectional long short-term memory (BiLSTM) network aggregates information across the views. Finally, the whole CNN-BiLSTM network is trained in a Siamese structure with a contrastive loss function. The proposed method is evaluated on two benchmarks, ModelNet40 and SHREC 2014, where it outperforms state-of-the-art methods.
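
To make the pipeline concrete, here is a minimal PyTorch sketch of the architecture the abstract describes: per-view CNN features fed through a BiLSTM, with two weight-sharing branches trained under a contrastive loss. The ResNet-18 backbone, the 12-view rendering, the hidden sizes, and the mean-pooling over BiLSTM outputs are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn
import torchvision.models as models

class CNNBiLSTM(nn.Module):
    """Per-view CNN features aggregated across views by a BiLSTM."""
    def __init__(self, feat_dim=256):
        super().__init__()
        backbone = models.resnet18(weights=None)
        backbone.fc = nn.Identity()          # 512-d feature per view
        self.cnn = backbone
        self.bilstm = nn.LSTM(512, feat_dim, batch_first=True,
                              bidirectional=True)

    def forward(self, views):                # views: (B, V, 3, H, W)
        b, v = views.shape[:2]
        f = self.cnn(views.flatten(0, 1))    # (B*V, 512)
        f = f.view(b, v, -1)                 # sequence of view features
        h, _ = self.bilstm(f)                # (B, V, 2*feat_dim)
        return h.mean(dim=1)                 # aggregate across views

def contrastive_loss(z1, z2, same_label, margin=1.0):
    # Pull same-label pairs together; push different-label pairs
    # apart until their distance exceeds the margin.
    d = torch.norm(z1 - z2, dim=1)
    pos = same_label * d.pow(2)
    neg = (1 - same_label) * torch.clamp(margin - d, min=0).pow(2)
    return (pos + neg).mean()

# Siamese usage: the same network embeds both shapes of a pair.
net = CNNBiLSTM()
x1 = torch.randn(4, 12, 3, 224, 224)   # 12 rendered views per shape
x2 = torch.randn(4, 12, 3, 224, 224)
y = torch.tensor([1., 0., 1., 0.])     # 1 = same class, 0 = different
loss = contrastive_loss(net(x1), net(x2), y)
```

Because both branches share one set of weights, a single module instance suffices; the Siamese structure exists only in how pairs are passed through it and scored by the contrastive loss.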

Cite

APA

Dai, G., Xie, J., & Fang, Y. (2018). Siamese CNN-BILSTM architecture for 3D shape representation learning. In Proceedings of the Twenty-Seventh International Joint Conference on Artificial Intelligence (IJCAI-18) (pp. 670–676). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/93
