Deepshape: Deep learned shape descriptor for 3D shape matching and retrieval

Abstract

Complex geometric structural variations of 3D models usually pose great challenges in 3D shape matching and retrieval. In this paper, we propose a high-level shape feature learning scheme that extracts features insensitive to deformations via a novel discriminative deep auto-encoder. First, a multiscale shape distribution is developed for use as input to the auto-encoder. Then, by imposing the Fisher discrimination criterion on the neurons in the hidden layer, we develop a novel discriminative deep auto-encoder for shape feature learning. Finally, the neurons in the hidden layers from multiple discriminative auto-encoders are concatenated to form a shape descriptor for 3D shape matching and retrieval. The proposed method is evaluated on representative datasets that contain 3D models with large geometric variations, i.e., the McGill and SHREC'10 ShapeGoogle datasets. Experimental results on these benchmark datasets demonstrate the effectiveness of the proposed method for 3D shape matching and retrieval.
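The Fisher discrimination criterion the abstract imposes on the hidden-layer neurons can be illustrated with a small sketch: a penalty on hidden activations that shrinks within-class scatter and grows between-class scatter. This is a generic, hedged reconstruction of that idea, not the authors' implementation; the function name `fisher_penalty`, the weight `lam`, and the exact scatter formulation are assumptions for illustration.

```python
import numpy as np

def fisher_penalty(H, labels, lam=0.1):
    """Fisher-style discrimination penalty on hidden activations.

    H      : (n_samples, n_hidden) array of hidden-layer activations.
    labels : (n_samples,) class labels for the corresponding shapes.
    lam    : assumed trade-off weight between the two scatter terms.

    Returns s_w - lam * s_b, so minimizing it pulls same-class codes
    together (small within-class scatter s_w) and pushes class means
    apart (large between-class scatter s_b). Illustrative only; the
    paper's exact formulation may differ.
    """
    mu = H.mean(axis=0)          # global mean of activations
    s_w = 0.0                    # within-class scatter
    s_b = 0.0                    # between-class scatter
    for c in np.unique(labels):
        Hc = H[labels == c]
        mu_c = Hc.mean(axis=0)
        s_w += np.sum((Hc - mu_c) ** 2)
        s_b += len(Hc) * np.sum((mu_c - mu) ** 2)
    return s_w - lam * s_b
```

In a full model, a term like this would be added to the auto-encoder's reconstruction loss during training, and the hidden codes from several such auto-encoders (one per scale of the shape distribution) would be concatenated to form the final descriptor.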

Cite (APA)

Xie, J., Fang, Y., Zhu, F., & Wong, E. (2015). Deepshape: Deep learned shape descriptor for 3D shape matching and retrieval. In Proceedings of the IEEE Computer Society Conference on Computer Vision and Pattern Recognition (Vol. 07-12-June-2015, pp. 1275–1283). IEEE Computer Society. https://doi.org/10.1109/CVPR.2015.7298732
