Learning global features by aggregating information over multiple views has been shown to be effective for 3D shape analysis. Pooling has been applied extensively for view aggregation in deep learning models; however, pooling discards both the content within individual views and the spatial relationships among views, which limits the discriminability of the learned features. We propose 3DViewGraph to resolve this issue, learning 3D global features by aggregating unordered views more effectively with attention. Specifically, unordered views taken around a shape are regarded as view nodes on a view graph. 3DViewGraph first learns a novel latent semantic mapping that projects low-level view features into meaningful latent semantic embeddings in a lower-dimensional space spanned by latent semantic patterns. Then, the content and spatial information of each pair of view nodes are encoded by a novel spatial pattern correlation, computed among the latent semantic patterns. Finally, all spatial pattern correlations are integrated with attention weights learned by a novel attention mechanism, which further increases the discriminability of the learned features by highlighting view nodes with distinctive characteristics and suppressing those with ambiguous appearance. We show that 3DViewGraph outperforms state-of-the-art methods on three large-scale benchmarks.
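The core idea of attention-based view aggregation described above can be sketched in a few lines. This is a simplified illustration, not the paper's actual architecture: it projects per-view features into a latent semantic space and fuses them with learned attention weights, omitting the pairwise spatial pattern correlations. The names `W_sem` (semantic projection) and `w_attn` (attention scoring vector) are hypothetical stand-ins for learned parameters.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax over the last axis.
    e = np.exp(x - x.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def aggregate_views(view_feats, W_sem, w_attn):
    """Attention-weighted aggregation of unordered view features.

    view_feats: (V, D) low-level features, one row per view node.
    W_sem:      (D, K) projection onto K latent semantic patterns (assumed learned).
    w_attn:     (K,)   scoring vector for the attention mechanism (assumed learned).
    Returns a single (K,) global shape descriptor.
    """
    # 1. Project each view into the lower-dimensional latent semantic space.
    sem = view_feats @ W_sem          # (V, K)
    # 2. Score each view node; distinctive views receive higher weights,
    #    ambiguous ones are suppressed after the softmax.
    scores = sem @ w_attn             # (V,)
    alpha = softmax(scores)           # attention weights, sum to 1
    # 3. Attention-weighted sum fuses all views into one global feature,
    #    invariant to the order in which views are presented.
    return alpha @ sem                # (K,)

rng = np.random.default_rng(0)
feats = rng.standard_normal((12, 64))   # 12 views, 64-d low-level features
W_sem = rng.standard_normal((64, 16))
w_attn = rng.standard_normal(16)
g = aggregate_views(feats, W_sem, w_attn)
print(g.shape)  # (16,)
```

Because the attention weights form a convex combination over view nodes, the result does not depend on view ordering, which is what makes the aggregation suitable for an unordered set of views.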
Han, Z., Wang, X., Vong, C. M., Liu, Y. S., Zwicker, M., & Philip Chen, C. L. (2019). 3DViewGraph: Learning global features for 3D shapes from a graph of unordered views with attention. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2019-August, pp. 758–765). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2019/107