LLGF-Net: Learning Local and Global Feature Fusion for 3D Point Cloud Semantic Segmentation

Abstract

Three-dimensional (3D) point cloud semantic segmentation is fundamental to complex scene perception. Although various efficient 3D semantic segmentation networks have been proposed, their overall performance still lags behind that of 2D image segmentation. Recently, transformer-based methods have opened a new stage in computer vision and have also accelerated progress in 3D point cloud segmentation. In this paper, we propose a novel semantic segmentation network, LLGF-Net, that aggregates features from both the local and global levels of a point cloud, effectively improving the network's ability to extract feature information. Specifically, we adopt the multi-head attention mechanism of the original Transformer to obtain local features of the point cloud, and we use the position-distance information of the points in 3D space to obtain global features. Finally, the local and global features are fused and embedded into an encoder–decoder network to form our method. Extensive experimental results on 3D point cloud data demonstrate the effectiveness and superiority of our method.
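To make the fusion idea concrete, below is a minimal PyTorch sketch of one way a local branch (multi-head self-attention over point features) and a global branch (an encoding derived from point-position distances) could be combined. The module name LocalGlobalFusion, the centroid-distance encoding, and the concatenation-based fusion are illustrative assumptions for this sketch, not the paper's exact LLGF-Net architecture.

# Minimal sketch of local/global feature fusion for point clouds.
# Assumptions (not from the paper): centroid-distance global encoding,
# concatenation followed by a linear layer as the fusion step.
import torch
import torch.nn as nn

class LocalGlobalFusion(nn.Module):
    def __init__(self, dim=64, num_heads=4):
        super().__init__()
        # Local branch: multi-head self-attention over the points,
        # as in the original Transformer.
        self.local_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        # Global branch: encode each point's distance to the cloud centroid.
        self.global_mlp = nn.Sequential(
            nn.Linear(1, dim), nn.ReLU(), nn.Linear(dim, dim)
        )
        # Fuse the two branches back to the working feature dimension.
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, feats, xyz):
        # feats: (B, N, dim) per-point features; xyz: (B, N, 3) coordinates.
        local, _ = self.local_attn(feats, feats, feats)          # (B, N, dim)
        centroid = xyz.mean(dim=1, keepdim=True)                 # (B, 1, 3)
        dist = torch.norm(xyz - centroid, dim=-1, keepdim=True)  # (B, N, 1)
        global_feat = self.global_mlp(dist)                      # (B, N, dim)
        return self.fuse(torch.cat([local, global_feat], dim=-1))

if __name__ == "__main__":
    feats = torch.randn(2, 1024, 64)  # batch of 2 clouds, 1024 points each
    xyz = torch.randn(2, 1024, 3)
    out = LocalGlobalFusion()(feats, xyz)
    print(out.shape)  # torch.Size([2, 1024, 64])

In a full network, a block like this would be repeated inside the encoder–decoder; a single block suffices here to show how per-point attention features and position-derived global features flow into one fused representation.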

Cite

APA: Zhang, J., Li, X., Zhao, X., & Zhang, Z. (2022). LLGF-Net: Learning Local and Global Feature Fusion for 3D Point Cloud Semantic Segmentation. Electronics (Switzerland), 11(14), 2191. https://doi.org/10.3390/electronics11142191
