GLSNet++: Global and Local-Stream Feature Fusion for LiDAR Point Cloud Semantic Segmentation Using GNN Demixing Block


Abstract

Semantic point cloud segmentation is a critical task in 3-D computer vision, offering valuable contextual information for navigation, cartography, landmark detection, object recognition, and building modeling. We developed the global and local stream deep network (GLSNet++), an innovative deep learning architecture for robust, context-dependent 3-D point cloud segmentation. GLSNet++ combines dual streams of global and local feature manifolds to capture multiscale contextual and structural information, addressing the challenges posed by highly varying object sizes in urban scenes. To effectively and efficiently refine mixed class labels from the cross-scale global and local streams, GLSNet++ incorporates a novel graph neural network (GNN)-based demixing block (GDB) that resolves class membership near voxel boundaries through spatial, context-dependent feature fusion. We validate GLSNet++ on the IEEE DFT4 LiDAR dataset, achieving competitive city-scale semantic segmentation that can be extended to more classes, higher-resolution point clouds, and larger geographic regions. GLSNet++ exhibits strong generalization when tested on an independent LiDAR dataset from Columbia, Missouri, evaluated against OpenStreetMap (OSM) labels.
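To illustrate the idea behind the demixing step, the following is a minimal sketch, not the authors' implementation: it assumes per-point class logits from a hypothetical global stream and local stream, fuses them by averaging, and then applies one GNN-style message-passing (neighbor-averaging) step over a k-nearest-neighbor graph to smooth mixed labels near region boundaries. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

def knn_graph(points, k):
    # Indices of the k nearest neighbors for each point (self excluded).
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :k]

def demix_logits(points, global_logits, local_logits, k=4, alpha=0.5):
    # Fuse cross-scale predictions, then refine each point's logits by
    # mixing in the mean logits of its spatial neighbors (one GNN-like
    # aggregation step); alpha controls the neighbor contribution.
    fused = 0.5 * (global_logits + local_logits)
    nbrs = knn_graph(points, k)
    neighbor_mean = fused[nbrs].mean(axis=1)
    refined = (1.0 - alpha) * fused + alpha * neighbor_mean
    return refined.argmax(axis=1)  # hard class labels after smoothing
```

In this toy form, a point whose global and local streams disagree inherits the consensus of its spatial neighborhood, which is the intuition (though not the architecture) behind resolving mixed class labels at voxel boundaries.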

Citation (APA)

Bao, R., Palaniappan, K., Zhao, Y., & Seetharaman, G. (2024). GLSNet++: Global and Local-Stream Feature Fusion for LiDAR Point Cloud Semantic Segmentation Using GNN Demixing Block. IEEE Sensors Journal, 24(7), 11610–11624. https://doi.org/10.1109/JSEN.2023.3345747
