The automatic 3D reconstruction of indoor scenes is of great significance for 3D-scene understanding. Existing methods have poor resilience to incomplete and noisy point clouds, which leads to low-quality results and tedious post-processing. The objective of this work is therefore to automatically reconstruct indoor scenes from an incomplete and noisy point cloud based on semantics and primitives. In this paper, we propose a semantics-and-primitives-guided indoor 3D reconstruction method. Firstly, a local, fully connected graph neural network is designed for semantic segmentation. Secondly, exploiting the enumerable nature of indoor object categories, a primitive-based reconstruction method is proposed, which retrieves the most similar model from a 3D-ESF indoor model library by using ESF descriptors and semantic labels. Finally, a coarse-to-fine registration method is proposed to register the retrieved model into the scene. The results indicate that our method achieves high-quality reconstructions while remaining resilient to incompleteness and noise in the point cloud. We conclude that the proposed method is practical and can automatically reconstruct indoor scenes from incomplete and noisy point clouds.
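The retrieval step described above can be illustrated with a minimal sketch. ESF (Ensemble of Shape Functions) descriptors are fixed-length histograms (640 bins in the standard formulation), so retrieval reduces to a nearest-neighbor search over the model library, restricted to models carrying the same semantic label as the segmented object. The function names, the dictionary layout of the library, and the use of plain L2 distance are illustrative assumptions, not details from the paper; the toy descriptors below are 3-dimensional stand-ins for real 640-bin ESF histograms.

```python
import math

def l2_distance(a, b):
    # Euclidean distance between two equal-length descriptor vectors.
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def retrieve_model(query_desc, query_label, library):
    """Return the name of the library model with the same semantic label
    whose descriptor is closest to the query (hypothetical helper)."""
    candidates = [m for m in library if m["label"] == query_label]
    if not candidates:
        return None  # no model of this semantic class in the library
    best = min(candidates, key=lambda m: l2_distance(m["desc"], query_desc))
    return best["name"]

# Toy library; in practice each "desc" would be a 640-bin ESF histogram.
library = [
    {"name": "chair_01", "label": "chair", "desc": [0.9, 0.1, 0.0]},
    {"name": "chair_02", "label": "chair", "desc": [0.2, 0.7, 0.1]},
    {"name": "table_01", "label": "table", "desc": [0.8, 0.1, 0.1]},
]

print(retrieve_model([0.85, 0.12, 0.03], "chair", library))  # chair_01
```

Filtering by semantic label before the descriptor comparison both shrinks the search space and prevents geometrically similar but semantically wrong matches (e.g. a stool retrieved for a small table).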
Citation: Wang, T., Wang, Q., Ai, H., & Zhang, L. (2022). Semantics-and-Primitives-Guided Indoor 3D Reconstruction from Point Clouds. Remote Sensing, 14(19). https://doi.org/10.3390/rs14194820