Deep convolutional neural networks have shown outstanding performance in the task of semantically segmenting images. Applying the same methods on 3D data still poses challenges due to the heavy memory requirements and the lack of structure in the data. Here, we propose LatticeNet, a novel approach for 3D semantic segmentation, which takes raw point clouds as input. A PointNet describes the local geometry, which we embed into a sparse permutohedral lattice. The lattice allows for fast convolutions while keeping a low memory footprint. Further, we introduce DeformSlice, a novel learned data-dependent interpolation for projecting lattice features back onto the point cloud. We present results of 3D segmentation on multiple datasets where our method achieves state-of-the-art performance. We also extend and evaluate our network for instance and dynamic object segmentation.
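To make the splat-convolve-slice pipeline described above concrete, the sketch below illustrates the overall data flow in PyTorch. It is a heavily simplified approximation, not the authors' implementation: the sparse permutohedral lattice is replaced by a dense regular voxel grid, the PointNet descriptor is a single shared per-point MLP, and DeformSlice is approximated by nearest-cell gathering instead of learned data-dependent interpolation. The class and parameter names (e.g. SimpleSplatConvSlice, grid_res) are hypothetical.

```python
# Simplified splat -> convolve -> slice sketch (illustrative only, not LatticeNet):
# regular voxel grid instead of a sparse permutohedral lattice, nearest-cell
# gathering instead of DeformSlice.
import torch
import torch.nn as nn


class SimpleSplatConvSlice(nn.Module):
    def __init__(self, in_dim=3, feat_dim=16, grid_res=16, num_classes=4):
        super().__init__()
        self.grid_res = grid_res
        # Shared per-point MLP (stand-in for the PointNet local descriptor).
        self.point_mlp = nn.Sequential(
            nn.Linear(in_dim, feat_dim), nn.ReLU(),
            nn.Linear(feat_dim, feat_dim), nn.ReLU(),
        )
        # Convolution over the (here: dense, regular) grid; LatticeNet instead
        # convolves over the sparse permutohedral lattice.
        self.grid_conv = nn.Conv3d(feat_dim, feat_dim, kernel_size=3, padding=1)
        # Per-point classifier applied after slicing features back to the points.
        self.classifier = nn.Linear(feat_dim, num_classes)

    def forward(self, xyz):
        # xyz: (N, 3) point coordinates, assumed normalized to [0, 1].
        n, r = xyz.shape[0], self.grid_res
        feats = self.point_mlp(xyz)                          # (N, C)
        c = feats.shape[1]

        # --- Splat: scatter point features into their grid cells (mean pooling).
        idx3 = (xyz.clamp(0, 1 - 1e-6) * r).long()           # (N, 3) cell indices
        flat = idx3[:, 0] * r * r + idx3[:, 1] * r + idx3[:, 2]
        grid = torch.zeros(r * r * r, c).index_add_(0, flat, feats)
        count = torch.zeros(r * r * r, 1).index_add_(0, flat, torch.ones(n, 1))
        grid = grid / count.clamp(min=1)

        # --- Convolve on the grid.
        grid = grid.t().reshape(1, c, r, r, r)
        grid = torch.relu(self.grid_conv(grid))
        grid = grid.reshape(c, r * r * r).t()

        # --- Slice: gather grid features back onto the points (nearest cell);
        # DeformSlice instead learns the interpolation from the data.
        sliced = grid[flat]                                  # (N, C)
        return self.classifier(sliced)                       # per-point logits


if __name__ == "__main__":
    points = torch.rand(1024, 3)                             # toy point cloud
    logits = SimpleSplatConvSlice()(points)
    print(logits.shape)                                      # torch.Size([1024, 4])
```

The permutohedral lattice used in the paper serves the same role as the voxel grid here, but its simplex-based structure keeps only occupied lattice vertices, which is what enables fast convolutions at a low memory footprint.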
Rosu, R. A., Schütt, P., Quenzel, J., & Behnke, S. (2022). LatticeNet: fast spatio-temporal point cloud segmentation using permutohedral lattices. Autonomous Robots, 46(1), 45–60. https://doi.org/10.1007/s10514-021-09998-1