Abstract
Lidar segmentation provides detailed information about the environment surrounding robots or autonomous vehicles. Current state-of-the-art neural networks for lidar segmentation are tailored to specific datasets. Changing the lidar sensor without retraining on a large annotated dataset from the new sensor results in a significant decrease in performance due to a "domain shift." In this paper, we propose a new method for adapting lidar data to different domains by recreating annotated panoptic lidar datasets in the structure of a different lidar sensor. We minimize the domain gap by generating panoptic data from one domain in another and combining it with partially labeled data from the target domain. Our method improves the SemanticKITTI (Behley et al., 2019) to nuScenes (Caesar et al., 2020) domain adaptation performance by up to +51.5 mIoU points, and the nuScenes to SemanticKITTI domain adaptation by up to +48.3 mIoU points. We compare our approach to two state-of-the-art methods for domain adaptation of lidar semantic segmentation and demonstrate a significant improvement of up to +21.2 mIoU over the previous best method. Furthermore, we successfully train well-performing semantic segmentation networks for two entirely unlabeled datasets of the state-of-the-art lidar sensors Velodyne Alpha Prime and InnovizTwo.
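The abstract does not describe how labeled scans are re-created in the structure of a different lidar sensor; a common way to approach this is to resample a labeled point cloud through a spherical range image laid out with the target sensor's beam and azimuth pattern. The sketch below is a minimal illustration of that idea only, not the authors' implementation; the function name, the 32-beam grid, and the field-of-view limits are illustrative assumptions.

# Hypothetical sketch: project a labeled source scan onto a target sensor's
# beam/azimuth grid and keep the closest return per cell. All geometry values
# (32 beams, 1024 azimuth bins, +10/-30 degree FoV) are assumed for illustration.
import numpy as np

def resample_to_target_sensor(points, labels,
                              n_beams=32, n_azimuth=1024,
                              fov_up_deg=10.0, fov_down_deg=-30.0):
    """points: (N, 3) xyz in the source sensor frame, labels: (N,) class ids."""
    x, y, z = points[:, 0], points[:, 1], points[:, 2]
    r = np.linalg.norm(points, axis=1)
    azimuth = np.arctan2(y, x)                      # horizontal angle in [-pi, pi]
    elevation = np.arcsin(z / np.maximum(r, 1e-6))  # vertical angle

    fov_up, fov_down = np.radians(fov_up_deg), np.radians(fov_down_deg)
    # Map each point onto the target sensor's discrete beam / azimuth grid
    row = ((fov_up - elevation) / (fov_up - fov_down) * n_beams).astype(int)
    col = ((azimuth + np.pi) / (2 * np.pi) * n_azimuth).astype(int)

    valid = (row >= 0) & (row < n_beams) & (col >= 0) & (col < n_azimuth)
    row, col, r = row[valid], col[valid], r[valid]
    pts, lab = points[valid], labels[valid]

    # Z-buffer style selection: write far points first, near points overwrite them
    order = np.argsort(-r)
    cell = row * n_azimuth + col
    range_image = np.full(n_beams * n_azimuth, -1, dtype=int)
    range_image[cell[order]] = np.arange(len(r))[order]
    kept = range_image[range_image >= 0]
    return pts[kept], lab[kept]

In such a scheme, the returned points and labels mimic the sparser (or denser) scan pattern of the target sensor while reusing the source annotations; the resulting re-created scans could then be mixed with partially labeled target-domain data for training, as the abstract describes.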
Citation
Hasecke, F., Colling, P., & Kummert, A. (2023). Fake It, Mix It, Segment It: Bridging the Domain Gap Between Lidar Sensors. In International Conference on Pattern Recognition Applications and Methods (Vol. 1, pp. 743–750). Science and Technology Publications, Lda. https://doi.org/10.5220/0011618500003411