Lidar–camera semi-supervised learning for semantic segmentation


Abstract

In this work, we investigated two issues: (1) how the fusion of lidar and camera data can improve semantic segmentation performance compared with the individual sensor modalities in a supervised learning context; and (2) how fusion can also be leveraged for semi-supervised learning in order to further improve performance and to adapt to new domains without requiring any additional labelled data. A comparative study was carried out through an experimental evaluation of networks trained in different setups, covering scenarios ranging from sunny days to rainy night scenes. The networks were tested in challenging, less common scenarios where cameras or lidars individually would not provide a reliable prediction. Our results suggest that semi-supervised learning and fusion techniques increase the overall performance of the network in challenging scenarios while using fewer data annotations.
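The second idea above, using the fused output of the two modalities to supervise training on unlabelled data, can be sketched as a cross-modal pseudo-labelling step. The averaging fusion, the confidence threshold, and all function names below are illustrative assumptions, not the paper's exact method:

```python
import numpy as np

def fuse_predictions(p_cam, p_lidar):
    """Late fusion (assumed): average per-pixel class probabilities
    from the camera and lidar branches."""
    return 0.5 * (p_cam + p_lidar)

def pseudo_labels(p_fused, threshold=0.9):
    """Keep only pixels where the fused prediction is confident;
    these serve as targets on unlabelled frames."""
    conf = p_fused.max(axis=-1)          # per-pixel max probability
    labels = p_fused.argmax(axis=-1)     # per-pixel predicted class
    mask = conf >= threshold             # confident pixels only
    return labels, mask

def masked_cross_entropy(p, labels, mask, eps=1e-8):
    """Cross-entropy restricted to the confident pixels — the
    unsupervised part of a semi-supervised loss."""
    picked = np.take_along_axis(p, labels[..., None], axis=-1)[..., 0]
    losses = -np.log(picked + eps)
    return losses[mask].mean() if mask.any() else 0.0
```

In a full training loop, this loss on unlabelled frames would be added to the ordinary supervised loss on the labelled subset, which is what lets fusion reduce the annotation requirement.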

Citation (APA)

Caltagirone, L., Bellone, M., Svensson, L., Wahde, M., & Sell, R. (2021). Lidar–camera semi-supervised learning for semantic segmentation. Sensors, 21(14). https://doi.org/10.3390/s21144813
