Joint Prediction of Amodal and Visible Semantic Segmentation for Automated Driving


Abstract

Amodal perception is the ability to hallucinate the full shapes of (partially) occluded objects. While this ability is natural to humans, learning-based perception methods often focus only on the visible parts of scenes. This limitation is critical for safe automated driving, since the detection capabilities of perception methods are reduced when faced with (partial) occlusions. Moreover, corner cases can emerge from occlusions to which the perception method is oblivious. In this work, we investigate the joint prediction of amodal and visible semantic segmentation masks. More precisely, we investigate whether both perception tasks benefit from a joint training approach. We report our findings on both the Cityscapes and the Amodal Cityscapes dataset. The proposed joint training outperforms the separately trained networks in terms of mean intersection over union in amodal areas of the masks by 6.84% absolute, while even slightly improving the visible segmentation performance.
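The reported gain is measured as mean intersection over union (mIoU) restricted to amodal (occluded) regions. As a minimal sketch of how such a class-averaged IoU is computed (a generic metric implementation, not the authors' evaluation code; the restriction to amodal areas would be realized by masking the prediction and label maps beforehand):

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection over union, averaged over classes present
    in prediction or ground truth. Generic illustration only."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Tiny example with two classes on a 2x2 label map:
pred = np.array([[0, 1], [1, 1]])
target = np.array([[0, 1], [0, 1]])
score = mean_iou(pred, target, num_classes=2)  # (1/2 + 2/3) / 2
```

To evaluate only amodal areas, one would index both maps with a boolean occlusion mask (e.g. `pred[mask]`, `target[mask]`) before computing the metric.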

Citation (APA)

Breitenstein, J., Löhdefink, J., & Fingscheidt, T. (2023). Joint Prediction of Amodal and Visible Semantic Segmentation for Automated Driving. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 13801 LNCS, pp. 633–645). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-031-25056-9_40
