SemSegDepth: A Combined Model for Semantic Segmentation and Depth Completion


Abstract

Holistic scene understanding is pivotal for the performance of autonomous machines. In this paper, we propose a new end-to-end model for performing semantic segmentation and depth completion jointly. The vast majority of recent approaches have treated semantic segmentation and depth completion as independent tasks. Our approach takes RGB and sparse depth as inputs and produces a dense depth map and the corresponding semantic segmentation image. It consists of a feature extractor, a depth completion branch, a semantic segmentation branch, and a joint branch which further processes semantic and depth information together. Experiments on the Virtual KITTI 2 dataset demonstrate and provide further evidence that combining both tasks, semantic segmentation and depth completion, in a multi-task network can effectively improve the performance of each task. Code is available at https://github.com/juanb09111/semantic_depth.
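To make the described layout concrete, below is a minimal, hypothetical PyTorch sketch of a multi-task network with a shared feature extractor, a semantic segmentation branch, a depth completion branch, and a joint branch that fuses both outputs. It is not the authors' implementation; module names, channel sizes, and the fusion scheme are illustrative assumptions only (see the linked repository for the actual model).

```python
# Hypothetical sketch of the described multi-task layout (not the authors' code).
import torch
import torch.nn as nn


class SemSegDepthSketch(nn.Module):
    """Shared encoder feeding a segmentation branch, a depth branch,
    and a joint branch that refines depth using both predictions."""

    def __init__(self, num_classes: int, feat_ch: int = 64):
        super().__init__()
        # Shared feature extractor over RGB (3 ch) + sparse depth (1 ch) = 4 channels.
        self.encoder = nn.Sequential(
            nn.Conv2d(4, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, feat_ch, 3, padding=1), nn.ReLU(inplace=True),
        )
        # Semantic segmentation branch: per-pixel class logits.
        self.seg_branch = nn.Conv2d(feat_ch, num_classes, 1)
        # Depth completion branch: initial dense depth regression.
        self.depth_branch = nn.Conv2d(feat_ch, 1, 1)
        # Joint branch: processes shared features plus both predictions (assumed fusion).
        self.joint_branch = nn.Sequential(
            nn.Conv2d(feat_ch + num_classes + 1, feat_ch, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(feat_ch, 1, 1),
        )

    def forward(self, rgb: torch.Tensor, sparse_depth: torch.Tensor):
        feats = self.encoder(torch.cat([rgb, sparse_depth], dim=1))
        seg_logits = self.seg_branch(feats)
        depth_init = self.depth_branch(feats)
        dense_depth = self.joint_branch(
            torch.cat([feats, seg_logits, depth_init], dim=1)
        )
        return seg_logits, dense_depth


if __name__ == "__main__":
    model = SemSegDepthSketch(num_classes=15)
    rgb = torch.randn(1, 3, 192, 640)          # RGB image
    sparse = torch.randn(1, 1, 192, 640)       # sparse depth map
    seg, depth = model(rgb, sparse)
    print(seg.shape, depth.shape)              # (1, 15, 192, 640), (1, 1, 192, 640)
```

Both branches share the encoder, so a combined loss (e.g. cross-entropy for segmentation plus an L1/L2 depth term) would let each task regularize the other, which is the benefit the paper reports.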

Citation (APA)

Lagos, J. P., & Rahtu, E. (2022). SemSegDepth: A Combined Model for Semantic Segmentation and Depth Completion. In Proceedings of the International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications (Vol. 5, pp. 155–165). Science and Technology Publications, Lda. https://doi.org/10.5220/0010838500003124
