Multi-modal deep network for RGB-D segmentation of clothes

Citations: 8 · Mendeley readers: 12

Abstract

In this Letter, the authors propose a deep-learning-based method for semantic segmentation of clothes from RGB-D images of people. First, they present a synthetic dataset containing more than 50,000 RGB-D samples of characters in different clothing styles, featuring various poses and environments, for a total of nine semantic classes. The proposed data generation pipeline allows for fast production of RGB images, depth images, and ground-truth label maps. Second, a novel multi-modal encoder-decoder convolutional network is proposed that operates on the RGB and depth modalities. Multi-modal features are merged by trained fusion modules that apply multi-scale atrous convolutions in the fusion process. The method is numerically evaluated on synthetic data and visually assessed on real-world data. The experiments demonstrate the efficiency of the proposed model over existing methods.
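
As a rough illustration of the fusion modules described in the abstract, a minimal PyTorch sketch follows. The module name AtrousFusion, the dilation rates, and the channel arithmetic are assumptions chosen for exposition; they are not the authors' published implementation.

```python
# Hypothetical sketch of a multi-scale atrous fusion module for merging
# RGB and depth features. Names, dilation rates, and channel sizes are
# illustrative assumptions, not the paper's actual code.
import torch
import torch.nn as nn


class AtrousFusion(nn.Module):
    """Fuse RGB and depth feature maps with parallel atrous convolutions.

    The concatenated modalities are filtered at several dilation rates so
    the fused features capture context at multiple scales, then projected
    back to the decoder's channel width.
    """

    def __init__(self, channels: int, rates=(1, 2, 4)):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                # padding = dilation keeps the spatial size for a 3x3 kernel
                nn.Conv2d(2 * channels, channels, kernel_size=3,
                          padding=r, dilation=r, bias=False),
                nn.BatchNorm2d(channels),
                nn.ReLU(inplace=True),
            )
            for r in rates
        )
        # 1x1 projection back to a single feature map for the decoder
        self.project = nn.Conv2d(len(rates) * channels, channels, kernel_size=1)

    def forward(self, rgb_feat: torch.Tensor, depth_feat: torch.Tensor) -> torch.Tensor:
        x = torch.cat([rgb_feat, depth_feat], dim=1)
        multi_scale = torch.cat([branch(x) for branch in self.branches], dim=1)
        return self.project(multi_scale)


if __name__ == "__main__":
    fuse = AtrousFusion(channels=64)
    rgb = torch.randn(1, 64, 32, 32)
    depth = torch.randn(1, 64, 32, 32)
    print(fuse(rgb, depth).shape)  # torch.Size([1, 64, 32, 32])
```

The key design point is that the concatenated RGB and depth features are filtered in parallel at several dilation rates, so the fused representation aggregates context over multiple receptive-field sizes before being projected back to a single feature map for the decoder.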

Cite (APA)

Joukovsky, B., Hu, P., & Munteanu, A. (2020). Multi-modal deep network for RGB-D segmentation of clothes. Electronics Letters, 56(9), 426–428. https://doi.org/10.1049/el.2019.4150
