Semantic scene understanding is an important task for robots operating autonomously in real-world applications. Recent deep convolutional neural networks (CNNs) have proven to be an effective approach to semantic image segmentation, especially for tasks where plenty of labeled data is available. However, many applications need to learn new, specific classes but have little labeled training data. This paper addresses the problem of transferring the knowledge of existing CNN models, e.g., from autonomous driving applications, to different classes and domains, e.g., different robotic platforms. Our work explores the two common transfer learning approaches for the particular problem of semantic segmentation: (1) fine-tuning existing models with the new training data, following a standard pipeline; (2) training a superpixel classifier using our proposed superpixel representation, which combines local and context information. We evaluate both approaches on three varied binary segmentation use cases from different domains. Our experiments demonstrate the advantages and limitations of each alternative, showing that the proposed superpixel-based strategies learn better models with limited amounts of labeled data.
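The abstract describes a superpixel representation that combines local and context information. As a rough illustration of that general idea (not the paper's exact feature design), the sketch below uses the mean colour of each superpixel as the local descriptor and the mean colour of its 4-connected neighbouring superpixels as the context descriptor; both the descriptors and the toy image are illustrative assumptions:

```python
import numpy as np

def superpixel_features(image, labels):
    """Per-superpixel feature concatenating a local descriptor (mean colour)
    with a context descriptor (mean colour of adjacent superpixels).
    `labels` holds an integer superpixel id per pixel."""
    n = labels.max() + 1
    local = np.stack([image[labels == s].mean(axis=0) for s in range(n)])
    # Build 4-connected adjacency: label pairs across horizontal/vertical edges
    adj = [set() for _ in range(n)]
    for a, b in ((labels[:, :-1], labels[:, 1:]), (labels[:-1, :], labels[1:, :])):
        m = a != b
        for x, y in zip(a[m].ravel(), b[m].ravel()):
            adj[x].add(y)
            adj[y].add(x)
    # Context descriptor: mean of the neighbours' local descriptors
    # (fall back to the superpixel's own descriptor if it has no neighbour)
    context = np.stack([local[sorted(adj[s])].mean(axis=0) if adj[s] else local[s]
                        for s in range(n)])
    return np.hstack([local, context])

# Toy 4x4 image with four constant-colour superpixels (ids 0..3)
labels = np.array([[0, 0, 1, 1],
                   [0, 0, 1, 1],
                   [2, 2, 3, 3],
                   [2, 2, 3, 3]])
colours = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.], [1., 1., 0.]])
image = colours[labels]             # shape (4, 4, 3)
feats = superpixel_features(image, labels)
print(feats.shape)                  # (4, 6): 3 local + 3 context dimensions
```

Such per-superpixel feature vectors could then be fed to any standard classifier for the binary segmentation use cases the paper evaluates.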
Cambra, A. B., Muñoz, A., & Murillo, A. C. (2018). How to transfer a semantic segmentation model from autonomous driving to other domains? In Advances in Intelligent Systems and Computing (Vol. 693, pp. 652–665). Springer Verlag. https://doi.org/10.1007/978-3-319-70833-1_53