Depth Value Pre-Processing for Accurate Transfer Learning based RGB-D Object Recognition


Abstract

Object recognition is one of the most important tasks in computer vision and has found numerous applications. The depth modality has been shown to provide information complementary to the common RGB modality for object recognition. In this paper, we propose methods to improve the recognition performance of an existing deep learning based RGB-D object recognition model, namely the FusionNet proposed by Eitel et al. First, we show that encoding the depth values as colorized surface normals is beneficial when the model is initialized with weights learned from training on ImageNet data. Additionally, we show that the RGB stream of the FusionNet model benefits from a deeper network architecture, namely the 16-layer VGGNet, in exchange for the 8-layer CaffeNet. In combination, these changes improve the recognition performance by 2.2% over the original FusionNet when evaluating on the Washington RGB-D Object Dataset.
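The abstract's key pre-processing step is re-encoding raw depth values as colorized surface normals, so that a network pretrained on three-channel ImageNet images can consume the depth stream. The following is a minimal sketch of one common way to do this, assuming normals are estimated from depth gradients and the function name is illustrative, not the paper's implementation:

```python
import numpy as np

def depth_to_colorized_normals(depth):
    """Encode a depth map as colorized surface normals (HxWx3 uint8).

    Surface normals are estimated from the depth gradients; each of the
    three normal components is mapped to the 0-255 range, yielding a
    three-channel image suitable for an ImageNet-pretrained network.
    """
    # Partial derivatives of depth along image rows and columns.
    dz_dy, dz_dx = np.gradient(depth.astype(np.float64))
    # Un-normalized normal vector (-dz/dx, -dz/dy, 1) at each pixel.
    normals = np.dstack((-dz_dx, -dz_dy, np.ones_like(depth, dtype=np.float64)))
    # Normalize to unit length.
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    # Map components from [-1, 1] to [0, 255].
    return ((normals + 1.0) * 0.5 * 255.0).astype(np.uint8)

# Example: a planar ramp in depth yields a uniform normal color.
depth = np.tile(np.arange(8, dtype=np.float64), (8, 1))
img = depth_to_colorized_normals(depth)
```

Because a tilted plane has the same surface normal everywhere, every pixel of the resulting image receives the same color, which is a quick sanity check for an implementation like this.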

Citation (APA)

Aakerberg, A., Nasrollahi, K., Rasmussen, C. B., & Moeslund, T. B. (2017). Depth Value Pre-Processing for Accurate Transfer Learning based RGB-D Object Recognition. In International Joint Conference on Computational Intelligence (Vol. 1, pp. 121–128). Science and Technology Publications, Lda. https://doi.org/10.5220/0006511501210128
