Visual Data Simulation for Deep Learning in Robot Manipulation Tasks


Abstract

This paper introduces the use of simulated images for training convolutional neural networks for object recognition and localization in the task of random bin picking. A major issue for machine learning applications is the limited amount of real-world image data that can be captured and labeled for training and testing. We therefore focus on realistic simulation of image data for training convolutional neural networks to estimate the pose of an object. Datasets with varying camera viewpoints, object poses, and lighting conditions can be generated systematically. After training and testing the neural network, we compare the performance of a network trained on simulated images with that of a network trained on images of the physical object captured by a real camera. Simulated data can speed up the complex and time-consuming task of gathering training data, and can increase the robustness of object recognition by making much larger amounts of data available.
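The systematic generation of varied viewpoints, object poses, and lighting described above can be illustrated with a minimal sketch. This is not the authors' pipeline; the parameter names, value ranges, and the `SimulatedSample`/`generate_dataset` names are hypothetical, and the rendering step itself is omitted. The point the sketch makes is that each synthetic sample carries its ground-truth pose label for free, which is what removes the manual labeling bottleneck mentioned in the abstract.

```python
import random
from dataclasses import dataclass


@dataclass
class SimulatedSample:
    """Label record for one rendered view (ranges below are illustrative)."""
    camera_azimuth_deg: float     # camera viewpoint around the bin
    camera_elevation_deg: float   # camera viewpoint above the bin
    object_xyz_m: tuple           # ground-truth object position in the bin
    object_yaw_deg: float         # ground-truth in-plane object rotation
    light_intensity: float        # relative illumination level


def generate_dataset(n, seed=0):
    """Sample n varied scene configurations with known pose labels.

    A renderer would consume each record to produce the corresponding
    training image; the record itself is already the training label.
    """
    rng = random.Random(seed)
    samples = []
    for _ in range(n):
        samples.append(SimulatedSample(
            camera_azimuth_deg=rng.uniform(0.0, 360.0),
            camera_elevation_deg=rng.uniform(20.0, 80.0),
            object_xyz_m=(rng.uniform(-0.2, 0.2),
                          rng.uniform(-0.2, 0.2),
                          rng.uniform(0.0, 0.1)),
            object_yaw_deg=rng.uniform(0.0, 360.0),
            light_intensity=rng.uniform(0.5, 1.5),
        ))
    return samples
```

Because the sampler is seeded, a dataset can be regenerated exactly, and scaling it up is a matter of raising `n` rather than capturing and hand-labeling more real camera images.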

Citation (APA)

Surák, M., Košnar, K., Kulich, M., Kozák, V., & Přeučil, L. (2019). Visual Data Simulation for Deep Learning in Robot Manipulation Tasks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11472 LNCS, pp. 402–411). Springer Verlag. https://doi.org/10.1007/978-3-030-14984-0_29
