The iCub Multisensor Datasets for Robot and Computer Vision Applications

Abstract

Multimodal information can significantly increase the perceptual capabilities of robotic agents, at the cost of more complex sensory processing. This complexity can be reduced by employing machine learning techniques, provided that enough meaningful data is available for training. This paper reports on the creation of novel datasets acquired with the iCub robot equipped with an additional depth sensor and color camera. We used the robot to acquire color and depth information for 210 objects in different acquisition scenarios. The result is a set of large-scale datasets suited to robot and computer vision applications: multisensory object representation, action recognition, and rotation- and distance-invariant object recognition.
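The abstract describes paired color and depth acquisitions per object. A minimal sketch of how such paired RGB-D samples might be bundled for downstream learning is shown below; the function name, the millimeter depth convention, and the VGA resolution are assumptions for illustration, not the dataset's actual format.

```python
import numpy as np

def pair_rgb_depth(rgb, depth):
    """Bundle a color image with its registered depth map into one multimodal sample.

    Assumes the depth map is aligned to the color frame (hypothetical layout;
    the published dataset's actual structure may differ).
    """
    if rgb.shape[:2] != depth.shape[:2]:
        raise ValueError("color and depth frames must share the same resolution")
    return {"rgb": rgb, "depth": depth}

# Stand-in frames at a VGA-like resolution (placeholder data, not the real dataset).
rgb = np.zeros((480, 640, 3), dtype=np.uint8)      # 8-bit color image
depth = np.zeros((480, 640), dtype=np.uint16)       # depth in millimeters is a common convention
sample = pair_rgb_depth(rgb, depth)
```

Aligning the two modalities at load time keeps per-pixel correspondence available to models that fuse color and depth channels.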

Citation (APA)

Kirtay, M., Albanese, U., Vannucci, L., Schillaci, G., Laschi, C., & Falotico, E. (2020). The iCub Multisensor Datasets for Robot and Computer Vision Applications. In ICMI 2020 - Proceedings of the 2020 International Conference on Multimodal Interaction (pp. 685–688). Association for Computing Machinery, Inc. https://doi.org/10.1145/3382507.3418847
