Object Description Using Visual and Tactile Data



Abstract

With the development of vision and haptic sensor technologies, robots have become increasingly capable of perceiving their external environment. Although machine vision and haptics have surpassed human perception in some respects, it remains difficult for robots to describe objects from multiple viewpoints by combining the visual and haptic modalities. In this study, convolutional neural networks are used to extract visual and haptic features separately, and the two types of features are then fused. Multitask learning is then combined with multilabel classification to form a multitask-multilabel classification method, which identifies the color, shape, material attributes, and class of an object from the fused visual-haptic feature vector. To verify the effectiveness of the proposed object description method, experiments are conducted on the PHAC-2 dataset and a self-collected VHAC dataset. The results show that, among the compared methods, the proposed method produces the most accurate object descriptions while using the fewest parameters.
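To make the pipeline concrete, below is a minimal PyTorch sketch of the kind of architecture the abstract describes: one CNN branch per modality, feature fusion by concatenation, and one classification head per attribute, trained with a summed multitask loss in which the material head is treated as multilabel. All layer sizes, label counts, the 1-D treatment of the tactile signal, and the concatenation fusion are illustrative assumptions, not details taken from the paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VisualHapticDescriber(nn.Module):
    """Two CNN branches, concatenation fusion, one head per task.
    All sizes and label counts here are illustrative assumptions."""

    def __init__(self, n_color=8, n_shape=6, n_material=10, n_class=20):
        super().__init__()
        # Visual branch: small 2-D CNN over RGB images.
        self.visual = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),          # -> (B, 64)
        )
        # Haptic branch: 1-D CNN over multi-channel tactile time series.
        self.haptic = nn.Sequential(
            nn.Conv1d(4, 32, 5, padding=2), nn.ReLU(), nn.MaxPool1d(2),
            nn.Conv1d(32, 64, 5, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),          # -> (B, 64)
        )
        fused = 64 + 64  # fusion by concatenating the two feature vectors
        # Multitask heads: color/shape/class are single-label;
        # material emits one logit per attribute (multilabel).
        self.color = nn.Linear(fused, n_color)
        self.shape = nn.Linear(fused, n_shape)
        self.material = nn.Linear(fused, n_material)
        self.object_class = nn.Linear(fused, n_class)

    def forward(self, image, tactile):
        z = torch.cat([self.visual(image), self.haptic(tactile)], dim=1)
        return self.color(z), self.shape(z), self.material(z), self.object_class(z)

# Multitask-multilabel objective: sum of per-head losses on dummy data.
# Single-label tasks use cross-entropy; the multilabel material task
# uses binary cross-entropy over per-attribute logits.
model = VisualHapticDescriber()
img = torch.randn(2, 3, 64, 64)   # dummy RGB batch
tac = torch.randn(2, 4, 128)      # dummy 4-channel tactile signals
c, s, m, k = model(img, tac)
loss = (F.cross_entropy(c, torch.randint(0, 8, (2,)))
        + F.cross_entropy(s, torch.randint(0, 6, (2,)))
        + F.binary_cross_entropy_with_logits(
            m, torch.randint(0, 2, (2, 10)).float())
        + F.cross_entropy(k, torch.randint(0, 20, (2,))))
loss.backward()
```

Summing the per-task losses with equal weights is the simplest multitask formulation; weighted sums or uncertainty-based task weighting are common refinements when tasks differ in difficulty or scale.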

Citation (APA)

Zhou, M., Zhang, P., Shan, D., Chen, Z., & Wang, X. (2022). Object Description Using Visual and Tactile Data. IEEE Access, 10, 54525–54536. https://doi.org/10.1109/ACCESS.2022.3174874
