Learning cross-modal visual-tactile representation using ensembled generative adversarial networks

Abstract

In this study, the authors present a deep learning model that converts visual information into tactile information, so that, after training, different texture images can be mapped to tactile signals that approximate real tactile sensation. The study focuses on classifying the visual information of different images and on the corresponding tactile feedback output. A training model based on ensembled generative adversarial networks is proposed, which is simple to train and yields stable results. In addition to the subjective human perception used by previous methods to judge tactile output, the study also provides an objective, quantitative evaluation system to verify the model's performance. The experimental results show that the learning model can transform the visual information of an image into tactile information that is close to the real tactile sensation, and they also validate the soundness of the tactile evaluation method.
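The abstract does not give implementation details, but the core idea of an ensembled GAN generator for visual-to-tactile mapping can be illustrated with a minimal sketch: several independently initialised conditional generators each map image features plus noise to a tactile waveform, and their outputs are combined (here by simple averaging, an assumption). The layer sizes, feature dimension, noise dimension, signal length, and the averaging scheme are illustrative choices, not taken from the paper.

```python
# Minimal sketch (not the authors' code): an ensemble of conditional GAN
# generators mapping a texture-image feature vector to a 1-D tactile signal.
# All dimensions and the averaging combination rule are illustrative assumptions.
import torch
import torch.nn as nn


class TactileGenerator(nn.Module):
    """One GAN generator: image features + noise -> tactile waveform."""

    def __init__(self, feat_dim=128, noise_dim=32, signal_len=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim + noise_dim, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, 256),
            nn.ReLU(inplace=True),
            nn.Linear(256, signal_len),
            nn.Tanh(),  # tactile signal normalised to [-1, 1]
        )

    def forward(self, img_feat, noise):
        return self.net(torch.cat([img_feat, noise], dim=1))


class EnsembledGenerator(nn.Module):
    """Combine several independently initialised generators by averaging."""

    def __init__(self, n_members=5, **kwargs):
        super().__init__()
        self.members = nn.ModuleList(
            TactileGenerator(**kwargs) for _ in range(n_members)
        )

    def forward(self, img_feat, noise):
        outputs = torch.stack([g(img_feat, noise) for g in self.members])
        return outputs.mean(dim=0)


if __name__ == "__main__":
    ensemble = EnsembledGenerator(n_members=5)
    img_feat = torch.randn(4, 128)  # e.g. CNN features of 4 texture images
    noise = torch.randn(4, 32)
    tactile = ensemble(img_feat, noise)
    print(tactile.shape)  # torch.Size([4, 256])
```

In such a setup, each member would be trained adversarially against its own discriminator on image-tactile pairs; averaging the members at inference time is one simple way to obtain the stability that the abstract attributes to the ensemble.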

Citation (APA)
Li, X., Liu, H., Zhou, J., & Sun, F. (2019). Learning cross-modal visual-tactile representation using ensembled generative adversarial networks. Cognitive Computation and Systems, 1(2), 40–44. https://doi.org/10.1049/ccs.2018.0014
