Research on visual-tactile cross-modality based on generative adversarial network


Abstract

To advance assistive technology for the blind, a generative adversarial network model is proposed to convert the visual modality into the tactile modality. First, two key representations linking vision to touch are identified: the texture image of an object and the audio signal that drives vibrotactile feedback; the task is therefore essentially one of generating audio from images. The authors propose a cross-modal network framework that generates the corresponding vibrotactile signal from a texture image. Notably, the network is end-to-end: it eliminates the traditional intermediate step of converting the texture image into a spectrogram image and maps directly from the visual domain to the tactile domain. A quantitative evaluation system is also proposed to assess the performance of the network model. Experimental results show that the network can convert visual information into tactile signals; the proposed method is shown to be superior to the existing approach of generating vibrotactile signals indirectly, and the applicability of the model is verified.
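The end-to-end mapping the abstract describes, from a 2-D texture image directly to a 1-D vibrotactile waveform with no intermediate spectrogram, can be sketched as a single generator network. The following is a minimal, hypothetical sketch in PyTorch; the layer sizes, image resolution, and waveform length are illustrative assumptions, not details taken from the paper.

```python
# Hypothetical sketch of an end-to-end image-to-vibrotactile generator.
# Architecture details (channels, resolution, audio length) are assumed,
# not specified in the abstract.
import torch
import torch.nn as nn

class VibroTactileGenerator(nn.Module):
    def __init__(self, audio_len: int = 4096):
        super().__init__()
        # 2-D CNN encoder: texture image -> latent vector
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=4, stride=2, padding=1),   # 64 -> 32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=4, stride=2, padding=1),  # 32 -> 16
            nn.ReLU(),
            nn.Flatten(),
            nn.Linear(32 * 16 * 16, 128),
        )
        # Decoder: latent vector -> vibrotactile waveform, no spectrogram step
        self.decoder = nn.Sequential(
            nn.Linear(128, audio_len),
            nn.Tanh(),  # waveform amplitudes constrained to [-1, 1]
        )

    def forward(self, img: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(img))

gen = VibroTactileGenerator()
img = torch.randn(2, 3, 64, 64)   # batch of 64x64 RGB texture images
wave = gen(img)                   # batch of vibrotactile waveforms
print(wave.shape)                 # torch.Size([2, 4096])
```

In an adversarial setup, this generator would be trained against a discriminator that scores (image, waveform) pairs, but the loss formulation and training details are not given in the abstract.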

Citation (APA)

Li, Y., Zhao, H., Liu, H., Lu, S., & Hou, Y. (2021). Research on visual-tactile cross-modality based on generative adversarial network. Cognitive Computation and Systems, 3(2), 131–141. https://doi.org/10.1049/ccs2.12008
