Brain-inspired Multimodal Learning Based on Neural Networks

  • Liu C
  • Sun F
  • Zhang B
Citations: N/A
Readers: 24 (Mendeley users who have this article in their library)

Abstract

Modern computational models have leveraged advances in the biological study of the human brain. This study addresses the problem of multimodal learning with the help of brain-inspired models. Specifically, a unified multimodal learning architecture is proposed based on deep neural networks, inspired by the biology of the visual cortex of the human brain. The unified framework is validated on two practical multimodal learning tasks: image captioning, which involves visual and natural-language signals, and visual–haptic fusion, which involves haptic and visual signals. Extensive experiments conducted under the framework achieve competitive results.
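The abstract does not detail the architecture, but the core idea of visual–haptic fusion can be illustrated with a minimal sketch: extract a feature vector per modality, concatenate them, and project the joint vector through a shared layer. The function and dimension choices below are illustrative assumptions, not the authors' actual model.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_modalities(visual_feat, haptic_feat, W, b):
    """Late fusion sketch: concatenate per-modality features,
    then apply a shared linear projection with a ReLU."""
    joint = np.concatenate([visual_feat, haptic_feat])
    return np.maximum(W @ joint + b, 0.0)

# Hypothetical feature vectors (e.g., from a CNN and a tactile encoder).
visual = rng.standard_normal(8)
haptic = rng.standard_normal(4)

# Randomly initialized fusion layer: 12 inputs -> 5 fused features.
W = rng.standard_normal((5, 12))
b = np.zeros(5)

fused = fuse_modalities(visual, haptic, W, b)
print(fused.shape)  # (5,)
```

In practice such a fusion layer would sit on top of learned modality-specific encoders and be trained end-to-end; this snippet only shows the data flow.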

Citation (APA)

Liu, C., Sun, F., & Zhang, B. (2018). Brain-inspired Multimodal Learning Based on Neural Networks. Brain Science Advances, 4(1), 61–72. https://doi.org/10.26599/bsa.2018.9050004
