Multimodal object classification models inspired by multisensory integration in the brain

5 citations · 19 readers have this article in their Mendeley library.

Abstract

Two multimodal classification models aimed at enhancing object classification through the integration of semantically congruent unimodal stimuli are introduced. The feature-integrating model, inspired by multisensory integration in the subcortical superior colliculus, combines unimodal features, which are subsequently classified by a multimodal classifier. The decision-integrating model, inspired by integration in primary cortical areas, classifies unimodal stimuli independently using unimodal classifiers and classifies the combined decisions using a multimodal classifier. The multimodal classifier models are implemented using multilayer perceptrons and multivariate statistical classifiers. Experiments involving the classification of noisy and attenuated auditory and visual representations of ten digits are designed to demonstrate the properties of the multimodal classifiers and to compare the performances of multimodal and unimodal classifiers. The experimental results show that the multimodal classification systems exhibit an important aspect of the "inverse effectiveness principle" by yielding significantly higher classification accuracies than the unimodal classifiers. Furthermore, the flexibility offered by the generalized models enables the simulation and evaluation of various combinations of multimodal stimuli and classifiers under varying uncertainty conditions.
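The two architectures described in the abstract can be contrasted in a minimal sketch. The paper uses multilayer perceptrons and multivariate statistical classifiers on auditory and visual digit representations; here, purely for illustration, nearest-centroid classifiers and synthetic Gaussian features stand in for both — the feature dimensions, noise levels, and variable names below are assumptions, not the paper's setup. The feature-integrating model concatenates the two unimodal feature vectors before classification; the decision-integrating model classifies each modality separately and then classifies the combined per-class scores.

```python
import numpy as np

rng = np.random.default_rng(0)
n_classes, n_per_class, d = 10, 20, 8  # 10 digit classes; dims are illustrative

# Synthetic "auditory" (a) and "visual" (v) class prototypes plus noisy samples
centers_a = rng.normal(0.0, 3.0, (n_classes, d))
centers_v = rng.normal(0.0, 3.0, (n_classes, d))
labels = np.repeat(np.arange(n_classes), n_per_class)
X_a = centers_a[labels] + rng.normal(0.0, 1.5, (len(labels), d))
X_v = centers_v[labels] + rng.normal(0.0, 1.5, (len(labels), d))

def centroid_fit(X, y):
    # One centroid per class (a stand-in for the paper's trained classifiers)
    return np.stack([X[y == c].mean(axis=0) for c in range(n_classes)])

def centroid_scores(X, centroids):
    # Negative Euclidean distance to each centroid = per-class decision score
    return -np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)

# Feature-integrating model: fuse unimodal features, then classify once
X_fused = np.hstack([X_a, X_v])
pred_feat = centroid_scores(X_fused, centroid_fit(X_fused, labels)).argmax(axis=1)

# Decision-integrating model: classify each modality, then classify the
# concatenated unimodal decision scores with a second-stage classifier
S = np.hstack([centroid_scores(X_a, centroid_fit(X_a, labels)),
               centroid_scores(X_v, centroid_fit(X_v, labels))])
pred_dec = centroid_scores(S, centroid_fit(S, labels)).argmax(axis=1)

acc_feat = (pred_feat == labels).mean()
acc_dec = (pred_dec == labels).mean()
```

With degraded (noisier or attenuated) unimodal inputs, comparing `acc_feat` and `acc_dec` against the single-modality accuracies is the kind of experiment the abstract describes; this sketch only fixes the two fusion topologies, not the paper's classifiers or stimuli.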



Citation (APA)
Amerineni, R., Gupta, R. S., & Gupta, L. (2019). Multimodal object classification models inspired by multisensory integration in the brain. Brain Sciences, 9(1). https://doi.org/10.3390/brainsci9010003

Readers' Seniority

PhD / Post grad / Masters / Doc: 7 (70%)
Professor / Associate Prof.: 1 (10%)
Lecturer / Post doc: 1 (10%)
Researcher: 1 (10%)

Readers' Discipline

Neuroscience: 4 (44%)
Computer Science: 3 (33%)
Nursing and Health Professions: 1 (11%)
Engineering: 1 (11%)

Article Metrics

Blog Mentions: 1
