Information fusion for perceptual feedback: A brain activity sonification approach


Abstract

When analysing multichannel processes, it is often convenient to use some form of visualisation to help understand and interpret spatio-temporal dependencies between the channels, and to perform input variable selection. This is particularly advantageous when the levels of noise are high, when the active channel changes its spatial location over time, and for spatio-temporal processes in which several channels carry meaningful information, such as electroencephalogram (EEG)-based brain activity monitoring. To provide insight into the dynamics of brain electrical responses, spatial sonification of multichannel EEG is performed, whereby the information from active channels is fused into music-like audio. Owing to its fusion-via-fission mode of operation, empirical mode decomposition (EMD) is employed as a time-frequency analyser, and the brain responses to visual stimuli are sonified to provide audio feedback. Such perceptual feedback has enormous potential in multimodal brain-computer and brain-machine interfaces (BCI/BMI). © 2008 Springer US.

Citation (APA)

Rutkowski, T. M., Cichocki, A., & Mandic, D. (2008). Information fusion for perceptual feedback: A brain activity sonification approach. In Signal Processing Techniques for Knowledge Extraction and Information Fusion (pp. 261–273). Springer US. https://doi.org/10.1007/978-0-387-74367-7_14
