Emotion is a subjective, conscious experience that arises when people face different kinds of stimuli. In this paper, we adopt Deep Canonical Correlation Analysis (DCCA) to extract high-level coordinated representations from EEG and eye-movement data. The parameters of the two views' nonlinear transformations are learned jointly to maximize the correlation between the transformed views. We propose a multi-view emotion recognition framework and evaluate its effectiveness on three real-world datasets. We find that DCCA efficiently learns highly correlated representations, which leads to higher emotion classification accuracy. Our experimental results indicate that the DCCA model outperforms state-of-the-art methods, with mean accuracies of 94.58% on the SEED dataset, 87.45% on the SEED-IV dataset, and 88.51% and 84.98% on the two binary classification tasks of the DEAP dataset, respectively.
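The objective described above (jointly transforming two views so that their correlation is maximized) can be illustrated with a minimal sketch of the CCA-style correlation term that DCCA maximizes. This is not the authors' implementation; it assumes the standard formulation in which the total correlation is the sum of singular values of the whitened cross-covariance matrix, with a small regularizer added for numerical stability:

```python
import numpy as np

def cca_correlation(H1, H2, reg=1e-4):
    """Total canonical correlation between two views.

    H1, H2: (m, d) arrays of transformed view outputs (rows = samples).
    Returns the sum of canonical correlations, which DCCA maximizes
    by adjusting the parameters of the two views' nonlinear transforms.
    """
    m = H1.shape[0]
    # Center each view.
    H1c = H1 - H1.mean(axis=0)
    H2c = H2 - H2.mean(axis=0)
    # Regularized covariance and cross-covariance matrices.
    S11 = H1c.T @ H1c / (m - 1) + reg * np.eye(H1.shape[1])
    S22 = H2c.T @ H2c / (m - 1) + reg * np.eye(H2.shape[1])
    S12 = H1c.T @ H2c / (m - 1)

    def inv_sqrt(S):
        # Inverse square root of a symmetric positive-definite matrix.
        w, V = np.linalg.eigh(S)
        return V @ np.diag(w ** -0.5) @ V.T

    # Whitened cross-covariance; its singular values are the
    # canonical correlations between the two views.
    T = inv_sqrt(S11) @ S12 @ inv_sqrt(S22)
    return np.linalg.svd(T, compute_uv=False).sum()
```

In a full DCCA pipeline, the negative of this quantity would serve as the loss for training the two networks; here it only demonstrates the correlation objective itself on fixed inputs.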
Citation:
Qiu, J.-L., Liu, W., & Lu, B.-L. (2018). Multi-view emotion recognition using deep canonical correlation analysis. In Lecture Notes in Computer Science (Vol. 11305, pp. 221–231). Springer. https://doi.org/10.1007/978-3-030-04221-9_20