Emotion prediction of sound events based on transfer learning


Abstract

Processing generalized sound events with the purpose of predicting the emotion they might evoke is a relatively young research field. Tools, datasets, and methodologies to address such a challenging task are still under development, far from any standardized format. This work aims to cover this gap by revealing and exploiting potential similarities existing during the perception of emotions evoked by sound events and music. To this end, we propose (a) the usage of temporal modulation features and (b) a transfer learning module based on an Echo State Network assisting the prediction of valence and arousal measurements associated with generalized sound events. The effectiveness of the proposed transfer learning solution is demonstrated via a thoroughly designed experimental phase employing both sound and music data. The results demonstrate the importance of transfer learning in this field and encourage further research on approaches which address the problem in a cooperative way.
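The abstract's pipeline, a fixed random reservoir (Echo State Network) with a linear readout trained on one domain and reused on another, can be sketched as follows. This is a minimal illustration of the standard ESN construction, not the authors' exact architecture; all dimensions, data, and the ridge parameter are hypothetical stand-ins.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions: 20-dim temporal-modulation features per frame,
# 200 reservoir units, 2 targets (valence, arousal).
n_in, n_res, n_out = 20, 200, 2

# Random reservoir rescaled to spectral radius < 1 so the echo-state
# property holds (standard ESN recipe, not the paper's exact setup).
W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))

def run_reservoir(X):
    """Collect reservoir states for a feature sequence X of shape (T, n_in)."""
    x = np.zeros(n_res)
    states = []
    for u in X:
        x = np.tanh(W_in @ u + W @ x)
        states.append(x.copy())
    return np.asarray(states)

# Synthetic stand-ins for the music (source) and sound-event (target) domains.
X_music = rng.standard_normal((500, n_in))
Y_music = rng.standard_normal((500, n_out))   # valence/arousal labels
X_sound = rng.standard_normal((100, n_in))

# Ridge-regression readout fitted on the source (music) domain...
S = run_reservoir(X_music)
ridge = 1e-2
W_out = np.linalg.solve(S.T @ S + ridge * np.eye(n_res), S.T @ Y_music)

# ...then transferred: the same fixed reservoir and readout produce
# per-frame valence/arousal predictions for the sound-event sequence.
Y_pred = run_reservoir(X_sound) @ W_out
print(Y_pred.shape)   # (100, 2)
```

The key transfer-learning idea captured here is that the reservoir is never retrained: only the cheap linear readout carries the domain-specific mapping, which makes reusing knowledge from the music domain inexpensive.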

Citation (APA)

Ntalampiras, S., & Potamitis, I. (2017). Emotion prediction of sound events based on transfer learning. In Communications in Computer and Information Science (Vol. 744, pp. 303–313). Springer Verlag. https://doi.org/10.1007/978-3-319-65172-9_26
