Emotion recognition from natural speech is a challenging problem. The audio sub-challenge represents an initial step towards building an efficient audio-visual emotion recognition system that can detect emotions in real-life applications (e.g. human-machine interaction and communication). The SEMAINE database, which consists of emotionally colored conversations, is used as the benchmark. This paper presents our system for recognizing emotion from speech in terms of positive/negative valence and high/low arousal, expectancy, and power. We introduce a new set of features, including co-occurrence-matrix-based features and frequency-domain energy-distribution features, and compare them with well-known prosodic and spectral features. Classification using the proposed features shows promising results relative to the classical features on both the development and test sets. © 2011 Springer-Verlag.
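The abstract does not spell out how the two new feature families are computed. As a rough, hypothetical illustration only (the paper's exact definitions may differ), a co-occurrence matrix can be built over a quantized magnitude spectrogram in the style of a gray-level co-occurrence matrix (GLCM), and a frequency-domain energy distribution can be summarized as the fraction of spectral energy falling in each of several frequency bands. All parameter choices below (FFT size, hop, number of levels/bands) are illustrative assumptions:

```python
import numpy as np

def _spectrogram(signal, n_fft=256, hop=128):
    """Magnitude spectrogram via a simple Hann-windowed STFT (illustrative)."""
    win = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * win
              for i in range(0, len(signal) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))

def cooccurrence_features(signal, n_fft=256, hop=128, levels=8):
    """Sketch of co-occurrence-matrix features (contrast, energy,
    homogeneity) for a horizontal (time-adjacent) pixel offset.
    This is the classic GLCM recipe applied to a time-frequency
    image, not necessarily the paper's exact formulation."""
    spec = _spectrogram(signal, n_fft, hop)
    # Quantize log-magnitudes into a small number of discrete levels.
    logspec = np.log1p(spec)
    lo, hi = logspec.min(), logspec.max()
    q = np.minimum((levels * (logspec - lo) / (hi - lo + 1e-12)).astype(int),
                   levels - 1)
    # Count level pairs for horizontally adjacent cells, then normalize.
    C = np.zeros((levels, levels))
    np.add.at(C, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    P = C / C.sum()
    i, j = np.indices(P.shape)
    return {
        "contrast": float(((i - j) ** 2 * P).sum()),
        "energy": float((P ** 2).sum()),
        "homogeneity": float((P / (1 + np.abs(i - j))).sum()),
    }

def band_energy_distribution(signal, n_fft=256, hop=128, n_bands=4):
    """Sketch of a frequency-domain energy-distribution feature:
    the fraction of total spectral energy in equal-width bands."""
    spec = _spectrogram(signal, n_fft, hop)
    e = (spec ** 2).sum(axis=0)              # energy per frequency bin
    bands = np.array_split(e, n_bands)       # contiguous equal-width bands
    return np.array([b.sum() for b in bands]) / e.sum()
```

For a pure low-frequency tone, for example, `band_energy_distribution` concentrates nearly all of the mass in the lowest band, while the co-occurrence statistics reflect how smoothly energy varies across adjacent time frames.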
CITATION STYLE
Sayedelahl, A., Fewzee, P., Kamel, M. S., & Karray, F. (2011). Audio-based emotion recognition from natural conversations based on co-occurrence matrix and frequency domain energy distribution features. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 6975 LNCS, pp. 407–414). https://doi.org/10.1007/978-3-642-24571-8_52