We address the problem of recognizing emotions from speech using features derived from discriminative emotional patterns. Because much work in the field focuses on low-level acoustic features, we explicitly study whether high-level features are useful for classifying emotions. For this purpose, we convert a continuous speech signal into a discretized symbolic signal and extract discriminative patterns capable of distinguishing emotions from one another. The extracted patterns are then used to create a feature set to be fed into a classifier. Experimental results show that patterns alone are good predictors of emotions. When used to build a classifier, pattern features achieve accuracy gains of up to 25% over state-of-the-art acoustic features.
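The pipeline described above (discretize the signal, mine patterns that are frequent in one emotion class but rare in others, then count pattern occurrences as features) can be sketched as follows. This is a minimal illustration, not the authors' exact method: the equal-width binning, the n-gram pattern form, and the frequency-difference scoring are all simplifying assumptions.

```python
from collections import Counter

def discretize(signal, n_bins=4):
    # Map continuous values to symbols 'a', 'b', ... via equal-width bins
    # (a simplifying assumption; the paper's discretization may differ).
    lo, hi = min(signal), max(signal)
    width = (hi - lo) / n_bins or 1.0
    return "".join(chr(ord("a") + min(int((v - lo) / width), n_bins - 1))
                   for v in signal)

def ngrams(s, n=3):
    # Candidate patterns: all length-n substrings of the symbolic signal.
    return [s[i:i + n] for i in range(len(s) - n + 1)]

def discriminative_patterns(class_to_signals, n=3, top_k=5):
    # Count pattern frequency per class, then keep, for each class, the
    # top_k patterns whose count most exceeds their count in other classes.
    counts = {c: Counter(p for sig in sigs for p in ngrams(discretize(sig), n))
              for c, sigs in class_to_signals.items()}
    patterns = set()
    for c, cnt in counts.items():
        other = Counter()
        for c2, cnt2 in counts.items():
            if c2 != c:
                other.update(cnt2)
        scored = sorted(cnt, key=lambda p: cnt[p] - other.get(p, 0), reverse=True)
        patterns.update(scored[:top_k])
    return sorted(patterns)

def to_features(signal, patterns, n=3):
    # Feature vector: occurrence count of each mined pattern in the signal.
    grams = Counter(ngrams(discretize(signal), n))
    return [grams.get(p, 0) for p in patterns]
```

The resulting fixed-length vectors can then be fed to any standard classifier, mirroring the abstract's two-stage design of pattern mining followed by classification.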
Avci, U., Akkurt, G., & Unay, D. (2019). A pattern mining approach in feature extraction for emotion recognition from speech. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11658 LNAI, pp. 54–63). Springer Verlag. https://doi.org/10.1007/978-3-030-26061-3_6