This study applies MFCC-based models to both the recognition of emotional speech and the recognition of emotions in speech. More specifically, it investigates the performance of phone-level models. First, results from performing forced alignment for phonetic segmentation on GEMEP, a novel multimodal corpus of acted emotional utterances, are presented; the newly acquired segmentations are then used in emotion-recognition experiments. © Springer-Verlag Berlin Heidelberg 2007.
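The abstract refers to MFCC-based models as the acoustic front end. As a rough illustration of what such features look like, the following is a minimal NumPy/SciPy sketch of a standard MFCC pipeline (framing, power spectrum, mel filterbank, log, DCT); all parameter values (16 kHz sample rate, 25 ms frames, 26 mel bands, 13 coefficients) are common defaults assumed here, not taken from the paper.

```python
import numpy as np
from scipy.fftpack import dct

def hz_to_mel(f):
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc(signal, sr=16000, frame_len=400, hop=160, n_fft=512,
         n_mels=26, n_ceps=13):
    """Compute MFCC features for a mono signal (a textbook sketch)."""
    # Slice the signal into overlapping frames and apply a Hamming window
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop: i * hop + frame_len]
                       for i in range(n_frames)])
    frames = frames * np.hamming(frame_len)
    # Power spectrum of each frame
    power = np.abs(np.fft.rfft(frames, n_fft)) ** 2 / n_fft
    # Triangular mel-spaced filterbank
    mel_pts = np.linspace(hz_to_mel(0.0), hz_to_mel(sr / 2.0), n_mels + 2)
    bins = np.floor((n_fft + 1) * mel_to_hz(mel_pts) / sr).astype(int)
    fbank = np.zeros((n_mels, n_fft // 2 + 1))
    for m in range(1, n_mels + 1):
        left, center, right = bins[m - 1], bins[m], bins[m + 1]
        for k in range(left, center):
            fbank[m - 1, k] = (k - left) / max(center - left, 1)
        for k in range(center, right):
            fbank[m - 1, k] = (right - k) / max(right - center, 1)
    # Log mel energies, then DCT to decorrelate -> cepstral coefficients
    log_mel = np.log(power @ fbank.T + 1e-10)
    return dct(log_mel, type=2, axis=1, norm='ortho')[:, :n_ceps]

# Example: one second of a 440 Hz tone at 16 kHz
t = np.arange(16000) / 16000.0
feats = mfcc(np.sin(2 * np.pi * 440.0 * t))
print(feats.shape)  # (n_frames, n_ceps)
```

In practice such frame-level MFCC vectors would feed the phone-level models the paper evaluates, both for the forced-alignment step and for the emotion-recognition experiments.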
CITATION STYLE
Pirker, H. (2007). Mixed feelings about using phoneme-level models in emotion recognition. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4738 LNCS, pp. 772–773). Springer Verlag. https://doi.org/10.1007/978-3-540-74889-2_92