Emotions in [a]: A perceptual and acoustic study


Abstract

The aim of this investigation is to study how well voice quality conveys emotional content that can be discriminated by human listeners and by a computer. The speech data were produced by nine professional actors (four women, five men). The speakers simulated the following basic emotions in a unit consisting of a vowel extracted from running Finnish speech: neutral, sadness, joy, anger, and tenderness. The automatic discrimination was clearly more successful than human emotion recognition. Human listeners thus apparently need speech samples longer than vowel-length units for reliable emotion discrimination, whereas the machine utilizes quantitative parameters effectively even for short speech samples. © 2006 Taylor & Francis.

Citation (APA)

Toivanen, J., Waaramaa, T., Alku, P., Laukkanen, A. M., Seppänen, T., Väyrynen, E., & Airas, M. (2006). Emotions in [a]: A perceptual and acoustic study. Logopedics Phoniatrics Vocology, 31(1), 43–48. https://doi.org/10.1080/14015430500293926
