Performance analysis of acoustic emotion recognition for in-car conversational interfaces

Abstract

The automotive industry is integrating more technology into the standard new-car kit. New cars often provide speech-enabled communications such as voice dialling, as well as control over the car cockpit, including entertainment systems, climate and satellite navigation. In addition, there is the potential for richer interaction between driver and car by automatically recognising the emotional state of the driver and responding intelligently and appropriately. Driver emotion and driving performance are often intrinsically linked, and knowledge of the driver's emotional state can enable the car to support the driving experience and encourage better driving. Automatically recognising driver emotion is a challenge, and this paper presents a performance analysis of our in-car acoustic emotion recognition system. © Springer-Verlag Berlin Heidelberg 2007.
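The full recognition system is described in the paper itself; purely as an illustration of what acoustic emotion recognition typically involves, the sketch below summarises a speech clip with pitch, energy and MFCC statistics and classifies it with an SVM. The librosa and scikit-learn calls, the emotion label set and the feature choices are assumptions for this sketch, not the authors' method.

# Minimal, hypothetical sketch of a generic acoustic emotion classifier.
# Assumes librosa, scikit-learn, and a small labelled corpus of speech clips.
import numpy as np
import librosa
from sklearn.svm import SVC

EMOTIONS = ["neutral", "happy", "angry", "sad"]  # assumed label set, not the paper's

def acoustic_features(wav_path):
    """Summarise one speech clip with MFCC, pitch and energy statistics."""
    y, sr = librosa.load(wav_path, sr=16000)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=13)     # spectral shape over time
    f0 = librosa.yin(y, fmin=60, fmax=400, sr=sr)          # pitch contour
    rms = librosa.feature.rms(y=y)                         # energy contour
    return np.concatenate([
        mfcc.mean(axis=1), mfcc.std(axis=1),
        [np.nanmean(f0), np.nanstd(f0)],
        [rms.mean(), rms.std()],
    ])

def train_classifier(clip_paths, labels):
    """Fit a simple RBF-kernel SVM on per-clip acoustic summaries."""
    X = np.stack([acoustic_features(p) for p in clip_paths])
    clf = SVC(kernel="rbf", probability=True)
    clf.fit(X, labels)
    return clf

# Usage (with a hypothetical labelled set of in-car recordings):
#   clf = train_classifier(["clip1.wav", "clip2.wav"], ["neutral", "angry"])
#   clf.predict([acoustic_features("new_clip.wav")])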

Cite

APA

Jones, C. M., & Jonsson, I. M. (2007). Performance analysis of acoustic emotion recognition for in-car conversational interfaces. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 4555 LNCS, pp. 411–420). Springer Verlag. https://doi.org/10.1007/978-3-540-73281-5_44
