Towards expressive musical robots: A cross-modal framework for emotional gesture, voice and music


Abstract

It has long been speculated that expressions of emotion across different modalities share the same underlying 'code', whether it be a dance step, a musical phrase, or a tone of voice. This is the first attempt to implement this theory across three modalities, inspired by the polyvalence and repeatability of robots. We propose a unifying framework to generate emotions across voice, gesture, and music by representing emotional states as a 4-parameter tuple of speed, intensity, regularity, and extent (SIRE). Our results show that a simple 4-tuple can capture four emotions recognizable at greater-than-chance rates across gesture and voice, and at least two emotions across all three modalities. An application to multi-modal, expressive musical robots is discussed.

© 2012 Lim et al.; licensee Springer.
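The abstract's central idea is a modality-independent emotional state expressed as a 4-parameter SIRE tuple. Below is a minimal sketch of what such a representation might look like in Python. The four parameter names (speed, intensity, regularity, extent) come from the abstract; the per-emotion values and the mapping to voice parameters are purely illustrative assumptions, not the authors' published mappings.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class SIRE:
    """Modality-independent emotional state; each parameter normalized to [0, 1]."""
    speed: float        # e.g., tempo of a gesture or musical phrase
    intensity: float    # e.g., vocal loudness or force of movement
    regularity: float   # e.g., rhythmic evenness versus jitter
    extent: float       # e.g., pitch range or spatial amplitude of a gesture

# Hypothetical example: a high-arousal "happiness" state.
# These numbers are guesses for illustration only.
happiness = SIRE(speed=0.8, intensity=0.7, regularity=0.8, extent=0.9)

def to_voice(state: SIRE) -> dict:
    """Render the same tuple as speech parameters (names are assumptions)."""
    return {
        "speech_rate": state.speed,
        "volume": state.intensity,
        "pitch_stability": state.regularity,
        "pitch_range": state.extent,
    }

print(to_voice(happiness))
```

The appeal of such a scheme is that one tuple can drive several renderers (voice, gesture, music) in parallel, which is what makes the cross-modal recognition result in the abstract testable.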

Citation (APA)
Lim, A., Ogata, T., & Okuno, H. G. (2012). Towards expressive musical robots: A cross-modal framework for emotional gesture, voice and music. Eurasip Journal on Audio, Speech, and Music Processing, 2012(1). https://doi.org/10.1186/1687-4722-2012-3
