From Sounds to Music and Emotions

  • Schubert E
  • Ferguson S
  • Farrar N
  • et al.

Abstract

Recent instruments measuring continuous self-reported emotion responses to music have tended to use dimensional rating-scale models of emotion, such as valence (happy to sad). However, numerous retrospective studies of emotion in music use checklist-style responses, usually in the form of emotion words (such as happy, angry, sad…) or facial expressions. A response interface based on six simple sketch-style emotion faces arranged in a clock-like distribution was developed with the aim of allowing participants to quickly and easily rate emotions in music continuously as the music unfolded. We tested the interface using six extracts of music, one targeting each of the six faces: ‘Excited’ (at 1 o’clock), ‘Happy’ (3), ‘Calm’ (5), ‘Sad’ (7), ‘Scared’ (9) and ‘Angry’ (11). Thirty participants rated the emotion expressed by these excerpts on our ‘emotion-face-clock’. By demonstrating how continuous category selections (votes) changed over time, we were able to show that (1) more than one emotion face could be expressed by music at the same time, (2) the emotion face that best portrayed the emotion the music conveyed could change over time, and (3) the change could be attributed to changes in musical structure. Implications for research on orientation time and mixed emotions are discussed.
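The clock-like layout described above can be sketched in code. This is an illustrative sketch only, not the authors' implementation: it assumes the six faces sit at the odd clock positions named in the abstract, and it adds a hypothetical helper, `nearest_emotion`, for snapping an arbitrary pointer angle to the closest face.

```python
# Six emotion faces at odd clock positions, as described in the abstract.
EMOTION_CLOCK = {
    "Excited": 1,
    "Happy": 3,
    "Calm": 5,
    "Sad": 7,
    "Scared": 9,
    "Angry": 11,
}

def clock_angle_deg(hour):
    """Angle of a clock position in degrees, clockwise from 12 o'clock."""
    return (hour % 12) * 30.0

def nearest_emotion(angle_deg):
    """Hypothetical helper: the emotion face closest to a pointer angle,
    using the shorter arc distance around the 360-degree circle."""
    def arc_distance(emotion):
        d = abs(angle_deg - clock_angle_deg(EMOTION_CLOCK[emotion])) % 360
        return min(d, 360 - d)
    return min(EMOTION_CLOCK, key=arc_distance)
```

For example, a pointer at 95 degrees falls closest to the 'Happy' face at 3 o'clock (90 degrees). Sampling such a selection repeatedly yields the time series of categorical votes the study analyses.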

APA

Schubert, E., Ferguson, S., Farrar, N., Taylor, D., & McPherson, G. E. (2013). From Sounds to Music and Emotions. From Sounds to Music and Emotions (Vol. 7900, pp. 1–18).
