Distinguishing emotional responses to photographs and artwork using a deep learning-based approach


Abstract

Visual stimuli such as photographs and artworks elicit corresponding emotional responses, but establishing whether the emotions evoked by photographs differ from those evoked by artworks has been a long-standing challenge. We address this question using electroencephalogram (EEG) biosignals and a deep convolutional neural network (CNN)-based emotion recognition model. We adopt Russell’s emotion model, which maps emotion keywords such as happy, calm, or sad onto a coordinate system whose axes are valence and arousal. We collected photographs and artwork images matching nine emotion keywords and built eighteen one-minute video clips, one per keyword for each of the two image categories. We recruited forty subjects and measured their emotional responses to the video clips. A t-test on the results indicates that valence differs significantly between photographs and artwork, while arousal does not.
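The comparison described above (a t-test on valence and arousal scores for the two stimulus categories) can be sketched as follows. This is a hypothetical illustration, not the authors' code: the rating values are made up, the `welch_t` helper is an assumed name, and the paper does not specify which t-test variant was used.

```python
# Hypothetical sketch: comparing valence ratings for photographs vs. artwork
# with an independent two-sample (Welch's) t-test. Data values are invented.
import math
from statistics import mean, variance

def welch_t(a, b):
    """Welch's t statistic for two independent samples (unequal variances)."""
    va, vb = variance(a), variance(b)
    na, nb = len(a), len(b)
    se = math.sqrt(va / na + vb / nb)
    return (mean(a) - mean(b)) / se

# Illustrative valence scores on a normalized scale (not from the study)
photo_valence = [0.62, 0.55, 0.70, 0.58, 0.66, 0.61]
art_valence   = [0.41, 0.38, 0.52, 0.35, 0.47, 0.44]

t = welch_t(photo_valence, art_valence)
print(f"t = {t:.2f}")  # a large |t| would suggest the group means differ
```

In practice one would compare the statistic against the t distribution (e.g. via `scipy.stats.ttest_ind` with `equal_var=False`) to obtain a p-value; the sketch above only computes the statistic itself.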

Yang, H., Han, J., & Min, K. (2019). Distinguishing emotional responses to photographs and artwork using a deep learning-based approach. Sensors (Switzerland), 19(24). https://doi.org/10.3390/s19245533
