Affective classification model based on emotional user experience and visual markers in YouTube video


Abstract

A video is composed of rendered elements such as text, audio, and visual elements. It may convey messages that emotionally engage viewers via embedded elements that demand visual attention, referred to as Visual Markers (VM). However, little attention has been paid to VM, particularly in terms of determining which VM influence viewers' emotional experience. A lack of understanding of VM and their impact on viewers' emotional experience may have negative consequences and hamper efficient video classification and filtering. This is crucial when, for instance, a YouTube video is used for a malicious agenda. To fill this gap, this research was conducted to identify VM in Extremist YouTube Videos (EYV), to determine viewers' significant emotional responses upon watching EYV, and to develop an affective classification model based on emotional User Experience (UX) and VM in YouTube videos. The research conducted a Kansei evaluation using 20 YouTube video specimens and 80 respondents. Multivariate analysis was performed to determine the structure of emotions and the relationship between VM and emotional responses, and to classify the emotional responses and influential VM. The results enabled this research to develop an affective classification model comprising three emotional dimensions: offensive, intrigue, and awkward. The model contributes a new understanding to the body of knowledge on emotionally evocative video elements and provides insights for authorities, policy makers, and other stakeholders to manage the classification of emotionally evocative videos. It could be used as a basis for formulating an algorithm to filter video content. Although the model was developed under certain limitations, it offers some novelty by linking affect to VM in video classification. Future work could enhance its applicability with a wider scope and a larger population of subjects and instruments. Additionally, video producers could extend the model to produce videos capable of invoking a targeted emotion in viewers.
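The abstract suggests the model could serve as a basis for a content-filtering algorithm. As a minimal illustrative sketch of that idea (not the authors' implementation), a video could be scored on the model's three emotional dimensions and flagged when the dominant dimension exceeds a threshold; the scoring scale and the threshold value here are assumptions for illustration only.

```python
# Hypothetical sketch of a filter built on the model's three emotional
# dimensions (offensive, intrigue, awkward). The [0, 1] scoring scale
# and the 0.7 threshold are illustrative assumptions, not values from
# the paper.

DIMENSIONS = ("offensive", "intrigue", "awkward")

def classify_video(scores: dict, threshold: float = 0.7) -> str:
    """Return the dominant emotional dimension if its score reaches the
    threshold, otherwise 'neutral'. `scores` maps each dimension to a
    value in [0, 1], e.g. aggregated viewer Kansei ratings."""
    dominant = max(DIMENSIONS, key=lambda d: scores.get(d, 0.0))
    return dominant if scores.get(dominant, 0.0) >= threshold else "neutral"

print(classify_video({"offensive": 0.85, "intrigue": 0.40, "awkward": 0.10}))
print(classify_video({"offensive": 0.20, "intrigue": 0.30, "awkward": 0.25}))
```

In practice the per-dimension scores would come from the Kansei evaluation instrument, and the decision rule would be tuned to the stakeholder's filtering policy rather than a fixed cutoff.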

CITATION STYLE
APA

Rosli, R. M., Syed Aris, S. R., Aziz, A. A., Tsuchiya, T., & Lokman, A. M. (2021). Affective classification model based on emotional user experience and visual markers in YouTube video. International Journal of Advanced Technology and Engineering Exploration, 8(81), 970–988. https://doi.org/10.19101/IJATEE.2021.874291
