Abstract
Precise emotion ground truth labels for 360° virtual reality (VR) video watching are essential for fine-grained predictions under varying viewing behavior. However, current annotation techniques either rely on post-stimulus discrete self-reports, or on real-time, continuous emotion annotation (RCEA) for desktop/mobile settings only. We present RCEA for 360° VR videos (RCEA-360VR), where we evaluate in a controlled study (N=32) the usability of two peripheral visualization techniques: HaloLight and DotSize. We furthermore develop a method that considers head movements when fusing labels. Using physiological, behavioral, and subjective measures, we show that (1) both techniques do not increase users' workload or sickness, nor break presence; (2) our continuous valence and arousal annotations are consistent with discrete within-VR and original stimuli ratings; (3) users exhibit high similarity in viewing behavior, where fused ratings perfectly align with intended labels. Our work contributes usable and effective techniques for collecting fine-grained viewport-dependent emotion labels in 360° VR.
Xue, T., El Ali, A., & Zhang, T. (2021). RCEA-360VR: Real-time, continuous emotion annotation in 360° VR videos for collecting precise viewport-dependent ground truth labels. In Proceedings of the CHI Conference on Human Factors in Computing Systems. Association for Computing Machinery. https://doi.org/10.1145/3411764.3445487