Subtitles in VR 360° video. Results from an eye-tracking experiment


This article is free to access.

Abstract

Virtual and Augmented Reality, collectively known as eXtended Reality, are key technologies for the next generation of human–computer–human interaction. In this context, 360° videos are becoming ubiquitous and are especially suitable for providing immersive experiences thanks to the proliferation of affordable devices. This new medium has untapped potential for the inclusion of modern subtitles to foster media content accessibility (Gejrot et al., 2021), e.g., for deaf and hard-of-hearing people, and to promote cultural inclusivity via language translation (Orero, 2022). Prior research on the presentation of subtitles in 360° videos relied on subjective methods and involved small numbers of participants (Brown et al., 2018; Agulló, 2019; Oncins et al., 2020), leading to inconclusive results. The aim of this paper is to compare two subtitle variables in 360° videos: position (head-locked vs fixed) and colour (monochrome vs coloured). The empirical analysis relies on a novel triangulation of data from three complementary methods: psycho-physiological measures of attentional processes (eye movements), performance measures (media content comprehension), and subjective task-load and preference measures (self-report). Results show that head-locked coloured subtitles are the preferred option.

Citation (APA)

Brescia-Zapata, M., Krejtz, K., Duchowski, A. T., Hughes, C. J., & Orero, P. (2023). Subtitles in VR 360° video. Results from an eye-tracking experiment. Perspectives: Studies in Translation Theory and Practice. https://doi.org/10.1080/0907676X.2023.2268122
