A unified evaluation framework for head motion prediction methods in 360° videos


Abstract

Streaming 360° videos is a major challenge for the development of Virtual Reality and requires a reliable head motion predictor to identify which region of the sphere to send in high quality, thereby saving data rate. Different head motion predictors have been proposed recently. Some of these works use similar evaluation metrics or even share the same dataset; however, none of them compare with each other. In this article we introduce open software that enables the evaluation of heterogeneous head motion prediction methods on various common grounds. The goal is to ease the development of new head/eye motion prediction methods. We first propose an algorithm to create a uniform data structure from each of the datasets. We also describe the algorithms used to compute the saliency maps, estimated either from the raw video content or from the users' statistics. We exemplify how to run existing approaches on customizable settings, and finally present the targeted usage of our open framework: how to train and evaluate a new prediction method, and compare it with existing approaches and baselines in common settings. The entire material (code, datasets, neural network weights and documentation) is publicly available.
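To make the two technical steps in the abstract concrete, here is a minimal sketch of how a ground-truth saliency map can be computed from users' head-position statistics on an equirectangular grid. This is an illustration only, not the framework's actual API: the function names (`to_equirect`, `users_saliency`), the uniform sample format (a list of `(yaw, pitch)` orientations in radians pooled over users for one frame), and the Gaussian-blob approach are assumptions chosen for clarity.

```python
import math

def to_equirect(yaw, pitch, width, height):
    """Map yaw in [-pi, pi] and pitch in [-pi/2, pi/2] to pixel coordinates
    on an equirectangular image (hypothetical convention)."""
    x = (yaw + math.pi) / (2 * math.pi) * (width - 1)
    y = (math.pi / 2 - pitch) / math.pi * (height - 1)
    return x, y

def users_saliency(samples, width=64, height=32, sigma=3.0):
    """Ground-truth saliency sketch: sum a Gaussian blob at each user's
    head position, then normalize the map so it sums to 1.

    `samples` is a list of (yaw, pitch) head orientations in radians,
    pooled over all users for a single video frame -- one possible
    'uniform data structure' extracted from a heterogeneous dataset.
    """
    sal = [[0.0] * width for _ in range(height)]
    for yaw, pitch in samples:
        cx, cy = to_equirect(yaw, pitch, width, height)
        for row in range(height):
            for col in range(width):
                # Wrap horizontally: equirectangular maps are periodic in yaw.
                dx = min(abs(col - cx), width - abs(col - cx))
                dy = row - cy
                sal[row][col] += math.exp(-(dx * dx + dy * dy) / (2 * sigma ** 2))
    total = sum(sum(row) for row in sal)
    return [[v / total for v in row] for row in sal]
```

A user looking straight ahead, `users_saliency([(0.0, 0.0)])`, yields a map peaked at the center of the equirectangular image. Content-based saliency would instead come from a vision model run on the video frames; the user-statistics variant shown here needs only the pooled head traces.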

Citation (APA)

Rondón, M. F. R., Sassatelli, L., Aparicio-Pardo, R., & Precioso, F. (2020). A unified evaluation framework for head motion prediction methods in 360° videos. In MMSys 2020 - Proceedings of the 2020 Multimedia Systems Conference (pp. 279–284). Association for Computing Machinery, Inc. https://doi.org/10.1145/3339825.3394934
