Two-layer FoV prediction model for viewport dependent streaming of 360-degree videos

Abstract

As the most representative and widely used content form of Virtual Reality (VR) applications, omnidirectional videos provide an immersive experience by rendering 360-degree scenes. Because only part of an omnidirectional video can be viewed at a time, owing to the characteristics of human vision, field-of-view (FoV) based transmission has been proposed: it delivers high quality inside the FoV and lower quality outside it to reduce the amount of transmitted data. In this scheme, a transient drop in content quality occurs whenever the user's FoV changes; this can be mitigated by predicting the FoV in advance. In this paper, we propose a two-layer model for FoV prediction. The first layer detects content heat maps in an offline process, while the second layer predicts the FoV of a specific user online during the viewing session. We use an LSTM model to calculate the viewing probability of each region given the first-layer results, the user's previous orientations, and the navigation speed. In addition, we set up a correction model to check for and correct unreasonable predictions. Performance evaluation shows that our model achieves higher accuracy and smaller fluctuation than widely used approaches.
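
The abstract describes the online second layer as an LSTM that maps content heat maps, the user's past orientations, and the navigation speed to per-region viewing probabilities, followed by a correction step. The paper itself provides no code, so the following is only a minimal PyTorch sketch of that idea; the class names, tile count, feature dimensions, and the simple distance-based correction rule are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn


class SecondLayerFoVPredictor(nn.Module):
    """Hypothetical sketch of the online (second-layer) predictor:
    an LSTM that, per time step, consumes the per-tile heat values
    from the offline first layer, the user's recent head orientations,
    and the navigation speed, and outputs a viewing probability for
    every tile."""

    def __init__(self, num_tiles=32, orient_dim=3, hidden_dim=128):
        super().__init__()
        # Per-step input: tile heat values + orientation (e.g. yaw/pitch/roll) + speed
        input_dim = num_tiles + orient_dim + 1
        self.lstm = nn.LSTM(input_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_tiles)

    def forward(self, heatmaps, orientations, speed):
        # heatmaps:     (batch, seq_len, num_tiles)   from the offline first layer
        # orientations: (batch, seq_len, orient_dim)  past viewport orientations
        # speed:        (batch, seq_len, 1)           navigation speed
        x = torch.cat([heatmaps, orientations, speed], dim=-1)
        out, _ = self.lstm(x)
        # Viewing probability of each tile after the last observed step
        return torch.sigmoid(self.head(out[:, -1]))


def correct(probs, last_tile, max_jump=8):
    """Toy stand-in for the correction model: suppress tiles that are
    implausibly far from the last observed viewport tile (assumed rule)."""
    idx = torch.arange(probs.shape[-1])
    mask = (idx - last_tile).abs() <= max_jump
    return probs * mask


# Example: 30 observed steps for a batch of 4 users over 32 tiles
model = SecondLayerFoVPredictor()
h = torch.rand(4, 30, 32)
o = torch.rand(4, 30, 3)
s = torch.rand(4, 30, 1)
probs = correct(model(h, o, s), last_tile=10)
```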

Citation (APA)

Li, Y., Xu, Y., Xie, S., Ma, L., & Sun, J. (2019). Two-layer FoV prediction model for viewport dependent streaming of 360-degree videos. In Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering, LNICST (Vol. 262, pp. 501–509). Springer Verlag. https://doi.org/10.1007/978-3-030-06161-6_49
