Stereoscopic video quality prediction based on end-to-end dual stream deep neural networks

Abstract

In this paper, we propose a no-reference stereoscopic video quality assessment (NR-SVQA) method based on an end-to-end dual stream deep neural network (DNN) that incorporates left- and right-view sub-networks. The network takes image patch pairs from the left- and right-view pivotal frames as input and evaluates the perceptual quality of each patch pair. By combining multiple convolution, max-pooling, and fully-connected layers with a regression output, the framework learns distortion-related features end-to-end in a purely data-driven manner. A spatiotemporal pooling strategy is then applied to the patch-pair scores to estimate the quality of the entire stereoscopic video. The proposed architecture, which we name the End-to-end Dual-stream deep Neural network (EDN), is trained and tested on a well-known stereoscopic video dataset, partitioned by reference video. Experimental results demonstrate that the proposed method outperforms state-of-the-art algorithms.
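
The following is a minimal sketch of such a dual-stream patch-pair network in PyTorch, provided only for illustration. The layer counts, channel widths, 32x32 patch size, and the simple mean pooling over patch-pair scores are assumptions; the abstract does not specify the paper's exact hyperparameters or pooling strategy.

```python
# Hypothetical sketch of a dual-stream patch-pair quality network.
# All hyperparameters below are illustrative assumptions, not the paper's.
import torch
import torch.nn as nn


class ViewBranch(nn.Module):
    """One view sub-network: stacked convolution + max-pooling layers."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 32x32 -> 16x16
            nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),                      # 16x16 -> 8x8
        )

    def forward(self, x):
        return self.features(x).flatten(start_dim=1)


class DualStreamQualityNet(nn.Module):
    """Left/right branches whose features are fused by concatenation and
    regressed with fully-connected layers to a per-patch-pair quality score."""

    def __init__(self):
        super().__init__()
        self.left = ViewBranch()
        self.right = ViewBranch()
        self.regressor = nn.Sequential(
            nn.Linear(2 * 64 * 8 * 8, 512), nn.ReLU(),
            nn.Linear(512, 1),                    # scalar quality score
        )

    def forward(self, left_patch, right_patch):
        fused = torch.cat([self.left(left_patch), self.right(right_patch)], dim=1)
        return self.regressor(fused)


if __name__ == "__main__":
    net = DualStreamQualityNet()
    # A batch of 8 left/right 32x32 RGB patch pairs from pivotal frames.
    left = torch.rand(8, 3, 32, 32)
    right = torch.rand(8, 3, 32, 32)
    patch_scores = net(left, right)               # shape: (8, 1)
    # Spatiotemporal pooling sketch: average patch-pair scores over space
    # and time (mean pooling is an assumption made for this example).
    video_score = patch_scores.mean()
    print(video_score.item())
```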

Citation (APA)

Zhou, W., Chen, Z., & Li, W. (2018). Stereoscopic video quality prediction based on end-to-end dual stream deep neural networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11166 LNCS, pp. 482–492). Springer Verlag. https://doi.org/10.1007/978-3-030-00764-5_44
