Depth based perceptual quality assessment for synthesised camera viewpoints

Abstract

This paper considers visual quality assessment for view synthesis in the context of the 3D video delivery chain. Its aim is to perceptually quantify the reconstruction quality of synthesised camera viewpoints, which is needed both for developing better QoE models for 3D-TV and for better representing the effect of depth maps on view synthesis quality. Existing 2D video quality assessment methods, such as PSNR and SSIM, are extended to assess the perceived quality of synthesised viewpoints based on the depth range. The performance of the extended assessment techniques is measured by correlating their scores on multiple sample videos with Video Quality Metric (VQM) scores, which are a robust reflection of real subjective opinions. © 2012 ICST Institute for Computer Science, Social Informatics and Telecommunications Engineering.
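The abstract does not detail how the depth range enters the extended metrics, but a minimal sketch of the general idea, assuming a simple depth-dependent weighting of pixel errors inside PSNR and a Pearson correlation of metric scores against VQM scores, might look as follows. All function names, weights, and the depth-normalisation convention are illustrative assumptions, not the authors' method.

```python
import numpy as np

def depth_weighted_psnr(ref, syn, depth, near_weight=2.0, far_weight=1.0, max_val=255.0):
    """Illustrative depth-weighted PSNR for a synthesised view.

    Errors in near-depth regions are weighted more heavily than errors in
    far regions (assumed weighting; the paper's exact scheme may differ).
    """
    ref = ref.astype(np.float64)
    syn = syn.astype(np.float64)
    # Normalise depth to [0, 1]; 0 = nearest, 1 = farthest (assumed convention).
    d = (depth - depth.min()) / max(np.ptp(depth), 1e-12)
    weights = near_weight * (1.0 - d) + far_weight * d
    mse = np.sum(weights * (ref - syn) ** 2) / np.sum(weights)
    return 10.0 * np.log10(max_val ** 2 / max(mse, 1e-12))

def correlation_with_vqm(metric_scores, vqm_scores):
    """Pearson correlation between per-sequence metric scores and VQM scores,
    used here as a stand-in for how metric performance could be evaluated."""
    return np.corrcoef(metric_scores, vqm_scores)[0, 1]
```

For example, computing `depth_weighted_psnr` per frame of each synthesised test sequence, averaging over frames, and passing the resulting per-sequence scores with the corresponding VQM scores to `correlation_with_vqm` would yield a single correlation figure comparable to that of unmodified PSNR or SSIM.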

Citation (APA)

Ekmekcioglu, E., Worrall, S., De Silva, D., Fernando, A., & Kondoz, A. M. (2012). Depth based perceptual quality assessment for synthesised camera viewpoints. In Lecture Notes of the Institute for Computer Sciences, Social-Informatics and Telecommunications Engineering (Vol. 60 LNICST, pp. 76–83). https://doi.org/10.1007/978-3-642-35145-7_10
