Adaptive saliency-weighted 2D-to-3D video conversion

Abstract

Creating 3D video content from existing 2D video has been stimulated by the recent growth of 3DTV technologies. Depth cues from motion, focus, gradient, or texture shading are typically computed to create a perception of the 3D world. Selective attention can additionally be introduced, through manual or automated methods, for entertainment or educational purposes. In this paper, we propose an adaptive conversion framework that combines depth and visual saliency cues. A user study was designed, and subjective quality scores on test videos were obtained using a tailored single-stimulus continuous quality scale (SSCQS) method. The resulting mean opinion scores show that human observers favor our method over other state-of-the-art conversion methods.
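As a rough illustration of the saliency-weighted idea, the sketch below blends a single depth-cue map (e.g., from motion or focus) with a visual saliency map using a per-pixel adaptive weight. The function name, the alpha knob, and the specific weighting rule are illustrative assumptions; the abstract does not specify the paper's actual fusion formula.

```python
import numpy as np

def fuse_depth_with_saliency(depth_cue, saliency, alpha=0.5):
    """Blend a per-pixel depth-cue map with a visual saliency map.

    depth_cue: 2D float array, e.g. a motion- or focus-based depth estimate.
    saliency:  2D float array from any saliency detector.
    alpha:     global strength of the saliency term (hypothetical knob; the
               paper's adaptation rule is not stated in the abstract).
    """
    # Normalize both maps to [0, 1] so the blend is well defined.
    d = (depth_cue - depth_cue.min()) / (np.ptp(depth_cue) + 1e-8)
    s = (saliency - saliency.min()) / (np.ptp(saliency) + 1e-8)

    # Adaptive per-pixel weight: salient pixels lean more heavily on the
    # saliency map, pulling attended objects toward the viewer in the
    # fused depth map used for stereo view synthesis.
    w = alpha * s
    return (1.0 - w) * d + w * s
```

On the evaluation side, the mean opinion score for each test video is simply the average of the subjective SSCQS ratings collected from the study participants, typically reported alongside a confidence interval.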

Citation (APA)

Taher, H., Rushdi, M., Islam, M., & Badawi, A. (2015). Adaptive saliency-weighted 2D-to-3D video conversion. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 9257, pp. 737–748). Springer Verlag. https://doi.org/10.1007/978-3-319-23117-4_63
