A Deep Learning Based Spatial Super-Resolution Approach for Light Field Content

7 citations · 6 readers (Mendeley users who have this article in their library)

This article is free to access.

Abstract

One of the main issues encountered when working with light field technology is the trade-off between spatial and angular resolution. Various approaches have been proposed to super-resolve light fields in the spatial dimension. The challenge with current techniques is that they underuse the information available in the light field: they process each sub-aperture image separately and do not enforce disparity consistency. In this paper, we present a novel method for light field spatial super-resolution that exploits the full four-dimensional spatial and angular information. We propose a learning-based model that considers all the sub-aperture images simultaneously and also takes advantage of epipolar plane image (EPI) information to ensure smooth disparity between the generated views, and in turn constructs high-spatial-resolution light field sub-aperture images. Experimental results show that our proposed method outperforms state-of-the-art light field super-resolution techniques.
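The abstract mentions epipolar plane images (EPIs), slices of the 4D light field in which each scene point traces a line whose slope encodes its disparity. As a minimal illustration of that structure (the paper's actual tensor layout, view-grid size, and network architecture are not given in the abstract, so all dimensions and names below are assumptions), an EPI can be extracted from a 4D light field array like this:

```python
import numpy as np

# Hypothetical 4D light field L[u, v, s, t]:
#   (u, v) = angular coordinates (view grid), (s, t) = spatial coordinates.
# Assumed here: a 5x5 grid of 64x64 grayscale sub-aperture views.
U, V, S, T = 5, 5, 64, 64
lf = np.random.rand(U, V, S, T)

def horizontal_epi(lf, v, s):
    """Fix a view row v and a spatial row s, then stack the remaining
    pixels across the horizontal angular dimension u. The result is an
    epipolar plane image (EPI): scene points appear as straight lines
    whose slope is proportional to their disparity, which is why EPIs
    are a natural place to enforce disparity consistency across views."""
    return lf[:, v, s, :]  # shape (U, T)

epi = horizontal_epi(lf, v=2, s=32)
print(epi.shape)  # (5, 64)
```

A super-resolution model that upsamples each view independently can bend or break these EPI lines; supervising on EPI slices, as the abstract describes, penalizes such inconsistencies across the generated views.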

Citation (APA)

Wafa, A., Pourazad, M. T., & Nasiopoulos, P. (2021). A Deep Learning Based Spatial Super-Resolution Approach for Light Field Content. IEEE Access, 9, 2080–2092. https://doi.org/10.1109/ACCESS.2020.3046577
