Efficient Content Adaptive Plenoptic Video Coding

Abstract

In this paper, a content adaptive coding method for plenoptic video is proposed to reduce both spatial and temporal redundancy. Based on the spatial correlations among subapertures, the plenoptic video is divided into two subaperture groups: the central view videos and the residual videos. Two coding methods, a multiview coding method based on spatial correlation (S-MVC) and a multiview coding method based on temporal correlation (T-MVC), are adopted for the residual videos according to correlation and residual energy analysis, while the central view videos are always encoded by S-MVC. Correlation analysis is performed first, and residual energy analysis is further performed only if necessary. Coding the central view videos and the residual videos separately reduces the bitstream size, and choosing different encoding methods for the residual videos makes full use of the characteristics of different video content. Both the central view videos and the residual videos are compressed with MV-HEVC. The experimental results demonstrate that the proposed method outperforms a pseudo-sequence-based MVC method for plenoptic video, with an average bitrate saving of 25.06%.
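The abstract only outlines the two-stage mode decision (correlation analysis first, residual energy analysis if needed). The sketch below is a minimal illustration, not the paper's actual method: the helper functions, the correlation margin, and the specific correlation and energy measures are assumptions chosen for clarity. A residual video and its co-located central view video are modeled as NumPy arrays of shape (frames, height, width).

```python
import numpy as np

# Illustrative threshold only; the paper's actual decision criteria and
# threshold values are not given in the abstract.
CORR_MARGIN = 0.05


def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two frames (flattened to 1-D)."""
    return float(np.corrcoef(a.astype(np.float64).ravel(),
                             b.astype(np.float64).ravel())[0, 1])


def mean_spatial_correlation(residual: np.ndarray, central: np.ndarray) -> float:
    """Average frame-wise correlation between a residual video and the
    co-located central view video (inter-view, i.e. spatial redundancy)."""
    return float(np.mean([normalized_correlation(r, c)
                          for r, c in zip(residual, central)]))


def mean_temporal_correlation(video: np.ndarray) -> float:
    """Average correlation between consecutive frames of one video
    (temporal redundancy)."""
    return float(np.mean([normalized_correlation(p, q)
                          for p, q in zip(video[:-1], video[1:])]))


def residual_energy(video: np.ndarray, reference: np.ndarray) -> float:
    """Mean squared prediction residual against a reference video."""
    diff = video.astype(np.float64) - reference.astype(np.float64)
    return float(np.mean(diff ** 2))


def choose_coding_mode(residual: np.ndarray, central: np.ndarray) -> str:
    """Return 'S-MVC' or 'T-MVC' for one residual video (hypothetical rule).

    Correlation analysis runs first; residual energy analysis is used only
    when the correlation comparison is too close to call.
    """
    spatial_corr = mean_spatial_correlation(residual, central)
    temporal_corr = mean_temporal_correlation(residual)

    if abs(spatial_corr - temporal_corr) > CORR_MARGIN:
        return "S-MVC" if spatial_corr > temporal_corr else "T-MVC"

    # Inconclusive correlation: fall back to residual energy analysis.
    spatial_energy = residual_energy(residual, central)
    temporal_energy = residual_energy(residual[1:], residual[:-1])
    return "S-MVC" if spatial_energy < temporal_energy else "T-MVC"
```

In this sketch the cheaper correlation test acts as a first pass, and the costlier residual energy comparison is computed only for borderline groups, mirroring the two-stage analysis described in the abstract; the actual encoding of both subaperture groups would then be performed with MV-HEVC.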

Cite

APA

Tu, W., Jin, X., Li, L., Yan, C., Sun, Y., Xiao, M., … Zhang, J. (2020). Efficient Content Adaptive Plenoptic Video Coding. IEEE Access, 8, 5797–5804. https://doi.org/10.1109/ACCESS.2020.2964056
