Reconstruction of Multi-View Video Based on GAN

Abstract

Multi-view video contains a huge amount of data, which poses enormous challenges for its compression, storage, and transmission. A common prior solution is to transmit only part of the viewpoints and reconstruct the original multi-viewpoint information at the receiver. Existing approaches of this kind rely on pixel matching to obtain the correlation between adjacent viewpoint images; however, pixels cannot capture invariant image features and are susceptible to noise. To overcome these problems, a VGG network is used to extract high-dimensional features from the images, representing the relevance between adjacent views, and a GAN is further used to generate virtual viewpoint images more accurately. Lines at the same positions across viewpoints are extracted as local areas, merged into local images, and fed into the network; at the reconstructed viewpoint, the GAN generates the local image of a dense viewpoint. Experiments on multiple test sequences show that the proposed method achieves a 0.2–0.8 dB PSNR and 0.15–0.61 MOS improvement over the traditional method.
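The abstract names two building blocks: a VGG network that supplies feature-level (rather than pixel-level) correlation between adjacent views, and a GAN generator that synthesizes the virtual-view local images. The sketch below is not the authors' implementation; it is a minimal illustration, assuming PyTorch/torchvision, of how a frozen VGG-19 feature extractor can act as a perceptual similarity measure while a small convolutional generator maps two neighbouring-view strips to an intermediate-view strip. All layer choices, sizes, and names (VGGFeatures, Generator, perceptual_loss) are illustrative assumptions, not the paper's architecture.

```python
# Minimal sketch (assumed PyTorch/torchvision) of VGG-feature correlation plus a
# GAN-style generator for virtual-view strips; layer sizes are illustrative only.
import torch
import torch.nn as nn
import torchvision.models as models


class VGGFeatures(nn.Module):
    """Frozen VGG-19 feature extractor used to compare adjacent viewpoint images."""

    def __init__(self, num_layers: int = 16):
        super().__init__()
        vgg = models.vgg19(weights=models.VGG19_Weights.IMAGENET1K_V1)
        # Keep only the first `num_layers` convolutional/pooling layers.
        self.features = nn.Sequential(*list(vgg.features.children())[:num_layers])
        for p in self.features.parameters():
            p.requires_grad = False  # fixed feature extractor, not trained

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.features(x)


class Generator(nn.Module):
    """Maps two neighbouring-view strips (stacked on channels) to a virtual-view strip."""

    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(6, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 3, 3, padding=1), nn.Tanh(),
        )

    def forward(self, left: torch.Tensor, right: torch.Tensor) -> torch.Tensor:
        return self.net(torch.cat([left, right], dim=1))


def perceptual_loss(vgg: VGGFeatures, fake: torch.Tensor, real: torch.Tensor) -> torch.Tensor:
    """L1 distance in VGG feature space, standing in for feature-level correlation."""
    return nn.functional.l1_loss(vgg(fake), vgg(real))


if __name__ == "__main__":
    vgg, gen = VGGFeatures(), Generator()
    left = torch.rand(1, 3, 64, 256)    # local strip from the left reference view
    right = torch.rand(1, 3, 64, 256)   # local strip from the right reference view
    target = torch.rand(1, 3, 64, 256)  # ground-truth strip of the intermediate view
    fake = gen(left, right)
    print(perceptual_loss(vgg, fake, target).item())
```

In a full GAN setup this perceptual term would be combined with an adversarial loss from a discriminator; only the view-synthesis forward path and the feature-space comparison are shown here.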

Citation (APA)

Li, S., Lan, C., & Zhao, T. (2018). Reconstruction of multi-view video based on GAN. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11165 LNCS, pp. 618–629). Springer Verlag. https://doi.org/10.1007/978-3-030-00767-6_57
