CS-MCNet: A Video Compressive Sensing Reconstruction Network with Interpretable Motion Compensation

Abstract

In this paper, a deep neural network with interpretable motion compensation, called CS-MCNet, is proposed to realize high-quality, real-time decoding of video compressive sensing. First, explicit multi-hypothesis motion compensation is applied in our network to extract correlation information from adjacent frames (as shown in Fig. 1), which improves recovery performance. A residual module then further narrows the gap between the reconstruction and the original signal. The overall architecture is made interpretable through algorithm unrolling, which makes it possible to transfer prior knowledge from conventional iterative algorithms. As a result, a PSNR of 22 dB is achieved at a 64x compression ratio, about 4% to 9% better than state-of-the-art methods. In addition, owing to its feed-forward architecture, our network performs reconstruction in real time, up to three orders of magnitude faster than traditional iterative methods.
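The multi-hypothesis motion compensation described above can be sketched in a few lines of NumPy. This is a hedged illustration of the general measurement-domain multi-hypothesis formulation common in video compressive sensing, not the paper's exact network: all shapes, the candidate-block count, and the Tikhonov regularizer are illustrative assumptions, and the hypothesis blocks are random stand-ins for patches drawn from a search window in the previously reconstructed frame.

```python
import numpy as np

rng = np.random.default_rng(0)

B = 256   # flattened 16x16 block (illustrative size)
M = 8     # CS measurements per block (illustrative, not the paper's 64x setting)
K = 3     # number of hypothesis blocks from the reference frame (assumption)

# Random Gaussian sensing matrix (a common choice in CS; assumption here).
Phi = rng.standard_normal((M, B)) / np.sqrt(M)

# Hypotheses: candidate blocks from the previous reconstructed frame.
# Here they are random stand-ins for blocks found by a motion search.
H = rng.standard_normal((B, K))

# Synthetic "current" block that is a mix of two hypotheses, and its
# compressed measurements (the only data the decoder actually sees).
x = H @ np.array([0.6, 0.4, 0.0])
y = Phi @ x

# Multi-hypothesis prediction: find weights w so that the measurements of
# the weighted hypothesis combination match y, via Tikhonov-regularized
# least squares solved entirely in the measurement domain.
A = Phi @ H
lam = 1e-3
w = np.linalg.solve(A.T @ A + lam * np.eye(K), A.T @ y)
x_pred = H @ w  # motion-compensated prediction of the current block

# A residual stage (as in the abstract) would then refine the remaining
# measurement mismatch y - Phi @ x_pred, e.g. with a learned module.
y_res = y - Phi @ x_pred
```

Because the prediction is formed from blocks of an already-decoded frame, this stage exploits temporal correlation before any learned refinement, which is the intuition behind placing the residual module after it.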

Citation (APA)
Huang, B., Zhou, J., Yan, X., Jing, M., Wan, R., & Fan, Y. (2021). CS-MCNet: A Video Compressive Sensing Reconstruction Network with Interpretable Motion Compensation. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 12623 LNCS, pp. 54–67). Springer Science and Business Media Deutschland GmbH. https://doi.org/10.1007/978-3-030-69532-3_4
