Spatial compressive imaging deep learning framework using joint input of multi-frame measurements and degraded maps

  • Cui C
  • Ke J

Abstract

Traditional compressive imaging reconstruction is often based on an iterative approach, which is time-consuming. To address this issue, several groups have applied deep learning to reconstruction, achieving low running time with good performance. However, excessive dependence on data and network structure yields networks that lack flexibility and interpretability. Such networks are often inapplicable when compression ratios are high. To solve these issues, we study an end-to-end network, Joinput-CiNet (joint input compressive imaging net). We use a tailored encoding module to make the imaging degradation model part of the network input. The network can then obtain prior knowledge of the imaging system, improving training efficiency and reconstruction performance. On five widely used image datasets and experimentally collected infrared (IR) measurements, Joinput-CiNet demonstrates superior reconstruction performance at low compression ratios such as 1:16 and 1:64, with faster speed than competing networks.
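The joint-input idea described above can be illustrated with a minimal NumPy sketch. The block size, the random sensing matrix, the use of a back-projected estimate as the "degraded map", and the channel-wise stacking are all assumptions for illustration, not the paper's actual encoding module:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical sizes: 16x16 image blocks at a 1:16 compression ratio,
# i.e. 16 measurements per 256-pixel block (assumption).
block = 16
n = block * block   # 256 pixels per block
m = n // 16         # 16 measurements per block

# Random sensing matrix standing in for the optical encoding (assumption).
Phi = rng.standard_normal((m, n)) / np.sqrt(m)

def measure(x):
    """Compressive measurement of a flattened block: y = Phi @ x."""
    return Phi @ x

def degraded_map(y):
    """Coarse back-projected estimate Phi^T y, reshaped to block size.
    Stands in for the paper's degraded map derived from the
    imaging degradation model (assumption)."""
    return (Phi.T @ y).reshape(block, block)

def measurement_map(y):
    """Tile the 16 raw measurements up to block size so they can be
    stacked channel-wise with the degraded maps (assumption)."""
    return np.kron(y.reshape(4, 4), np.ones((4, 4)))

# Four multi-frame measurements of one block; the joint network input
# stacks each frame's degraded map and tiled measurement as channels.
frames = [rng.random(n) for _ in range(4)]
ys = [measure(x) for x in frames]
channels = [degraded_map(y) for y in ys] + [measurement_map(y) for y in ys]
joint_input = np.stack(channels, axis=0)

print(joint_input.shape)  # one channel pair per frame: (8, 16, 16)
```

In a real pipeline this `joint_input` tensor would be fed to the reconstruction network, so the network sees both the raw measurements and a model-based degraded estimate rather than measurements alone.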

Citation (APA)

Cui, C., & Ke, J. (2022). Spatial compressive imaging deep learning framework using joint input of multi-frame measurements and degraded maps. Optics Express, 30(2), 1235. https://doi.org/10.1364/oe.445127
