High performance “non-local” generic face reconstruction model using the lightweight Speckle-Transformer (SpT) UNet

Abstract

Significant progress has been made in computational imaging (CI), in which deep convolutional neural networks (CNNs) have demonstrated that sparse speckle patterns can be reconstructed. However, due to the limited "local" kernel size of the convolutional operator, the performance of CNNs is limited for spatially dense patterns, such as generic face images. Here, we propose a "non-local" model, termed the Speckle-Transformer (SpT) UNet, for speckle feature extraction of generic face images. It is worth noting that the lightweight SpT UNet achieves high efficiency and strong comparative performance, with a Pearson Correlation Coefficient (PCC) and structural similarity measure (SSIM) exceeding 0.989 and 0.950, respectively.
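The two reported metrics compare a reconstructed image against its ground truth. As a point of reference, below is a minimal NumPy sketch of how PCC and a single-window (global) SSIM can be computed; the paper does not publish its evaluation code, and SSIM is typically computed with a sliding Gaussian window (as in scikit-image), so this global variant is an illustrative simplification.

```python
import numpy as np

def pcc(x, y):
    """Pearson correlation coefficient between two images (flattened)."""
    x = x.ravel().astype(float) - x.mean()
    y = y.ravel().astype(float) - y.mean()
    return float((x @ y) / (np.linalg.norm(x) * np.linalg.norm(y)))

def ssim_global(x, y, data_range=1.0):
    """Global (single-window) SSIM with the standard C1/C2 stabilizers.

    A windowed SSIM, averaged over local patches, is the usual
    published metric; this whole-image version is a simplification.
    """
    c1 = (0.01 * data_range) ** 2
    c2 = (0.03 * data_range) ** 2
    mx, my = x.mean(), y.mean()
    vx, vy = x.var(), y.var()
    cov = ((x - mx) * (y - my)).mean()
    return float(((2 * mx * my + c1) * (2 * cov + c2))
                 / ((mx ** 2 + my ** 2 + c1) * (vx + vy + c2)))
```

For identical images both metrics evaluate to 1.0; values near the paper's reported 0.989 (PCC) and 0.950 (SSIM) indicate reconstructions very close to the ground-truth faces.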

Citation (APA)
Wang, Y., Wang, H., & Gu, M. (2023). High performance “non-local” generic face reconstruction model using the lightweight Speckle-Transformer (SpT) UNet. Opto-Electronic Advances, 6(2). https://doi.org/10.29026/oea.2023.220049
