Face images captured from surveillance videos in open environments are usually of low quality, which seriously degrades both visual quality and recognition accuracy. Most image super-resolution methods train the super-resolution network on pairs of high-quality images and their interpolated low-resolution counterparts, which makes it difficult to achieve satisfactory visual quality and to restore discriminative features in real scenarios. A discriminative self-attention cycle generative adversarial network is proposed for real-world face image super-resolution. Based on the cycle GAN framework, unpaired samples are used to train a degradation network and a reconstruction network simultaneously. A self-attention mechanism is employed to capture contextual information for detail restoration, and a Siamese face recognition network is introduced to impose an identity-consistency constraint. In addition, an asymmetric perceptual loss is introduced to handle the imbalance between the degradation model and the reconstruction model. Experimental results show that the degradation model produces more realistic low-quality face images, and that the super-resolved face images exhibit better subjective quality and higher face recognition performance.
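The abstract names several training objectives (cycle consistency, Siamese identity consistency, and an asymmetric perceptual loss). Below is a minimal PyTorch-style sketch of how such a combined loss could be assembled; the class name CycleSRLoss, the loss weights, and the choice of L1 and cosine distances are illustrative assumptions, not the authors' exact formulation.

import torch.nn as nn
import torch.nn.functional as F

class CycleSRLoss(nn.Module):
    """Illustrative combined loss: cycle consistency, Siamese identity
    consistency, and an asymmetric perceptual term that weights the
    reconstruction branch more heavily than the degradation branch."""

    def __init__(self, w_cyc=10.0, w_id=1.0, w_perc_rec=1.0, w_perc_deg=0.1):
        super().__init__()
        self.w_cyc = w_cyc            # cycle-consistency weight (assumed value)
        self.w_id = w_id              # identity-consistency weight (assumed value)
        self.w_perc_rec = w_perc_rec  # perceptual weight, reconstruction branch
        self.w_perc_deg = w_perc_deg  # smaller perceptual weight, degradation branch

    def forward(self, hr, hr_cycled, lr, lr_cycled,
                emb_sr, emb_hr, perc_sr, perc_hr, perc_lr_fake, perc_lr):
        # Cycle consistency in both directions (HR -> LR -> HR and LR -> HR -> LR)
        loss_cyc = F.l1_loss(hr_cycled, hr) + F.l1_loss(lr_cycled, lr)
        # Identity consistency: embeddings from a frozen Siamese face recognizer
        loss_id = 1.0 - F.cosine_similarity(emb_sr, emb_hr, dim=1).mean()
        # Asymmetric perceptual loss: the reconstruction (super-resolution)
        # branch is penalized more strongly than the degradation branch
        loss_perc = (self.w_perc_rec * F.l1_loss(perc_sr, perc_hr)
                     + self.w_perc_deg * F.l1_loss(perc_lr_fake, perc_lr))
        return self.w_cyc * loss_cyc + self.w_id * loss_id + loss_perc

In the full cycle GAN objective, the adversarial terms from the two discriminators would be added on top of this combined reconstruction loss.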
Li, X., Dong, N., Huang, J., Zhuo, L., & Li, J. (2021). A discriminative self-attention cycle GAN for face super-resolution and recognition. IET Image Processing, 15(11), 2614–2628. https://doi.org/10.1049/ipr2.12250