Blind text images deblurring based on a generative adversarial network


Abstract

Recently, text image deblurring has advanced considerably. Unlike previous methods that rely on handcrafted priors or assume a specific blur kernel, the authors formulate text deblurring as a semantic generation task that can be solved with a generative adversarial network. Structure is an essential property of text images; the authors therefore propose a structural loss function and a detail loss function to regularise the recovery of text images. Furthermore, they adopt a coarse-to-fine strategy and present a multi-scale generator that sharpens the generated text images. The model robustly generates realistic latent images with photo-quality detail. Extensive experiments on synthetic and real-world blurry images show that the proposed network is comparable to state-of-the-art methods.
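The abstract names an adversarial objective combined with a structural term and a detail term, but does not give their definitions. As a minimal sketch, assuming the structural loss penalises differences in image gradients, the detail loss is a pixel-wise L1 distance, and the weights LAMBDA_S and LAMBDA_D are hypothetical, the combined generator objective might look like:

```python
# Hedged sketch of a combined deblurring objective:
#   total = adversarial term + structural term + detail term.
# The exact definitions are not given in the abstract; gradient-difference
# L1 (structural), pixel-wise L1 (detail), and the weights below are
# illustrative assumptions, not the paper's actual formulation.

def grad_x(img):
    """Horizontal finite differences of a 2D image (list of rows)."""
    return [[row[i + 1] - row[i] for i in range(len(row) - 1)] for row in img]

def grad_y(img):
    """Vertical finite differences of a 2D image."""
    return [[img[j + 1][i] - img[j][i] for i in range(len(img[0]))]
            for j in range(len(img) - 1)]

def l1(a, b):
    """Mean absolute difference between two equally sized 2D arrays."""
    n = len(a) * len(a[0])
    return sum(abs(x - y) for ra, rb in zip(a, b) for x, y in zip(ra, rb)) / n

def structural_loss(pred, target):
    """Assumed structural term: L1 distance between image gradients,
    which emphasises character strokes and edges in text images."""
    return l1(grad_x(pred), grad_x(target)) + l1(grad_y(pred), grad_y(target))

def detail_loss(pred, target):
    """Assumed detail term: pixel-wise L1 distance."""
    return l1(pred, target)

LAMBDA_S, LAMBDA_D = 1.0, 10.0  # hypothetical trade-off weights

def generator_loss(adv_term, pred, target):
    """Total generator objective: adversarial loss (computed elsewhere by
    the discriminator) plus weighted structural and detail terms."""
    return (adv_term
            + LAMBDA_S * structural_loss(pred, target)
            + LAMBDA_D * detail_loss(pred, target))
```

In a coarse-to-fine, multi-scale generator as described, a loss of this shape would typically be evaluated at each output scale and summed, so that coarse scales fix the overall character layout while fine scales sharpen stroke detail.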

Citation (APA)

Qi, Q., & Guo, J. (2019). Blind text images deblurring based on a generative adversarial network. IET Image Processing, 13(14), 2850–2858. https://doi.org/10.1049/iet-ipr.2018.6697
