Deep Inverse Halftoning via Progressively Residual Learning

Abstract

Inverse halftoning is a classic problem that has been investigated over the last two decades; however, recovering a continuous-tone version with accurate details from halftone images remains a challenge. In this paper, we present a statistical learning based method to address it, leveraging a Convolutional Neural Network (CNN) as a nonlinear mapping function. To exploit features as completely as possible, we propose a Progressively Residual Learning (PRL) network that synthesizes the global tone and subtle details from halftone images in a progressive manner. In particular, it contains two modules: Content Aggregation, which first removes the halftone patterns and reconstructs the continuous tone, and Detail Enhancement, which incrementally boosts the subtle structures by learning a residual image. Benefiting from this efficient architecture, the proposed network is superior to all the candidate networks employed in our experiments for inverse halftoning, and our approach outperforms the state of the art by a large margin.
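
The abstract describes the architecture only at a high level, so the following is a minimal PyTorch sketch of how a two-stage progressive residual design of this kind could look. The module names mirror the abstract (Content Aggregation, Detail Enhancement), but the layer counts, channel widths, and the overall PyTorch framing are illustrative assumptions, not the authors' actual implementation.

```python
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch, depth, hidden=64):
    """Plain stack of 3x3 conv + ReLU layers; widths/depth are assumed, not from the paper."""
    layers = [nn.Conv2d(in_ch, hidden, 3, padding=1), nn.ReLU(inplace=True)]
    for _ in range(depth):
        layers += [nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(inplace=True)]
    layers += [nn.Conv2d(hidden, out_ch, 3, padding=1)]
    return nn.Sequential(*layers)


class ContentAggregation(nn.Module):
    """Stage 1: remove halftone patterns and reconstruct the global continuous tone."""
    def __init__(self):
        super().__init__()
        self.body = conv_block(in_ch=1, out_ch=1, depth=4)

    def forward(self, halftone):
        return self.body(halftone)  # coarse continuous-tone estimate


class DetailEnhancement(nn.Module):
    """Stage 2: predict a residual image that adds back subtle structures."""
    def __init__(self):
        super().__init__()
        self.body = conv_block(in_ch=2, out_ch=1, depth=4)

    def forward(self, halftone, coarse):
        residual = self.body(torch.cat([halftone, coarse], dim=1))
        return coarse + residual  # progressive refinement via residual learning


class PRLNet(nn.Module):
    """Progressively Residual Learning: coarse tone first, then residual details."""
    def __init__(self):
        super().__init__()
        self.content = ContentAggregation()
        self.detail = DetailEnhancement()

    def forward(self, halftone):
        coarse = self.content(halftone)
        return self.detail(halftone, coarse)


# Usage: x is a batch of single-channel halftone images in [0, 1].
x = torch.rand(1, 1, 256, 256)
y = PRLNet()(x)  # continuous-tone reconstruction, same shape as x
```

The key design point reflected here is the progressive split: the first module only has to recover a clean global tone, while the second module works on the easier task of estimating a residual, which is added back to the coarse output.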

Cite

APA

Xia, M., & Wong, T. T. (2019). Deep Inverse Halftoning via Progressively Residual Learning. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11366 LNCS, pp. 523–539). Springer Verlag. https://doi.org/10.1007/978-3-030-20876-9_33
