Abstract
Single image dehazing has been a classic topic in computer vision for years. Motivated by the atmospheric scattering model, satisfactory single image dehazing hinges on estimating two physical parameters, i.e., the global atmospheric light and the transmission coefficient. Most existing methods employ a two-step pipeline that estimates these two parameters with heuristics, which accumulates errors and compromises dehazing quality. Inspired by differentiable programming, we reformulate the atmospheric scattering model into a novel generative adversarial network (DehazeGAN). This reformulation, combined with adversarial learning, allows the two parameters to be learned simultaneously and automatically from data by optimizing the final dehazing performance, so that clean images with faithful color and structure are produced directly. Moreover, our reformulation also greatly improves the GAN's interpretability and quality for single image dehazing. To the best of our knowledge, our method is one of the first works to explore the connection among generative adversarial models, image dehazing, and differentiable programming, which advances the theory and applications of these areas. Extensive experiments on synthetic and realistic data show that our method outperforms state-of-the-art methods in terms of PSNR, SSIM, and subjective visual quality.
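The atmospheric scattering model referenced above is commonly written as I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the hazy image, J the clean scene radiance, t the transmission, and A the global atmospheric light. Below is a minimal sketch of inverting this model to recover J, assuming t and A have already been estimated by some means; the function name and the transmission floor `t_min` are illustrative choices, not part of the paper's method (which learns the parameters end-to-end rather than inverting them with heuristics).

```python
import numpy as np

def recover_radiance(hazy, t, A, t_min=0.1):
    """Invert I = J*t + A*(1-t) for J, given hazy image I (H,W,3),
    per-pixel transmission t (H,W), and atmospheric light A (3,)."""
    # Floor the transmission to avoid amplifying noise where t -> 0.
    t = np.clip(t, t_min, 1.0)
    # Solve the scattering model for the clean radiance J.
    J = (hazy - A) / t[..., None] + A
    return np.clip(J, 0.0, 1.0)
```

For example, synthesizing haze from a known clean image with the same model and then calling `recover_radiance` with the true t and A reproduces the clean image, which is a useful sanity check when experimenting with estimators for the two parameters.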
Citation
Zhu, H., Peng, X., Chandrasekhar, V., Li, L., & Lim, J. H. (2018). DehazeGAN: When image dehazing meets differential programming. In IJCAI International Joint Conference on Artificial Intelligence (Vol. 2018-July, pp. 1234–1240). International Joint Conferences on Artificial Intelligence. https://doi.org/10.24963/ijcai.2018/172