Underwater image enhancement using stacked generative adversarial networks

Abstract

This paper addresses the problem of joint haze detection and color correction from a single underwater image. We present a framework based on stacked conditional Generative Adversarial Networks (GANs) that learns the mapping between underwater images and air images in an end-to-end fashion. The proposed architecture can be divided into two components, i.e., a haze detection sub-network and a color correction sub-network, each with its own generator and discriminator. Specifically, an underwater image is fed into the first generator to produce a haze detection mask. Then, the underwater image, together with the predicted mask, is passed through the second generator to correct the color of the underwater image. Experimental results show the advantages of our proposed method over several state-of-the-art methods on publicly available synthetic and real underwater datasets.
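The abstract describes a two-stage (stacked) generator pipeline: the first generator predicts a haze mask from the underwater image, and the second generator consumes the image concatenated with that mask to produce the color-corrected output. The sketch below illustrates this data flow only; the module names, layer counts, and channel widths are assumptions for illustration and are not the authors' implementation, and the per-stage discriminators and adversarial losses are omitted for brevity.

```python
import torch
import torch.nn as nn

# Hypothetical sketch of the stacked two-generator pipeline described in the
# abstract. Block structure and names (HazeDetectionG, ColorCorrectionG) are
# illustrative assumptions, not the paper's actual architecture.

def conv_block(in_ch, out_ch):
    """3x3 conv -> batch norm -> ReLU, a common encoder building block."""
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm2d(out_ch),
        nn.ReLU(inplace=True),
    )

class HazeDetectionG(nn.Module):
    """First generator: underwater RGB image -> single-channel haze mask."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(3, 32),
            conv_block(32, 64),
            conv_block(64, 32),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),
            nn.Sigmoid(),          # mask values in [0, 1]
        )

    def forward(self, x):
        return self.net(x)

class ColorCorrectionG(nn.Module):
    """Second generator: underwater image + predicted mask -> corrected image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            conv_block(4, 32),     # 3 RGB channels + 1 mask channel
            conv_block(32, 64),
            conv_block(64, 32),
            nn.Conv2d(32, 3, kernel_size=3, padding=1),
            nn.Tanh(),             # corrected image scaled to [-1, 1]
        )

    def forward(self, x, mask):
        return self.net(torch.cat([x, mask], dim=1))

if __name__ == "__main__":
    g1, g2 = HazeDetectionG(), ColorCorrectionG()
    underwater = torch.randn(1, 3, 256, 256)   # dummy underwater image
    mask = g1(underwater)                      # stage 1: haze detection mask
    corrected = g2(underwater, mask)           # stage 2: color correction
    print(mask.shape, corrected.shape)         # (1, 1, 256, 256), (1, 3, 256, 256)
```

In the full conditional-GAN setting, each stage would also be trained against a discriminator that scores the generated mask or corrected image conditioned on the input, which the sketch leaves out.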

Citation (APA)

Ye, X., Xu, H., Ji, X., & Xu, R. (2018). Underwater image enhancement using stacked generative adversarial networks. In Lecture Notes in Computer Science (including subseries Lecture Notes in Artificial Intelligence and Lecture Notes in Bioinformatics) (Vol. 11166 LNCS, pp. 514–524). Springer Verlag. https://doi.org/10.1007/978-3-030-00764-5_47
