UMGAN: Underwater Image Enhancement Network for Unpaired Image-to-Image Translation


Abstract

Due to light absorption and scattering, underwater images suffer from low contrast, color distortion, blurred details, and uneven illumination, which hinder underwater vision tasks and research. Underwater image enhancement is therefore of great significance for vision applications. In contrast to existing methods that target a specific underwater environment or rely on paired datasets, this study proposes an underwater multiscene generative adversarial network (UMGAN) to enhance underwater images. The network performs unpaired image-to-image translation between the turbid and clear underwater domains and enhances several types of underwater images effectively. A feedback mechanism and a noise reduction network are designed to optimize the generator and to address noise and artifacts in GAN-produced images. Furthermore, a global-local discriminator is employed to improve the overall image while adaptively adjusting the enhancement in local regions, resolving over- and underenhancement of local areas. A cycle-consistency network structure eliminates the reliance on paired training data. Compared quantitatively and qualitatively with other state-of-the-art algorithms, UMGAN performs satisfactorily on various types of data; it is robust and can be applied to enhancement tasks in different scenes.
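To make the two ideas named in the abstract concrete, below is a minimal, hypothetical PyTorch sketch of (1) a cycle-consistency objective for unpaired turbid-to-clear translation and (2) a global-local adversarial term that scores both the whole image and a random local patch. The function names, crop size, and loss weights (lambda_cyc, lambda_local) are illustrative assumptions, not the authors' released implementation.

```python
import torch
import torch.nn as nn

l1 = nn.L1Loss()
bce = nn.BCEWithLogitsLoss()


def cycle_consistency_loss(G_t2c, G_c2t, turbid, clear, lambda_cyc=10.0):
    """L1 reconstruction after full turbid->clear->turbid and clear->turbid->clear cycles."""
    rec_turbid = G_c2t(G_t2c(turbid))  # turbid -> clear -> turbid
    rec_clear = G_t2c(G_c2t(clear))    # clear -> turbid -> clear
    return lambda_cyc * (l1(rec_turbid, turbid) + l1(rec_clear, clear))


def random_crop(img, size=64):
    """Take one random local patch per batch (assumed crop size of 64x64)."""
    _, _, h, w = img.shape
    top = torch.randint(0, h - size + 1, (1,)).item()
    left = torch.randint(0, w - size + 1, (1,)).item()
    return img[:, :, top:top + size, left:left + size]


def global_local_adv_loss(D_global, D_local, fake_clear, lambda_local=1.0):
    """Generator-side adversarial loss on the full image plus a random local crop."""
    g_logits = D_global(fake_clear)
    l_logits = D_local(random_crop(fake_clear))
    loss_global = bce(g_logits, torch.ones_like(g_logits))
    loss_local = bce(l_logits, torch.ones_like(l_logits))
    return loss_global + lambda_local * loss_local
```

In a CycleGAN-style training loop, the generator objective would combine these two terms (plus any noise-reduction or feedback losses the paper introduces), while the global and local discriminators are trained separately to distinguish real clear-water images and patches from generated ones.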

Citation (APA)

Sun, B., Mei, Y., Yan, N., & Chen, Y. (2023). UMGAN: Underwater Image Enhancement Network for Unpaired Image-to-Image Translation. Journal of Marine Science and Engineering, 11(2). https://doi.org/10.3390/jmse11020447
