Underwater image enhancement using a mixed generative adversarial network


Abstract

Underwater images intuitively reflect underwater environment information. However, they suffer from defects such as colour distortion and low contrast, which seriously hinder complex underwater visual tasks. Here, a novel mixed model called the mixed underwater image generative adversarial network (MUGAN) is presented, consisting of a generator and a corresponding discriminator. The generator follows a U-shaped architecture in which a mixed block of convolution and self-attention is developed, effectively exploiting the complementarity between the two paradigms. In addition, a dual discriminator is employed to induce the generator to produce realistic images at both the global semantic and local detail levels, the latter discriminating based only on patch-level information. Meanwhile, a multi-term loss function is formulated to supervise adversarial training by evaluating the perceptual quality of an image in terms of its global content, local texture and illumination smoothness. To validate the proposed approach, extensive experiments are conducted on public underwater datasets. MUGAN achieves promising performance in terms of colour, contrast and naturalness, showing a significant improvement over competitive models in both visual quality and quantitative metrics.
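The exact design of MUGAN's mixed block is detailed in the paper itself; purely as an illustration of the general idea, the sketch below combines a convolution branch (local detail) with a self-attention branch (global context) over a feature map flattened to positions × channels, and sums the two. All function names and weight shapes here are hypothetical, not the authors' implementation.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def conv_branch(x, w):
    # Depthwise 1-D convolution over positions: captures local structure.
    # x: (n, c) flattened feature map; w: (k, c) per-channel kernel, k odd.
    k = w.shape[0]
    pad = k // 2
    xp = np.pad(x, ((pad, pad), (0, 0)))
    return np.stack([(w * xp[i:i + k]).sum(axis=0) for i in range(x.shape[0])])

def attention_branch(x, wq, wk, wv):
    # Single-head self-attention: every position attends to all others.
    q, kmat, v = x @ wq, x @ wk, x @ wv
    scores = softmax(q @ kmat.T / np.sqrt(q.shape[-1]))
    return scores @ v

def mixed_block(x, w_conv, wq, wk, wv):
    # Combine the complementary local (conv) and global (attention) branches.
    return conv_branch(x, w_conv) + attention_branch(x, wq, wk, wv)
```

In this toy form the two branches are simply added; the paper's block may fuse them differently, but the complementarity being exploited is the same: convolution is biased toward local texture while self-attention aggregates global content.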

Citation (APA)

Mu, D., Li, H., Liu, H., Dong, L., & Zhang, G. (2023). Underwater image enhancement using a mixed generative adversarial network. IET Image Processing, 17(4), 1149–1160. https://doi.org/10.1049/ipr2.12702
