Underwater image enhancement via efficient generative adversarial network

2021 ◽  
Vol 51 (4) ◽  
Author(s):  
Xin Qian ◽  
Peng Ge

Underwater image enhancement has been receiving much attention due to its significance in facilitating various marine explorations. Inspired by the success of the generative adversarial network (GAN) and the residual network (ResNet) in many vision tasks, we propose a simplified ResNet model built on a GAN, called efficient GAN (EGAN), for underwater image enhancement. First, for the generator of EGAN we design a new pair of convolutional kernel sizes for the residual blocks of the ResNet. Second, we abandon batch normalization (BN) after every convolution layer for faster training and fewer artifacts. Finally, a smooth loss function is introduced to alleviate halo effects. Extensive qualitative and quantitative experiments show that our method achieves considerable improvements over state-of-the-art methods.
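The abstract does not specify the exact form of the smooth loss; a common choice for halo/artifact suppression is a total-variation style penalty on neighboring pixel differences. The sketch below assumes that form and is only illustrative:

```python
import numpy as np

def smooth_loss(img):
    """Total-variation style smoothness penalty (hypothetical stand-in for
    the paper's smooth loss): mean absolute difference between vertically
    and horizontally adjacent pixels."""
    dh = np.abs(np.diff(img, axis=0)).mean()  # vertical neighbor differences
    dw = np.abs(np.diff(img, axis=1)).mean()  # horizontal neighbor differences
    return dh + dw

flat = np.full((8, 8), 0.5)               # perfectly smooth patch -> zero penalty
checker = np.indices((8, 8)).sum(0) % 2   # high-frequency checkerboard -> large penalty
print(smooth_loss(flat))     # 0.0
print(smooth_loss(checker))  # 2.0
```

Minimizing such a term alongside the adversarial loss discourages abrupt intensity jumps, which is the usual mechanism behind halo-effect alleviation.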

2021 ◽  
pp. 1-13
Author(s):  
Long Hou ◽  
Long Yu ◽  
Shengwei Tian ◽  
Yanhan Zhang

Underwater image enhancement has always been a hot spot in underwater vision research. However, due to the complicated underwater environment, problems such as color distortion and low brightness are very likely to occur in raw underwater images. In response, we propose a generative adversarial network that integrates multiple attention mechanisms to enhance underwater images. In the generator, we introduce multi-layer dense connections and CSAM modules: the former captures more detailed features and reuses earlier features, while the latter improves the utilization of the feature map. Meanwhile, we improve the enhancement quality of the generated image by combining a VGG19 content loss and a SmoothL1 loss. Finally, we verify the effectiveness of the proposed model through qualitative and quantitative experiments and compare the results with several recent models. The results show that the proposed method is superior to existing methods.
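The SmoothL1 (Huber) loss mentioned above has a standard definition; the combination with the VGG19 content term can be sketched as below. The weighting `lam` is a hypothetical hyperparameter (not given in the abstract), and plain arrays stand in for VGG19 feature maps:

```python
import numpy as np

def smooth_l1(pred, target, beta=1.0):
    # SmoothL1 (Huber) loss: quadratic near zero, linear for large errors,
    # which makes training less sensitive to outlier pixels.
    diff = np.abs(pred - target)
    return np.where(diff < beta, 0.5 * diff ** 2 / beta, diff - 0.5 * beta).mean()

def total_loss(pred, target, feat_pred, feat_target, lam=1.0):
    # Content term would normally compare VGG19 feature maps of the two
    # images; here plain arrays stand in for those features.
    content = np.mean((feat_pred - feat_target) ** 2)
    return content + lam * smooth_l1(pred, target)
```

For example, `smooth_l1(np.array([0.0, 2.0]), np.zeros(2))` evaluates the quadratic branch on the first element and the linear branch on the second, giving 0.75.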


2021 ◽  
Vol 9 (7) ◽  
pp. 691
Author(s):  
Kai Hu ◽  
Yanwen Zhang ◽  
Chenghang Weng ◽  
Pengsheng Wang ◽  
Zhiliang Deng ◽  
...  

When underwater vehicles operate, underwater images are often degraded by light absorption and by scattering and diffusion from floating particles. The generative adversarial network (GAN) is widely used in underwater image enhancement because it performs image-style conversion with high efficiency and quality. Although a GAN converts low-quality underwater images into high-quality ones by learning from reference (ground-truth) images, the quality of the generated images is limited by the reference dataset: if the reference images are themselves not well enhanced, the generated images also suffer. Thus, this paper proposes adding the natural image quality evaluation (NIQE) index to the GAN so that generated images have higher contrast, better match human visual perception, and can even surpass the reference images of the existing dataset. Several groups of comparative experiments, evaluated with both subjective and objective indicators, verify that the images enhanced by this algorithm are better than the reference images of the existing dataset.
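The idea of folding a no-reference quality index into the generator objective can be sketched as follows. NIQE itself fits a multivariate Gaussian model to natural-scene statistics and is too involved to reproduce here, so `quality_penalty` below is a simple global-contrast proxy standing in for the NIQE term, and `lam` is a hypothetical weighting hyperparameter:

```python
import numpy as np

def quality_penalty(img):
    # Stand-in for NIQE (lower is better): penalize low-contrast outputs.
    return 1.0 / (img.std() + 1e-6)

def generator_loss(adv_loss, fake_img, lam=0.1):
    # Total generator objective: adversarial term plus weighted quality term,
    # so the generator is pushed beyond merely matching the reference set.
    return adv_loss + lam * quality_penalty(fake_img)

rng = np.random.default_rng(0)
low_contrast = np.full((8, 8), 0.5) + rng.normal(0, 0.01, (8, 8))
high_contrast = rng.uniform(0, 1, (8, 8))
```

With equal adversarial loss, the higher-contrast image receives the lower total loss, which is the intended regularization effect.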


Author(s):  
Han Xu ◽  
Pengwei Liang ◽  
Wei Yu ◽  
Junjun Jiang ◽  
Jiayi Ma

In this paper, we propose a new end-to-end model, called dual-discriminator conditional generative adversarial network (DDcGAN), for fusing infrared and visible images of different resolutions. Unlike pixel-level methods and existing deep learning-based methods, the fusion task is accomplished through an adversarial process between a generator and two discriminators, in addition to a specially designed content loss. The generator is trained to produce realistic fused images that fool both discriminators. The two discriminators are trained to estimate, respectively, the JS divergence between the distribution of downsampled fused images and that of infrared images, and the JS divergence between the distribution of gradients of fused images and that of gradients of visible images. Thus, the fused images can compensate for features that are not constrained by the content loss alone. Consequently, the prominence of thermal targets from the infrared image and the texture details from the visible image can be preserved, or even enhanced, simultaneously in the fused image. Moreover, by constraining and distinguishing between the downsampled fused image and the low-resolution infrared image, DDcGAN is well suited to fusing images of different resolutions. Qualitative and quantitative experiments on publicly available datasets demonstrate the superiority of our method over the state-of-the-art.
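The dual-discriminator objective described above can be sketched numerically. `D_ir` and `D_vis` below are hypothetical stand-ins for the two discriminators (each returning a "real" probability), and the exact content-loss weighting is assumed, not taken from the paper:

```python
import numpy as np

def downsample(img, k=2):
    # Average-pool by factor k so the fused image matches the
    # low-resolution infrared input.
    h, w = img.shape
    return img[:h - h % k, :w - w % k].reshape(h // k, k, w // k, k).mean(axis=(1, 3))

def grad_map(img):
    # Gradient magnitude map, used to compare texture against the visible image.
    gy, gx = np.gradient(img)
    return np.hypot(gx, gy)

def g_loss(fused, ir, vis, D_ir, D_vis, lam=1.0):
    # Adversarial terms: fool the IR discriminator on the downsampled fused
    # image and the visible discriminator on the fused image's gradients.
    adv = (-np.log(D_ir(downsample(fused)) + 1e-8)
           - np.log(D_vis(grad_map(fused)) + 1e-8))
    # Content terms: intensity fidelity to IR, gradient fidelity to visible.
    content = (np.mean((downsample(fused) - ir) ** 2)
               + np.mean((grad_map(fused) - grad_map(vis)) ** 2))
    return adv + lam * content
```

The downsampling step is what lets a high-resolution fused output be judged against a lower-resolution infrared image, which is the key to handling different input resolutions.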


2020 ◽  
Vol 57 (14) ◽  
pp. 141002
Author(s):  
晋玮佩 Jin Weipei ◽  
郭继昌 Guo Jichang ◽  
祁清 Qi Qing

2021 ◽  
Vol 30 (01) ◽  
Author(s):  
Jin-Tao Yu ◽  
Rui-Sheng Jia ◽  
Li Gao ◽  
Ruo-Nan Yin ◽  
Hong-Mei Sun ◽  
...  

Information ◽  
2021 ◽  
Vol 13 (1) ◽  
pp. 1
Author(s):  
Rong Du ◽  
Weiwei Li ◽  
Shudong Chen ◽  
Congying Li ◽  
Yong Zhang

Underwater image enhancement recovers degraded underwater images to produce corresponding clear images. Image enhancement methods based on deep learning usually use paired data to train the model, but such paired data, e.g., degraded images and their corresponding clear images, are difficult to capture simultaneously in the underwater environment. In addition, retaining detailed information in the enhanced image is another critical problem. To solve these issues, we propose a novel unpaired underwater image enhancement method based on a cycle generative adversarial network (UW-CycleGAN) to recover degraded underwater images. The proposed UW-CycleGAN model includes three main modules: (1) a content loss regularizer is added to the CycleGAN generator, constraining the detailed information in a degraded image to remain in the corresponding generated clear image; (2) a blur-promoting adversarial loss regularizer is introduced into the discriminator to reduce blur and noise in the generated clear images; (3) a DenseNet block is added to the generator to retain more information from each feature map during training. Finally, experimental results on two unpaired underwater image datasets show satisfactory performance compared to state-of-the-art image enhancement methods, demonstrating the effectiveness of the proposed model.
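The interplay between the cycle-consistency term and the content regularizer in module (1) can be sketched as below. `G` (degraded to clear) and `F` (clear to degraded) are hypothetical generator stand-ins, and the L1 content term and weights are assumptions for illustration, not the authors' exact formulation:

```python
import numpy as np

def cycle_content_loss(x, G, F, lam_cyc=10.0, lam_content=1.0):
    fake_clear = G(x)
    # Cycle consistency: translating forward then back should recover x,
    # which is what makes training possible without paired data.
    cyc = np.abs(F(fake_clear) - x).mean()
    # Content regularizer: keep the generated clear image structurally
    # close to the degraded input so details are not lost.
    content = np.abs(fake_clear - x).mean()
    return lam_cyc * cyc + lam_content * content
```

With identity mappings for both generators the loss is exactly zero, and any detail lost in `G(x)` shows up directly in the content term.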

