underwater image
Recently Published Documents

TOTAL DOCUMENTS: 797 (five years: 441)
H-INDEX: 30 (five years: 10)

2022, Vol 149, pp. 106785
Author(s): Jiajie Wang, Minjie Wan, Guohua Gu, Weixian Qian, Kan Ren, ...

Electronics, 2022, Vol 11 (1), pp. 150
Author(s): Meicheng Zheng, Weilin Luo

Due to refraction, absorption, and scattering of light by suspended particles in water, underwater images are characterized by low contrast, blurred details, and color distortion. In this paper, a fusion algorithm to restore and enhance underwater images is proposed. It consists of a color restoration module, an end-to-end defogging module, and a brightness equalization module. In the color restoration module, a color balance algorithm based on the CIE Lab color model is proposed to alleviate color deviation in underwater images. In the end-to-end defogging module, a CNN maps the input image directly to the defogged output image and improves the contrast of the underwater image. Within this network, a sub-network reduces the depth needed to obtain the same features, several depthwise separable convolutions reduce the number of parameters and the computational cost of training, and a basic attention module highlights important regions of the image. To improve the defogging network's ability to extract global information, a cross-layer connection and a pooling pyramid module are added. In the brightness equalization module, contrast-limited adaptive histogram equalization (CLAHE) is used to balance the overall brightness. The proposed fusion algorithm for underwater image restoration and enhancement is verified by experiments and by comparison with previous deep learning models and traditional methods. The results show that the color correction and detail enhancement of the proposed method are superior.
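The abstract does not give implementation details, but the two non-learned stages, the Lab-space color balance and the CLAHE brightness equalization, can be sketched with OpenCV. The gray-world-style shift of the a/b channels and the CLAHE parameters below are assumptions, not the authors' exact algorithm:

```python
import cv2
import numpy as np

def color_balance_lab(bgr):
    """Rough color balance in CIE Lab: shift the a/b channels toward neutral gray.

    A generic gray-world-style correction, not the paper's exact algorithm,
    which the abstract does not specify.
    """
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    l, a, b = cv2.split(lab)
    # a and b are centered at 128 in OpenCV's 8-bit Lab encoding.
    a -= (a.mean() - 128.0)
    b -= (b.mean() - 128.0)
    lab = np.clip(cv2.merge([l, a, b]), 0, 255).astype(np.uint8)
    return cv2.cvtColor(lab, cv2.COLOR_LAB2BGR)

def clahe_brightness(bgr, clip_limit=2.0, tile_grid=(8, 8)):
    """Contrast-limited adaptive histogram equalization on the L channel only."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB)
    l, a, b = cv2.split(lab)
    clahe = cv2.createCLAHE(clipLimit=clip_limit, tileGridSize=tile_grid)
    return cv2.cvtColor(cv2.merge([clahe.apply(l), a, b]), cv2.COLOR_LAB2BGR)

if __name__ == "__main__":
    img = cv2.imread("underwater.jpg")   # placeholder input path
    out = clahe_brightness(color_balance_lab(img))
    cv2.imwrite("enhanced.jpg", out)
```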


Sensors, 2021, Vol 22 (1), pp. 313
Author(s): Chin-Feng Lin, Cheng-Fong Wu, Ching-Lung Hsieh, Shun-Hsyung Chang, Ivan A. Parinov, ...

In this paper, a low-power underwater acoustic (UWA) image transceiver based on generalized frequency division multiplexing (GFDM) modulation for underwater communication is proposed. The proposed transceiver integrates a low-density parity-check (LDPC) code error protection scheme, adaptive 4-quadrature amplitude modulation (QAM) and 16-QAM strategies, GFDM modulation, and a power assignment mechanism in a UWA image communication environment. The transmission bit error rates (BERs), the peak signal-to-noise ratios (PSNRs) of the received underwater images, and the power-saving ratio (PSR) of the proposed transceiver were simulated using 4-QAM and 16-QAM, with perfect channel estimation and with channel estimation errors (CEEs) of 5%, 10%, and 20%. The PSNR of the received underwater image is 44.46 dB when using 4-QAM with a CEE of 10%, and 48.79 dB when using 16-QAM with a CEE of 10%. At a BER of 10⁻⁴, the received underwater images have high PSNR values and high resolution, indicating that the proposed transceiver is suitable for underwater image sensor signal transmission.
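As a rough companion to the reported figures, the sketch below shows how the received-image PSNR and a Gray-coded 4-QAM (QPSK) symbol mapping could be computed in NumPy; it is not the authors' GFDM transceiver, and the unit-average-power normalization is an assumption:

```python
import numpy as np

def psnr(reference, received, peak=255.0):
    """PSNR between the transmitted and received image (8-bit arrays, same shape)."""
    ref = reference.astype(np.float64)
    rec = received.astype(np.float64)
    mse = np.mean((ref - rec) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak ** 2 / mse)

def qam4_modulate(bits):
    """Map bit pairs to Gray-coded 4-QAM (QPSK) symbols with unit average power."""
    b = np.asarray(bits, dtype=np.int8).reshape(-1, 2)
    i = 1 - 2 * b[:, 0]   # bit 0 -> +1, bit 1 -> -1
    q = 1 - 2 * b[:, 1]
    return (i + 1j * q) / np.sqrt(2.0)
```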


Information, 2021, Vol 13 (1), pp. 1
Author(s): Rong Du, Weiwei Li, Shudong Chen, Congying Li, Yong Zhang

Underwater image enhancement recovers degraded underwater images to produce corresponding clear images. Image enhancement methods based on deep learning usually use paired data to train the model, whereas such paired data, i.e., degraded images and their corresponding clear images, are difficult to capture simultaneously in the underwater environment. In addition, retaining detailed information in the enhanced image is another critical problem. To address these issues, we propose a novel unpaired underwater image enhancement method via a cycle generative adversarial network (UW-CycleGAN) to recover degraded underwater images. Our proposed UW-CycleGAN model includes three main modules: (1) a content loss regularizer is adopted into the CycleGAN generator, which constrains the detailed information in a degraded image to remain in the corresponding generated clear image; (2) a blur-promoting adversarial loss regularizer is introduced into the discriminator to reduce blur and noise in the generated clear images; (3) a DenseNet block is added to the generator to retain more information from each feature map during training. Experimental results on two unpaired underwater image datasets show satisfactory performance compared to state-of-the-art image enhancement methods, demonstrating the effectiveness of the proposed model.
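The exact form of the content loss regularizer is not given in the abstract; a plausible PyTorch sketch is an L1 term between the degraded input and the generated clear image, combined with the standard CycleGAN cycle-consistency loss. The weights lam and lam_content are assumed values, not the paper's:

```python
import torch.nn as nn

l1 = nn.L1Loss()

def cycle_consistency_loss(real_a, rec_a, real_b, rec_b, lam=10.0):
    """Standard CycleGAN cycle loss: G_BA(G_AB(a)) ~ a and G_AB(G_BA(b)) ~ b."""
    return lam * (l1(rec_a, real_a) + l1(rec_b, real_b))

def content_loss(degraded, generated, lam_content=5.0):
    """Assumed content regularizer: keep the low-level structure of the degraded
    input in the generated clear image (the paper's formulation is not given)."""
    return lam_content * l1(generated, degraded)
```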


Water, 2021, Vol 13 (23), pp. 3470
Author(s): Fayadh Alenezi, Ammar Armghan, Sachi Nandan Mohanty, Rutvij H. Jhaveri, Prayag Tiwari

A lack of adequate consideration of underwater image enhancement leaves room for further research in the field. In particular, the global background light has not been adequately addressed in the presence of backscattering. This paper presents a technique based on pixel differences between global and local patches for scene depth estimation. The pixel variance is computed from the green-red, green-blue, and red-blue channel differences, in addition to the absolute mean intensity functions. The global background light is extracted from a moving average of the impact of suspended light and the brightest pixels within the image color channels. We introduce a block-greedy algorithm in a novel convolutional neural network (CNN) to normalize the attenuation ratios of the different color channels and to select regions with the lowest variance. We address the discontinuity associated with underwater images by transforming both local and global pixel values, and we minimize the energy of the proposed CNN via a novel Markov random field to smooth edges and improve the final underwater image features. A comparison against existing state-of-the-art algorithms using entropy, Underwater Color Image Quality Evaluation (UCIQE), Underwater Image Quality Measure (UIQM), Underwater Image Colorfulness Measure (UICM), and Underwater Image Sharpness Measure (UISM) indicates better performance of the proposed approach in terms of average values and consistency. On average, the proposed technique yields higher UICM values than the reference methods, which explains its better color balance. The mean (μ) values of UCIQE, UISM, and UICM for the proposed method exceed those of the existing techniques, with improvements of 0.4%, 4.8%, 9.7%, 5.1%, and 7.2% in entropy, UCIQE, UIQM, UICM, and UISM, respectively, over the best existing techniques. Consequently, the dehazed images have sharper, more colorful, and clearer features than those produced by existing state-of-the-art methods in most cases. Stable standard deviation (σ) values reflect the consistency of the visual results in terms of color sharpness and feature clarity compared with the reference methods. Our own assessment shows that the only weakness of the proposed technique is that it applies only to underwater images. Future research could seek to establish edge strengthening without color saturation enhancement.
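A minimal NumPy sketch of two ingredients named in the abstract, the channel-difference prior and the background-light estimate from the brightest pixels, is given below; the top-fraction threshold is an assumption, and the paper's moving-average scheme is not reproduced:

```python
import numpy as np

def estimate_background_light(img, top_fraction=0.001):
    """Estimate the global background light per color channel from the brightest
    pixels; a simplified stand-in for the paper's moving-average scheme.

    img: float array in [0, 1] with shape (H, W, 3) ordered as R, G, B.
    """
    h, w, _ = img.shape
    n_top = max(1, int(h * w * top_fraction))
    background = np.empty(3)
    for c in range(3):
        channel = img[..., c].ravel()
        brightest = np.partition(channel, -n_top)[-n_top:]   # top n_top values
        background[c] = brightest.mean()
    return background

def channel_difference_prior(img):
    """Per-pixel channel differences (G-R, G-B, R-B) used for depth estimation."""
    r, g, b = img[..., 0], img[..., 1], img[..., 2]
    return np.stack([g - r, g - b, r - b], axis=-1)
```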


Sensors, 2021, Vol 21 (23), pp. 8160
Author(s): Meijing Gao, Yang Bai, Zhilong Li, Shiyu Li, Bozhi Zhang, ...

In recent years, jellyfish outbreaks have frequently occurred in offshore areas worldwide, posing a significant threat to marine fisheries, tourism, coastal industry, and personal safety. Effective monitoring of jellyfish is a vital way to address these problems, but optical detection methods for jellyfish are still at an early stage. Therefore, this paper studies a jellyfish detection method based on convolutional neural network theory and digital image processing technology. Because the quality of underwater images directly affects the detection results, we first study underwater image preprocessing algorithms. The results show that image quality improves after applying three algorithms, namely prior-based defogging, adaptive histogram equalization, and multi-scale Retinex enhancement, which is more conducive to detection. We establish a dataset containing seven species of jellyfish and fish, with a total of 2141 images. The YOLOv3 algorithm is used to detect jellyfish, and its feature extraction network, Darknet53, is optimized to ensure real-time detection. In addition, we introduce label smoothing and a cosine annealing learning rate schedule during training. The experimental results show that the improved algorithm increases jellyfish detection accuracy while maintaining detection speed. This work lays the foundation for a real-time underwater optical monitoring system for jellyfish.
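The label smoothing and cosine annealing steps can be illustrated with a short PyTorch training loop; the toy classification head, dummy data, and hyperparameters below are placeholders, not the paper's YOLOv3 configuration:

```python
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

# Toy stand-in for a detector's classification head: 8 classes (7 jellyfish species + fish).
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 64 * 64, 8))

# Dummy data so the loop runs as written; real training would use the 2141-image dataset.
data = TensorDataset(torch.randn(32, 3, 64, 64), torch.randint(0, 8, (32,)))
loader = DataLoader(data, batch_size=8, shuffle=True)

criterion = nn.CrossEntropyLoss(label_smoothing=0.1)                     # label smoothing
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2, momentum=0.9)
scheduler = torch.optim.lr_scheduler.CosineAnnealingLR(optimizer, T_max=10, eta_min=1e-5)

for epoch in range(10):
    for images, labels in loader:
        optimizer.zero_grad()
        loss = criterion(model(images), labels)
        loss.backward()
        optimizer.step()
    scheduler.step()   # cosine-annealed learning rate, stepped once per epoch
```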

