Multi-scale retinex-based adaptive gray-scale transformation method for underwater image enhancement

Author(s):  
Jingchun Zhou ◽  
Jian Yao ◽  
Weishi Zhang ◽  
Dehuan Zhang
2021 ◽  
Vol 9 (2) ◽  
pp. 225
Author(s):  
Farong Gao ◽  
Kai Wang ◽  
Zhangyi Yang ◽  
Yejian Wang ◽  
Qizhong Zhang

In this study, an underwater image enhancement method based on local contrast correction (LCC) and multi-scale fusion is proposed to resolve the low contrast and color distortion of underwater images. First, the red channel of the original image is compensated, and the compensated image is white-balanced. Second, LCC and image sharpening are carried out to generate two different image versions. Finally, the local-contrast-corrected image is fused with the sharpened image by the multi-scale fusion method. The results show that the proposed method can be applied to degraded underwater images in different environments without resorting to an image formation model, and that it effectively resolves the color distortion, low contrast, and indistinct details of underwater images.
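The pipeline described above (red-channel compensation, white balance, two enhanced versions, weighted fusion) can be sketched in NumPy. This is a minimal single-scale sketch under assumed parameters (the compensation strength `alpha`, a 3×3 box blur for sharpening, and uniform fusion weights); the abstract's actual method uses multi-scale fusion and is not specified at this level of detail:

```python
import numpy as np

def compensate_red(img, alpha=1.0):
    """Boost the attenuated red channel using the green channel
    (a common compensation heuristic; alpha is an assumed strength)."""
    r, g = img[..., 0], img[..., 1]
    comp = r + alpha * (g.mean() - r.mean()) * (1.0 - r) * g
    out = img.copy()
    out[..., 0] = np.clip(comp, 0.0, 1.0)
    return out

def gray_world_white_balance(img):
    """Scale each channel so its mean matches the global mean (gray-world)."""
    means = img.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / (means + 1e-6)
    return np.clip(img * gains, 0.0, 1.0)

def unsharp_mask(img, strength=1.0):
    """Sharpen by adding back the difference from a 3x3 box-blurred copy."""
    h, w = img.shape[:2]
    pad = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge")
    blur = sum(pad[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0
    return np.clip(img + strength * (img - blur), 0.0, 1.0)

def fuse(a, b, wa, wb):
    """Normalized weighted fusion of two enhanced versions (single scale;
    the paper fuses at multiple scales)."""
    total = wa + wb + 1e-6
    return (a * wa[..., None] + b * wb[..., None]) / total[..., None]
```

A typical call chain would be `fuse(wb, unsharp_mask(wb), w1, w2)` where `wb` is the white-balanced, red-compensated image and `w1`, `w2` are per-pixel weight maps.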


Mathematics ◽  
2021 ◽  
Vol 9 (6) ◽  
pp. 595
Author(s):  
Huajun Song ◽  
Rui Wang

To address the two problems of color deviation and poor visibility in underwater images, this paper proposes an underwater image enhancement method based on multi-scale fusion and global stretching of dual models (MFGS), which does not rely on an underwater optical imaging model. The proposed method consists of three stages. In the first stage, white balancing is selected to correct the color deviation because, compared with other color correction algorithms, it effectively eliminates the undesirable color cast caused by medium attenuation. Then, to address the poor performance of the saliency weight map in traditional fusion processing, an updated saliency-weight strategy combining contrast and spatial cues is proposed to achieve high-quality fusion. Finally, analysis of the results of the above steps shows that brightness and clarity need further improvement: global stretching of all channels in the red, green, blue (RGB) model is applied to enhance color contrast, and selective stretching of the L channel in the Commission Internationale de l'Éclairage Lab (CIE-Lab) model is implemented to achieve a better de-hazing effect. Quantitative and qualitative assessments on the underwater image enhancement benchmark dataset (UIEBD) show that the images enhanced by the proposed approach achieve significant improvements in color and visibility.
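The two stretching steps in the final stage can be illustrated with a short NumPy sketch. The percentile cut-offs and the target L range are assumptions, since the abstract does not specify them:

```python
import numpy as np

def global_stretch(img, lo_pct=0.5, hi_pct=99.5):
    """Percentile-based global stretch applied to every RGB channel
    (the percentile cut-offs are assumed, not taken from the paper)."""
    out = np.empty_like(img)
    for c in range(3):
        lo, hi = np.percentile(img[..., c], [lo_pct, hi_pct])
        out[..., c] = np.clip((img[..., c] - lo) / (hi - lo + 1e-6), 0.0, 1.0)
    return out

def selective_l_stretch(L, lo=10.0, hi=90.0):
    """Selective stretch of the CIE-Lab L channel (range 0..100): the
    occupied range is expanded toward [lo, hi], approximating the
    de-hazing step (the target bounds are assumptions)."""
    lmin, lmax = np.percentile(L, [1, 99])
    stretched = (L - lmin) / (lmax - lmin + 1e-6) * (hi - lo) + lo
    return np.clip(stretched, 0.0, 100.0)
```

Stretching L selectively (rather than the full 0–100 range) avoids over-brightening already well-exposed regions while still expanding the compressed haze-dominated range.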


IEEE Access ◽  
2020 ◽  
Vol 8 ◽  
pp. 128973-128990
Author(s):  
Linfeng Bai ◽  
Weidong Zhang ◽  
Xipeng Pan ◽  
Chenping Zhao

2017 ◽  
Vol 245 ◽  
pp. 1-9 ◽  
Author(s):  
Shu Zhang ◽  
Ting Wang ◽  
Junyu Dong ◽  
Hui Yu

Author(s):  
M. Sudhakara ◽  
M. Janaki Meena

Underwater image enhancement (UIE) is an important computer vision task with many applications, and many strategies have been proposed for it in recent years. Underwater images suffer from low quality due to a mixture of noise, wavelength-dependent attenuation, and light scattering. This paper describes an effective strategy to improve the quality of degraded underwater images. Existing dehazing methods in the literature based on the dark channel prior use two separate phases to evaluate the transmission map (i.e., transmission estimation and transmission refinement); these methods cannot restore images accurately and require more computation time. The proposed three-step method is an imaging approach that needs neither particular hardware nor knowledge of the underwater conditions. First, we use a multi-layer perceptron (MLP) to estimate the transmission map from the base channel, followed by contrast enhancement. Next, a gamma-adjusted version of the MLP-recovered image is derived. Finally, the multi-scale fusion method is applied to the two resulting images, with a normalized weight computed for each image from three different weight maps in the fusion process. Quantitative results show that our approach performs significantly better, with margins of 0.536, 2.185, and 1.272 in the PCQI, UCIQE, and UIQM metrics, respectively, on a single-underwater-image benchmark dataset. Qualitative results also compare favorably with state-of-the-art techniques.
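The gamma-adjustment and weighted-fusion steps above can be sketched at a single scale in NumPy. The abstract says three weights are combined per image but does not name them, so the particular cues below (Laplacian contrast, saturation, exposedness, as popularized by fusion-based enhancement methods) and the gamma value are assumptions:

```python
import numpy as np

def gamma_correct(img, gamma=0.7):
    """Gamma-adjusted version of the recovered image (gamma < 1 brightens;
    the exact value is an assumption)."""
    return np.power(np.clip(img, 0.0, 1.0), gamma)

def laplacian_contrast_weight(img):
    """Contrast cue: magnitude of a discrete Laplacian on the luminance."""
    lum = img.mean(axis=-1)
    pad = np.pad(lum, 1, mode="edge")
    lap = (pad[:-2, 1:-1] + pad[2:, 1:-1] + pad[1:-1, :-2]
           + pad[1:-1, 2:] - 4.0 * lum)
    return np.abs(lap)

def saturation_weight(img):
    """Saturation cue: per-pixel standard deviation across RGB channels."""
    return img.std(axis=-1)

def exposedness_weight(img, sigma=0.25):
    """Exposedness cue: Gaussian around mid-gray luminance."""
    lum = img.mean(axis=-1)
    return np.exp(-((lum - 0.5) ** 2) / (2.0 * sigma ** 2))

def fuse_images(imgs):
    """Single-scale fusion with per-pixel normalized weights built from the
    three cues (the paper applies such weights inside a multi-scale pyramid)."""
    ws = [laplacian_contrast_weight(i) + saturation_weight(i)
          + exposedness_weight(i) + 1e-6 for i in imgs]
    total = sum(ws)
    return sum(img * (w / total)[..., None] for img, w in zip(imgs, ws))
```

Because the weights are normalized to sum to one at every pixel, the fused output is a convex combination of the inputs and stays within their value range.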


Author(s):  
Zetian Mi ◽  
Zheng Liang ◽  
Yafei Wang ◽  
Xianping Fu ◽  
Zhengyu Chen
