2019, Vol 2019, pp. 1-14
Author(s): Yingkun Hou, Xiaobo Qu, Guanghai Liu, Seong-Whan Lee, Dinggang Shen

In this paper, we develop a novel linear singularity representation method using spatial K-neighbor block-extraction and the Haar transform (BEH). Block-extraction provides a group of image blocks with similar (generally smooth) backgrounds but different edge locations. An interblock Haar transform is then used to represent these differences, thus achieving a linear singularity representation. Next, we magnify the weak detail coefficients of BEH to enhance the image. Experimental results show that the proposed method achieves better image enhancement than block-matching and 3D filtering (BM3D), the nonsubsampled contourlet transform (NSCT), and guided image filtering.
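A rough sketch of the grouping-and-interblock-Haar idea is given below: K spatially neighboring blocks are stacked, a single-level Haar transform is applied along the group (block) axis, and the weak detail coefficients are magnified before inverting. The block size, group size, threshold, and gain are illustrative assumptions, not the authors' settings.

```python
import numpy as np

def haar_1d(x):
    """Single-level orthonormal Haar transform along axis 0 (even length)."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation (low-pass) coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail (high-pass) coefficients
    return a, d

def ihaar_1d(a, d):
    """Inverse of haar_1d."""
    x = np.empty((a.shape[0] * 2,) + a.shape[1:], dtype=a.dtype)
    x[0::2] = (a + d) / np.sqrt(2)
    x[1::2] = (a - d) / np.sqrt(2)
    return x

def enhance_group(group, gain=2.0, thresh=0.05):
    """Magnify weak interblock detail coefficients of one block group.

    group: array of shape (K, B, B) holding K spatially neighboring blocks.
    """
    a, d = haar_1d(group)          # interblock Haar along the group axis
    weak = np.abs(d) < thresh      # weak details carry the interblock edge differences
    d[weak] *= gain                # amplify them to enhance the image
    return ihaar_1d(a, d)

# Toy usage: 8 neighboring 8x8 blocks drawn at random.
rng = np.random.default_rng(0)
blocks = rng.random((8, 8, 8))
print(enhance_group(blocks).shape)  # (8, 8, 8)
```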


Author(s): Qi Mu, Xinyue Wang, Yanyan Wei, Zhanli Li

In the state of the art, grayscale image enhancement algorithms are typically adopted to enhance RGB color images captured under low or non-uniform illumination. Because these methods are applied to each RGB channel independently, imbalanced inter-channel enhancement (color distortion) is often visible in the resulting images. Moreover, images with non-uniform illumination enhanced by the retinex algorithm are prone to artifacts such as local blurring, halos, and over-enhancement. To address these problems, an improved RGB color image enhancement method based on weighted guided image filtering (WGIF) is proposed for images captured under non-uniform illumination or in poor visibility. Unlike the conventional retinex algorithm and its variants, the proposed method estimates the illumination component with WGIF rather than a Gaussian surround function; the anisotropy and adaptive local regularization of WGIF avoid local blurring and halo artifacts. To limit color distortion, RGB images are first converted to the HSI (hue, saturation, intensity) color space, where only the intensity channel is enhanced, and are then converted back to RGB space by a linear color restoration algorithm. Experimental results show that the proposed method is effective for both RGB color and grayscale images captured under low exposure and non-uniform illumination, achieving better visual quality and objective evaluation scores than the comparator algorithms. It is also computationally efficient owing to the linear color restoration step.
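A minimal sketch of the intensity-only retinex enhancement with linear color restoration is shown below. It uses OpenCV's plain guided filter (cv2.ximgproc.guidedFilter) as a stand-in for the paper's weighted guided image filter, and the radius, regularization, and gamma values are illustrative assumptions.

```python
import cv2
import numpy as np

def enhance_intensity_retinex(rgb, radius=16, eps=1e-3, gamma=0.6):
    """Retinex-style enhancement of the intensity channel only (sketch)."""
    img = rgb.astype(np.float32) / 255.0
    intensity = img.mean(axis=2)                     # I channel of the HSI model

    # Estimate the illumination with an edge-preserving filter
    # (the paper uses WGIF; a plain guided filter stands in here).
    illum = cv2.ximgproc.guidedFilter(intensity, intensity, radius, eps)
    illum = np.clip(illum, 1e-3, 1.0)

    reflectance = intensity / illum                  # retinex decomposition
    enhanced_i = np.clip(reflectance * illum ** gamma, 0, 1)  # compress illumination

    # Linear color restoration: scale each RGB channel by the intensity gain,
    # which keeps the inter-channel ratios (hue, saturation) unchanged.
    gain = enhanced_i / np.maximum(intensity, 1e-3)
    out = np.clip(img * gain[..., None], 0, 1)
    return (out * 255).astype(np.uint8)
```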


IEEE Access, 2020, Vol 8, pp. 196690-196699
Author(s): Yaqiao Cheng, Zhenhong Jia, Huicheng Lai, Jie Yang, Nikola K. Kasabov

2021, Vol 9 (2), pp. 225
Author(s): Farong Gao, Kai Wang, Zhangyi Yang, Yejian Wang, Qizhong Zhang

In this study, an underwater image enhancement method based on local contrast correction (LCC) and multi-scale fusion is proposed to resolve the low contrast and color distortion of underwater images. First, the red channel of the original image is compensated, and the compensated image is white-balanced. Second, LCC and image sharpening are carried out to generate two different versions of the image. Finally, the locally contrast-corrected image is fused with the sharpened image by a multi-scale fusion method. The results show that the proposed method can be applied to degraded underwater images in different environments without resorting to an image formation model, and that it effectively resolves the color distortion, low contrast, and indistinct details of underwater images.
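The sketch below follows the same pipeline in a simplified, single-scale form: red-channel compensation and grey-world white balance, a contrast-corrected input (CLAHE standing in for the paper's LCC) and a sharpened input, and a contrast-weighted blend standing in for the Laplacian-pyramid multi-scale fusion. All parameters are illustrative assumptions.

```python
import cv2
import numpy as np

def underwater_enhance(bgr, alpha=1.0):
    """Simplified single-scale sketch of compensate / white-balance / fuse."""
    img = bgr.astype(np.float32) / 255.0
    b, g, r = cv2.split(img)

    # 1. Red-channel compensation from the green channel, then grey-world balance.
    r = np.clip(r + alpha * (g.mean() - r.mean()) * (1.0 - r) * g, 0, 1)
    img = cv2.merge([b, g, r])
    img = np.clip(img * img.mean() / (img.mean(axis=(0, 1)) + 1e-6), 0, 1)

    # 2. Two inputs: local contrast correction (CLAHE stands in for LCC)
    #    and an unsharp-mask sharpened version.
    lab = cv2.cvtColor((img * 255).astype(np.uint8), cv2.COLOR_BGR2LAB)
    l, a_, b_ = cv2.split(lab)
    l = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8)).apply(l)
    contrast = cv2.cvtColor(cv2.merge([l, a_, b_]),
                            cv2.COLOR_LAB2BGR).astype(np.float32) / 255.0
    blur = cv2.GaussianBlur(img, (0, 0), 3)
    sharp = np.clip(img + 0.5 * (img - blur), 0, 1)

    # 3. Single-scale fusion with normalized contrast (Laplacian) weights;
    #    the paper fuses at multiple scales instead.
    w1 = np.abs(cv2.Laplacian(cv2.cvtColor(contrast, cv2.COLOR_BGR2GRAY), cv2.CV_32F)) + 1e-3
    w2 = np.abs(cv2.Laplacian(cv2.cvtColor(sharp, cv2.COLOR_BGR2GRAY), cv2.CV_32F)) + 1e-3
    w1, w2 = w1 / (w1 + w2), w2 / (w1 + w2)
    fused = np.clip(contrast * w1[..., None] + sharp * w2[..., None], 0, 1)
    return (fused * 255).astype(np.uint8)
```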


Mathematics, 2021, Vol 9 (6), pp. 595
Author(s): Huajun Song, Rui Wang

To address the two problems of color deviation and poor visibility in underwater images, this paper proposes an underwater image enhancement method based on multi-scale fusion and dual-model global stretching (MFGS), which does not rely on an underwater optical imaging model. The proposed method consists of three stages. In the first stage, white balancing is selected to correct color deviation because, compared with other color correction algorithms, it effectively removes the undesirable color cast caused by medium attenuation. In the second stage, to overcome the poor performance of the saliency weight map in traditional fusion processing, an updated saliency weighting strategy combining contrast and spatial cues is proposed to achieve high-quality fusion. Finally, because analysis of the intermediate results shows that brightness and clarity still need improvement, global stretching of all channels in the red, green, blue (RGB) model is applied to enhance color contrast, and selective stretching of the L channel in the CIE-Lab (Commission Internationale de l'Éclairage) model is applied to achieve a stronger de-hazing effect. Quantitative and qualitative assessments on the underwater image enhancement benchmark dataset (UIEBD) show that the enhanced images of the proposed approach achieve significant improvements in color and visibility.
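The final stage (global RGB stretching followed by selective L-channel stretching in CIE-Lab) can be sketched as below; the percentile limits and the L-channel stretch range are illustrative assumptions, not the paper's settings.

```python
import cv2
import numpy as np

def global_and_lab_stretch(bgr, low_pct=1, high_pct=99, l_low=0.1, l_high=0.9):
    """Sketch of the last MFGS stage only: RGB stretch, then L-channel stretch."""
    img = bgr.astype(np.float32)

    # Global stretching of all three RGB channels to improve color contrast.
    for c in range(3):
        lo, hi = np.percentile(img[..., c], (low_pct, high_pct))
        img[..., c] = np.clip((img[..., c] - lo) / (hi - lo + 1e-6), 0, 1) * 255

    # Selective stretching of the L channel in CIE-Lab for a de-hazing effect,
    # leaving the a and b (color) channels untouched.
    lab = cv2.cvtColor(img.astype(np.uint8), cv2.COLOR_BGR2LAB).astype(np.float32)
    L = np.clip((lab[..., 0] / 255.0 - l_low) / (l_high - l_low), 0, 1)
    lab[..., 0] = L * 255.0
    return cv2.cvtColor(lab.astype(np.uint8), cv2.COLOR_LAB2BGR)
```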

