TBEFN: A two-branch exposure-fusion network for low-light image enhancement

2020 ◽  
pp. 1-1
Author(s):  
Kun Lu ◽  
Lihong Zhang
2020 ◽  
Vol 34 (07) ◽  
pp. 13106-13113 ◽  
Author(s):  
Minfeng Zhu ◽  
Pingbo Pan ◽  
Wei Chen ◽  
Yi Yang

This work focuses on extremely low-light image enhancement, which aims to improve image brightness and reveal hidden information in darkened areas. Image enhancement approaches have recently made impressive progress, but existing methods still suffer from three main problems: (1) low-light images are usually high-contrast, and existing methods may fail to recover image details in extremely dark or bright areas; (2) current methods cannot precisely correct the color of low-light images; (3) when object edges are unclear, the pixel-wise loss may treat pixels of different objects equally and produce blurry images. In this paper, we propose a two-stage method called Edge-Enhanced Multi-Exposure Fusion Network (EEMEFN) to enhance extremely low-light images. In the first stage, a multi-exposure fusion module addresses the high-contrast and color-bias issues: it synthesizes a set of images with different exposure times from a single input and constructs an accurate normal-light image by combining well-exposed areas under different illumination conditions, producing realistic initial images with correct color from extremely noisy, low-light inputs. In the second stage, an edge enhancement module refines the initial images with the help of edge information, so the method can reconstruct high-quality images with sharp edges while minimizing the pixel-wise loss. Experiments on the See-in-the-Dark dataset indicate that EEMEFN achieves state-of-the-art performance.
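For intuition only, the multi-exposure fusion idea above can be illustrated with a classical exposure-fusion sketch. The Python code below is not the authors' learned fusion module: it synthesizes pseudo-exposures from a single normalized image by brightness scaling and blends them with a per-pixel well-exposedness weight. The exposure ratios, the sigma value, and the function names are illustrative assumptions.

    import numpy as np

    def synthesize_exposures(img, ratios=(1, 2, 4, 8)):
        # Scale a normalized low-light image (H, W, 3) in [0, 1] by several
        # hypothetical exposure ratios to mimic different exposure times.
        return [np.clip(img * r, 0.0, 1.0) for r in ratios]

    def well_exposedness(img, sigma=0.2):
        # Per-pixel weight that favours values near mid-grey (0.5),
        # as in Mertens-style exposure fusion.
        w = np.exp(-((img - 0.5) ** 2) / (2 * sigma ** 2))
        return w.prod(axis=2)  # combine the three colour channels

    def fuse_exposures(img, ratios=(1, 2, 4, 8), eps=1e-6):
        # Weighted average of the synthesized exposures, so each output pixel
        # is taken mostly from the branches where it is well exposed.
        stack = synthesize_exposures(img, ratios)
        weights = np.stack([well_exposedness(e) for e in stack])      # (N, H, W)
        weights = weights / (weights.sum(axis=0, keepdims=True) + eps)
        return sum(w[..., None] * e for w, e in zip(weights, stack))

In EEMEFN this fusion is learned and followed by the edge enhancement stage; the sketch only shows why combining differently exposed branches can recover detail in both very dark and very bright regions.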


2021 ◽  
Author(s):  
Zhuqing Jiang ◽  
Haotian Li ◽  
Liangjie Liu ◽  
Aidong Men ◽  
Haiying Wang

2021 ◽  
Vol 11 (11) ◽  
pp. 5055
Author(s):  
Hong Liang ◽  
Ankang Yu ◽  
Mingwen Shao ◽  
Yuru Tian

Because of their low signal-to-noise ratio and low contrast, low-light images suffer from color distortion, low visibility, and noise, which reduce the accuracy of target detection and can even cause targets to be missed entirely. However, re-annotating a dataset for this type of image raises costs and can reduce model robustness. To address this, we propose a deep-learning-based low-light image enhancement model. Feature extraction is guided by an illumination map and a noise map, and the network is then trained to predict the coefficients of local affine models in bilateral space. With these components, our network can effectively denoise and enhance images. Extensive experiments on the LOL dataset show that the model outperforms traditional image enhancement algorithms in both image quality and speed.
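Predicting "local affine model coefficients in bilateral space" follows the general pattern of HDRNet-style enhancement: a network outputs a low-resolution bilateral grid of affine colour transforms, which is then sliced with a guidance map and applied to the full-resolution image. The sketch below is a minimal, assumption-laden illustration of only the slice-and-apply step (nearest-neighbour slicing, a random grid standing in for the network's prediction, and a crude luminance guide); it is not the authors' model.

    import numpy as np

    def apply_bilateral_grid(image, grid, guide):
        # image : (H, W, 3) full-resolution input in [0, 1]
        # grid  : (gh, gw, gd, 3, 4) affine coefficients per grid cell
        # guide : (H, W) guidance map in [0, 1], e.g. luminance
        H, W, _ = image.shape
        gh, gw, gd, _, _ = grid.shape
        # Nearest-neighbour slicing; real implementations interpolate trilinearly.
        ys = np.clip(np.arange(H) * gh // H, 0, gh - 1)
        xs = np.clip(np.arange(W) * gw // W, 0, gw - 1)
        zs = np.clip((guide * gd).astype(int), 0, gd - 1)             # (H, W)
        coeffs = grid[ys[:, None], xs[None, :], zs]                   # (H, W, 3, 4)
        homog = np.concatenate([image, np.ones((H, W, 1))], axis=-1)  # (H, W, 4)
        out = np.einsum('hwij,hwj->hwi', coeffs, homog)               # per-pixel affine
        return np.clip(out, 0.0, 1.0)

    # Illustrative usage with random data in place of a predicted grid.
    img = np.random.rand(256, 256, 3)
    guide = img.mean(axis=2)                    # crude luminance guide
    grid = np.random.rand(16, 16, 8, 3, 4)      # stand-in for network output
    enhanced = apply_bilateral_grid(img, grid, guide)

In the paper, these coefficients would be predicted by the network conditioned on the illumination and noise maps; only that prediction step, not the affine application shown here, carries the learned enhancement.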

