Multiple scattering model based single image dehazing

Author(s):  
Renjie He ◽  
Zhiyong Wang ◽  
Yangyu Fan ◽  
David Dagan Feng

2021 ◽
Author(s):  
Shunmin An ◽  
Xixia Huang ◽  
Linling Wang ◽  
Zhangjing Zheng ◽  
Le Wang

2016 ◽  
Vol 45 (4) ◽  
pp. 410002 ◽  
Author(s):  
王睿 WANG Rui ◽  
李蕊 LI Rui ◽  
廉小亲 LIAN Xiao-qin

2017 ◽  
Vol 2017 ◽  
pp. 1-17 ◽  
Author(s):  
Zhenfei Gu ◽  
Mingye Ju ◽  
Dengyin Zhang

Outdoor images captured in bad weather often suffer from poor visibility, which is a serious problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation: the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation, along with a corresponding dehazing method. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the image into scenes according to haze density similarity. Then, to improve the accuracy of atmospheric light estimation, we define an effective weight assignment function that locates a candidate scene based on the segmentation results, thereby avoiding most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP), a statistic derived from extensive high-definition outdoor images. Using this prior together with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid and that the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.
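The restoration step described above builds on the classical atmospheric scattering model I(x) = J(x)t(x) + A(1 − t(x)), with transmission t(x) = exp(−βd(x)). A minimal sketch of that inversion, assuming the atmospheric light A and scattering coefficient β have already been estimated (the paper's haze-density segmentation and ASP estimator are not reproduced here):

```python
import numpy as np

def restore_scene(hazy, A, beta, depth, t_min=0.1):
    """Invert the classical atmospheric scattering model.

    hazy : HxWx3 float image in [0, 1]
    A    : estimated atmospheric light (3-vector)
    beta : estimated scattering coefficient
    depth: HxW scene depth map
    """
    t = np.exp(-beta * depth)[..., None]  # transmission t(x) = exp(-beta * d(x))
    t = np.maximum(t, t_min)              # clamp to avoid amplifying noise in dense haze
    J = (hazy - A) / t + A                # J(x) = (I(x) - A) / t(x) + A
    return np.clip(J, 0.0, 1.0)

# Synthesize a hazy image from a known clean one, then recover it.
rng = np.random.default_rng(0)
J_true = rng.uniform(0.2, 0.8, size=(4, 4, 3))
depth = np.full((4, 4), 2.0)
A, beta = np.array([0.9, 0.9, 0.9]), 0.4
t = np.exp(-beta * depth)[..., None]
I_hazy = J_true * t + A * (1 - t)
J_rec = restore_scene(I_hazy, A, beta, depth)
```

The round trip recovers the clean image exactly wherever t(x) stays above the clamp threshold, which is why accurate estimates of A and β matter so much in practice.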


Sensors ◽  
2020 ◽  
Vol 20 (18) ◽  
pp. 5300
Author(s):  
Renjie He ◽  
Xintao Guo ◽  
Zhongke Shi

Single image dehazing is a difficult problem because of its ill-posed nature, and it has recently attracted increasing attention owing to its high potential in many visual tasks. Although single image dehazing has made remarkable progress in recent years, most methods are designed for haze removal in daytime. Nighttime dehazing is more challenging: most daytime dehazing methods become invalid due to multiple scattering phenomena and non-uniformly distributed dim ambient illumination. While a few approaches have been proposed for nighttime image dehazing, low ambient light is largely ignored. In this paper, we propose a novel unified nighttime hazy image enhancement framework that addresses haze removal and illumination enhancement simultaneously. Specifically, our approach is the first to consider both halo artifacts caused by multiple scattering and the non-uniformly distributed ambient illumination present in low-light hazy conditions. More importantly, most current daytime dehazing methods can be effectively incorporated into the nighttime dehazing task through our framework. Firstly, we decompose the observed hazy image into a halo layer and a scene layer to remove the influence of multiple scattering. After that, we estimate the spatially varying ambient illumination based on Retinex theory. We then employ classic daytime dehazing methods to recover the scene radiance. Finally, we generate the dehazing result by combining the adjusted ambient illumination and the scene radiance. Compared with various daytime dehazing methods and state-of-the-art nighttime dehazing methods, both quantitative and qualitative experimental results on real-world and synthetic hazy image datasets demonstrate the superiority of our framework in terms of halo mitigation, visibility improvement, and color preservation.
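The Retinex-based illumination step in the pipeline above can be illustrated with a simple smoothed max-channel estimate. This is a common approximation, not the paper's exact estimator (the abstract does not specify the smoothing filter, so a box filter stands in for a Gaussian or guided filter here):

```python
import numpy as np

def box_blur(img, k=3):
    """Separable box filter with edge padding; a crude stand-in for a
    smoother low-pass filter such as a Gaussian or guided filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def estimate_illumination(img, k=3, eps=1e-6):
    """Retinex-style decomposition: the spatially varying illumination L
    is a smoothed max-channel map, and the reflectance is R = I / L."""
    L = box_blur(img.max(axis=2), k)   # ambient illumination estimate
    R = img / (L[..., None] + eps)     # scene reflectance layer
    return L, R

rng = np.random.default_rng(1)
img = rng.uniform(0.1, 0.9, size=(8, 8, 3))
L, R = estimate_illumination(img)
```

In the framework described above, a daytime dehazing method would then be applied to the reflectance-like scene layer before recombining it with an adjusted illumination map.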


Author(s):  
Hongyuan Zhu ◽  
Xi Peng ◽  
Vijay Chandrasekhar ◽  
Liyuan Li ◽  
Joo-Hwee Lim

Single image dehazing has been a classic topic in computer vision for years. Motivated by the atmospheric scattering model, satisfactory single image dehazing relies on the estimation of two physical parameters: the global atmospheric light and the transmission coefficient. Most existing methods employ a two-step pipeline to estimate these parameters with heuristics that accumulate errors and compromise dehazing quality. Inspired by differentiable programming, we re-formulate the atmospheric scattering model into a novel generative adversarial network (DehazeGAN). This reformulation and adversarial learning allow the two parameters to be learned simultaneously and automatically from data by optimizing the final dehazing performance, so that clean images with faithful colors and structures are directly produced. Moreover, our reformulation greatly improves the GAN's interpretability and quality for single image dehazing. To the best of our knowledge, our method is one of the first to explore the connection among generative adversarial models, image dehazing, and differentiable programming, which advances the theory and applications of these areas. Extensive experiments on synthetic and realistic data show that our method outperforms state-of-the-art methods in terms of PSNR, SSIM, and subjective visual quality.
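The core computational idea of such a reformulation can be sketched as a closed-form "physics layer": once A and t are predicted (given directly below; in DehazeGAN they would come from learned sub-networks, with gradients flowing through this layer during adversarial training), the clean image follows from inverting the scattering model. This is a hypothetical numpy sketch of the forward pass only, not the paper's network:

```python
import numpy as np

def dehaze_layer(I, A, t, t_min=0.05):
    """Closed-form physics layer: given hazy image I (HxWx3), predicted
    atmospheric light A (3-vector), and predicted transmission t (HxW),
    output the dehazed image J implied by the scattering model."""
    t = np.clip(t, t_min, 1.0)[..., None]        # HxW -> HxWx1, clamped
    return np.clip((I - A * (1.0 - t)) / t, 0.0, 1.0)

# Round trip: synthesize haze with known parameters, then invert.
rng = np.random.default_rng(2)
J_true = rng.uniform(0.2, 0.8, size=(5, 5, 3))
t = rng.uniform(0.3, 0.9, size=(5, 5))
A = np.array([0.85, 0.85, 0.85])
I_hazy = J_true * t[..., None] + A * (1 - t[..., None])
J_out = dehaze_layer(I_hazy, A, t)
```

Because every operation in this layer is differentiable (away from the clip boundaries), a generator predicting A and t can be trained end-to-end against the final dehazing loss rather than via a separate two-step heuristic pipeline.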

