PHC-GAN: Physical Constraint Generative Adversarial Network for Single Image Dehazing

Author(s): Gang Long, Wen Lu, Lin Zha, Hongyi Zhang
IEEE Access, 2019, Vol 7, pp. 173485-173498
Author(s): Wenhui Wang, Anna Wang, Qing Ai, Chen Liu, Jinglu Liu
2021, Vol 423, pp. 620-638
Author(s): Yan Zhao Su, Zhi Gao Cui, Chuan He, Ai Hua Li, Tao Wang, ...

Author(s): Hongyuan Zhu, Xi Peng, Vijay Chandrasekhar, Liyuan Li, Joo-Hwee Lim

Single image dehazing has been a classic topic in computer vision for years. Motivated by the atmospheric scattering model, satisfactory single image dehazing hinges on estimating two physical parameters, i.e., the global atmospheric light and the transmission coefficient. Most existing methods employ a two-step pipeline that estimates these two parameters with heuristics, which accumulates errors and compromises dehazing quality. Inspired by differentiable programming, we re-formulate the atmospheric scattering model into a novel generative adversarial network (DehazeGAN). This reformulation and adversarial learning allow the two parameters to be learned simultaneously and automatically from data by optimizing the final dehazing performance, so that clean images with faithful color and structure are directly produced. Moreover, our reformulation also greatly improves the GAN's interpretability and quality for single image dehazing. To the best of our knowledge, our method is one of the first works to explore the connection among generative adversarial models, image dehazing, and differentiable programming, which advances the theory and applications of these areas. Extensive experiments on synthetic and realistic data show that our method outperforms state-of-the-art methods in terms of PSNR, SSIM, and subjective visual quality.
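For context (this is background, not text from the paper): the atmospheric scattering model the abstract refers to is I(x) = J(x)·t(x) + A·(1 − t(x)), where I is the observed hazy image, J the scene radiance, A the global atmospheric light, and t the transmission map. A minimal NumPy sketch of inverting it once A and t are estimated; function and variable names are illustrative, not from the paper:

```python
import numpy as np

def dehaze(I, A, t, t_min=0.1):
    """Invert the atmospheric scattering model I = J*t + A*(1 - t)
    to recover the scene radiance J.

    I : (H, W, 3) hazy image
    A : (3,) global atmospheric light
    t : (H, W) per-pixel transmission map
    """
    t = np.maximum(t, t_min)          # clamp t to avoid division blow-up
    return (I - A) / t[..., None] + A  # broadcast t over the color axis
```

Estimating A and t well is the hard part; the abstract's point is that DehazeGAN learns both jointly instead of via a heuristic two-step pipeline.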


Sensors, 2020, Vol 20 (21), pp. 6000
Author(s): Jiahao Chen, Chong Wu, Hu Chen, Peng Cheng

In this paper, we propose a new unsupervised attention-based cycle generative adversarial network to solve the problem of single-image dehazing. The proposed method builds on previous generative adversarial network (GAN) dehazing methods by adding an attention mechanism that can dehaze different areas to different degrees. This mechanism not only avoids altering haze-free areas, as the overall style migration of traditional GANs would, but also adapts to the varying haze concentrations that need to be changed, while retaining the details of the original image. To label the concentrations and areas of haze more accurately and quickly, we innovatively use training-enhanced dark channels as attention maps, combining the advantages of prior-based algorithms and deep learning. The proposed method does not require paired datasets, and it can adequately generate high-resolution images. Experiments demonstrate that our algorithm is superior to previous algorithms in various scenarios. The proposed algorithm can effectively process very hazy images, misty images, and haze-free images, which is of great significance for dehazing in complex scenes.
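The dark channel the abstract uses as an attention map is a standard prior: for each pixel, take the minimum intensity over the color channels, then a local minimum filter over a patch; hazy regions yield high values. A minimal NumPy sketch, with patch size and names chosen for illustration (the paper's training-enhanced variant refines this further):

```python
import numpy as np

def dark_channel(image, patch=15):
    """Dark channel of an (H, W, 3) image in [0, 1].

    Per-pixel minimum over the RGB channels, followed by a local
    minimum filter over a patch x patch window (edge-padded).
    High values roughly indicate dense haze, so the map can serve
    as a haze-density attention map.
    """
    min_rgb = image.min(axis=2)                 # channel-wise minimum
    h, w = min_rgb.shape
    pad = patch // 2
    padded = np.pad(min_rgb, pad, mode='edge')  # replicate borders
    out = np.empty_like(min_rgb)
    for i in range(h):
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

The double loop keeps the sketch dependency-free; in practice a vectorized minimum filter (e.g. `scipy.ndimage.minimum_filter`) would be used.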

