Residual Spatial and Channel Attention Networks for Single Image Dehazing

Sensors, 2021, Vol. 21 (23), pp. 7922
Author(s): Xin Jiang, Chunlei Zhao, Ming Zhu, Zhicheng Hao, Wen Gao

Single image dehazing is a highly challenging ill-posed problem. Existing methods, both prior-based and learning-based, rely heavily on the conceptually simplified atmospheric scattering model, estimating the so-called medium transmission map and atmospheric light. However, the formation of haze in the real world is much more complicated, and inaccurate estimations further degrade dehazing performance, causing color distortion, artifacts, and insufficient haze removal. Moreover, most dehazing networks treat spatial-wise and channel-wise features equally, yet haze is in practice unevenly distributed across an image, so regions with different haze concentrations require different attention. To solve these problems, we propose an end-to-end trainable, densely connected residual spatial and channel attention network based on the conditional generative adversarial framework that directly restores a haze-free image from an input hazy image, without explicit estimation of any atmospheric scattering parameters. Specifically, a novel residual attention module is proposed by combining spatial and channel attention mechanisms; it adaptively recalibrates spatial-wise and channel-wise feature weights by considering interdependencies among spatial and channel information. This mechanism allows the network to concentrate on more useful pixels and channels. Meanwhile, the dense connections maximize the information flow among features from different levels to encourage feature reuse and strengthen feature propagation. In addition, the network is trained with a multi-term loss function in which a contrastive loss and a registration loss are newly refined to restore sharper structures and ensure better visual quality. Experimental results demonstrate that the proposed method achieves state-of-the-art performance on both public synthetic datasets and real-world images, with more visually pleasing dehazed results.
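A minimal PyTorch sketch of a residual block that combines channel and spatial attention in the spirit described above; the layer sizes, reduction ratio, and exact wiring are illustrative assumptions rather than the authors' architecture:

```python
# Hedged sketch: a generic residual block with channel attention
# (squeeze-and-excitation style) followed by spatial attention.
import torch
import torch.nn as nn

class ResidualSpatialChannelAttention(nn.Module):
    def __init__(self, channels, reduction=8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        # Channel attention: global pooling -> bottleneck -> sigmoid weights.
        self.channel_att = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(channels, channels // reduction, 1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels // reduction, channels, 1),
            nn.Sigmoid(),
        )
        # Spatial attention: a single-channel per-pixel weight map.
        self.spatial_att = nn.Sequential(
            nn.Conv2d(channels, 1, 7, padding=3),
            nn.Sigmoid(),
        )

    def forward(self, x):
        f = self.body(x)
        f = f * self.channel_att(f)   # recalibrate channel-wise features
        f = f * self.spatial_att(f)   # recalibrate spatial-wise features
        return x + f                  # residual connection
```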

Complexity, 2021, Vol. 2021, pp. 1-13
Author(s): Suting Chen, Wenhao Fan, Shaw Peter, Chuang Zhang, Kui Chen, ...

Inspired by the application of CycleGAN to the image style transfer problem (Zhu et al., 2017), this paper proposes an end-to-end network, DefogNet, for single-image dehazing that treats dehazing as a style conversion from a fogged image to a non-fogged image, without needing to estimate a priori information from an atmospheric scattering model. DefogNet improves on CycleGAN by adding a cross-layer connection structure in the generator to enhance the network's multiscale feature extraction capability. The loss function is redesigned with an added detail perception loss and color perception loss to improve the quality of texture recovery and produce better fog-free images. The paper also presents the novel Defog-SN algorithm, which adds a spectral normalization layer to the discriminator's convolution layers so that the discriminant network satisfies a 1-Lipschitz constraint, further improving the model's stability. The experiments are conducted on the O-HAZE, I-HAZE, and RESIDE datasets. The dehazing results show that the method outperforms traditional methods in terms of PSNR and SSIM on synthetic datasets and in terms of average gradient and entropy on natural images.
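The 1-Lipschitz constraint on the discriminator can be illustrated with PyTorch's built-in spectral normalization; the layer widths below are assumptions for a generic patch discriminator, not DefogNet's actual design:

```python
# Hedged sketch: spectrally normalized discriminator convolutions, so the
# critic is (approximately) 1-Lipschitz as in the Defog-SN description.
import torch.nn as nn
from torch.nn.utils import spectral_norm

def sn_conv(in_ch, out_ch, stride=2):
    return spectral_norm(nn.Conv2d(in_ch, out_ch, 4, stride=stride, padding=1))

discriminator = nn.Sequential(
    sn_conv(3, 64),    nn.LeakyReLU(0.2, inplace=True),
    sn_conv(64, 128),  nn.LeakyReLU(0.2, inplace=True),
    sn_conv(128, 256), nn.LeakyReLU(0.2, inplace=True),
    sn_conv(256, 1, stride=1),  # patch-level real/fake scores
)
```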


2020, Vol. 2020 (1), pp. 74-77
Author(s): Simone Bianco, Luigi Celona, Flavio Piccoli

In this work we propose a method for single image dehazing that exploits a physical model to recover the haze-free image by estimating the atmospheric scattering parameters. Cycle consistency is further used to improve the reconstruction quality of local structures and objects in the scene. Experimental results on four real and synthetic hazy image datasets show the effectiveness of the proposed method in terms of two commonly used full-reference image quality metrics.
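For reference, the standard atmospheric scattering model whose parameters such physically based methods estimate is as follows, with I the hazy observation, J the scene radiance, A the global atmospheric light, t the medium transmission, β the scattering coefficient, and d the scene depth:

```latex
\[
  I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta d(x)}
\]
% Once t and A are estimated, the haze-free image follows in closed form:
\[
  J(x) = \frac{I(x) - A}{t(x)} + A .
\]
```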


2020, Vol. 34 (07), pp. 10729-10736
Author(s): Yu Dong, Yihao Liu, He Zhang, Shifeng Chen, Yu Qiao

Recently, convolutional neural networks (CNNs) have achieved great improvements in single image dehazing and attracted much research attention. Most existing learning-based dehazing methods are not fully end-to-end and still follow the traditional dehazing procedure: first estimate the medium transmission and the atmospheric light, then recover the haze-free image based on the atmospheric scattering model. In practice, however, due to the lack of priors and constraints, it is hard to estimate these intermediate parameters precisely. Inaccurate estimation further degrades dehazing performance, resulting in artifacts, color distortion, and insufficient haze removal. To address this, we propose a fully end-to-end Generative Adversarial Network with a Fusion-discriminator (FD-GAN) for image dehazing. With the proposed Fusion-discriminator, which takes frequency information as an additional prior, our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts. Moreover, we synthesize a large-scale training dataset including various indoor and outdoor hazy images to boost performance, and we show that for learning-based dehazing methods the performance is strongly influenced by the training data. Experiments show that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images, with more visually pleasing dehazed results.
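One plausible way to supply the frequency information such a discriminator could take as an extra prior is to decompose each image into low-frequency and high-frequency components; the Gaussian decomposition below is an illustrative reading, not the authors' exact design:

```python
# Hedged sketch: split an image into a blurred low-frequency component
# and a high-frequency residual. The components could then be
# concatenated with the image before being fed to the discriminator,
# e.g. torch.cat([img, low, high], dim=1).
import torch
import torch.nn.functional as F

def frequency_components(img, kernel_size=15, sigma=5.0):
    # Build a 2-D Gaussian kernel and apply it per channel (depthwise).
    coords = torch.arange(kernel_size, dtype=torch.float32) - kernel_size // 2
    g = torch.exp(-coords ** 2 / (2 * sigma ** 2))
    g = (g / g.sum()).view(1, 1, -1)
    kernel = (g.transpose(1, 2) @ g).view(1, 1, kernel_size, kernel_size)
    kernel = kernel.repeat(img.shape[1], 1, 1, 1)
    low = F.conv2d(img, kernel, padding=kernel_size // 2, groups=img.shape[1])
    high = img - low
    return low, high
```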


2014, Vol. 2014, pp. 1-13
Author(s): Dong Nan, Du-yan Bi, Chang Liu, Shi-ping Ma, Lin-yuan He

Existing single image dehazing algorithms address dehazing but not denoising. To solve this problem, a Bayesian framework for single image dehazing that accounts for noise is proposed. First, the Bayesian framework is adapted to the dehazing problem. Then, the probability density function of the improved atmospheric scattering model is estimated using statistical priors and objective assumptions about the degraded image. Finally, the reflectance image is obtained by an iterative approach with feedback that balances dehazing against denoising. Experimental results demonstrate that the proposed method removes haze and noise simultaneously and effectively.
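An illustrative MAP-style formulation of this kind of joint dehazing-and-denoising framework (not the paper's exact derivation), with I the noisy hazy image, J the reflectance to recover, t the transmission, A the atmospheric light, R an image prior, and λ a balancing weight:

```latex
\[
  \hat{J} = \arg\max_{J}\; p(J \mid I) = \arg\max_{J}\; p(I \mid J)\, p(J)
\]
% Taking the negative log gives an energy minimised iteratively,
% trading haze removal against noise suppression:
\[
  \hat{J} = \arg\min_{J}\;
    \underbrace{\bigl\| I - J\,t - A(1 - t) \bigr\|^{2}}_{\text{data / dehazing}}
    \;+\; \lambda\, \underbrace{R(J)}_{\text{prior / denoising}} .
\]
```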


2018, Vol. 7 (02), pp. 23578-23584
Author(s): Miss. Anjana Navale, Prof. Namdev Sawant, Prof. Umaji Bagal

Single image haze removal has been a challenging problem due to its ill-posed nature. In this paper, we use a simple but powerful color attenuation prior for haze removal from a single input hazy image. By creating a linear model of the scene depth of the hazy image under this prior and learning the model parameters with a supervised learning method, the depth information can be recovered well. With the depth map of the hazy image, we can easily estimate the transmission and restore the scene radiance via the atmospheric scattering model, and thus effectively remove the haze from a single image. Experimental results show that the proposed approach outperforms state-of-the-art haze removal algorithms in terms of both efficiency and the dehazing effect.
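A compact sketch of the color attenuation prior pipeline described above; the coefficients are the values commonly reported for this prior and the simple atmospheric light estimate is an assumption, so treat both as illustrative:

```python
# Hedged sketch: depth as a learned linear function of brightness and
# saturation, transmission from depth, and radiance recovery via the
# atmospheric scattering model.
import cv2
import numpy as np

def dehaze_color_attenuation(img_bgr, beta=1.0, t_min=0.1,
                             theta=(0.121779, 0.959710, -0.780245)):
    img = img_bgr.astype(np.float32) / 255.0
    hsv = cv2.cvtColor(img, cv2.COLOR_BGR2HSV)
    s, v = hsv[..., 1], hsv[..., 2]
    depth = theta[0] + theta[1] * v + theta[2] * s        # linear depth model
    t = np.exp(-beta * depth)                             # transmission map
    # Crude atmospheric light: colour of the deepest (most hazy) pixel.
    idx = np.unravel_index(np.argmax(depth), depth.shape)
    A = img[idx]
    J = (img - A) / np.maximum(t, t_min)[..., None] + A   # scene radiance
    return np.clip(J, 0, 1)
```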


2017, Vol. 2017, pp. 1-17
Author(s): Zhenfei Gu, Mingye Ju, Dengyin Zhang

Outdoor images captured in bad weather typically suffer from poor visibility, which is a serious problem for most computer vision applications. The majority of existing dehazing methods rely on an atmospheric scattering model and therefore share a common limitation: the model is only valid when the atmosphere is homogeneous. In this paper, we propose an improved atmospheric scattering model to overcome this inherent limitation, together with a corresponding dehazing method. In this method, we first create a haze density distribution map of a hazy image, which enables us to segment the hazy image into scenes according to haze density similarity. Then, to improve the accuracy of atmospheric light estimation, we define an effective weight assignment function to locate a candidate scene based on the segmentation results and thereby avoid most potential errors. Next, we propose a simple but powerful prior named the average saturation prior (ASP), which is a statistic of extensive high-definition outdoor images. Using this prior combined with the improved atmospheric scattering model, we can directly estimate the scene atmospheric scattering coefficient and restore the scene albedo. The experimental results verify that our model is physically valid, and the proposed method outperforms several state-of-the-art single image dehazing methods in terms of both robustness and effectiveness.
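As a small illustration, the average saturation statistic underlying the ASP can be computed per segmented scene as below; how the authors map this statistic to the scattering coefficient is not reproduced here:

```python
# Hedged sketch: per-scene average saturation, the quantity the ASP
# compares against its statistic over haze-free outdoor images.
import cv2
import numpy as np

def average_saturation(img_bgr, mask=None):
    hsv = cv2.cvtColor(img_bgr, cv2.COLOR_BGR2HSV)
    s = hsv[..., 1].astype(np.float32) / 255.0
    return float(s[mask].mean()) if mask is not None else float(s.mean())
```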


Author(s): Hongyuan Zhu, Xi Peng, Vijay Chandrasekhar, Liyuan Li, Joo-Hwee Lim

Single image dehazing has been a classic topic in computer vision for years. Motivated by the atmospheric scattering model, satisfactory single image dehazing hinges on the estimation of two physical parameters, i.e., the global atmospheric light and the transmission coefficient. Most existing methods employ a two-step pipeline to estimate these two parameters with heuristics, which accumulates errors and compromises dehazing quality. Inspired by differentiable programming, we re-formulate the atmospheric scattering model into a novel generative adversarial network (DehazeGAN). This reformulation and adversarial learning allow the two parameters to be learned simultaneously and automatically from data by optimizing the final dehazing performance, so that clean images with faithful color and structure are produced directly. Moreover, our reformulation greatly improves the GAN's interpretability and quality for single image dehazing. To the best of our knowledge, our method is one of the first works to explore the connection among generative adversarial models, image dehazing, and differentiable programming, which advances the theory and application of these areas. Extensive experiments on synthetic and realistic data show that our method outperforms state-of-the-art methods in terms of PSNR, SSIM, and subjective visual quality.
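A hedged sketch of what re-formulating the atmospheric scattering model as a differentiable module can look like: two sub-networks (assumed here, not the authors' exact architectures) predict t and A, and the haze-free estimate J = (I − A)/t + A is composed in closed form so everything trains end to end against the final dehazing loss:

```python
# Hedged sketch: a generator whose output is composed by the physics
# model from predicted transmission and atmospheric light.
import torch
import torch.nn as nn

class PhysicsGenerator(nn.Module):
    def __init__(self, t_net: nn.Module, a_net: nn.Module, t_min=0.05):
        super().__init__()
        self.t_net, self.a_net, self.t_min = t_net, a_net, t_min

    def forward(self, hazy):
        t = self.t_net(hazy).clamp(min=self.t_min)   # assumed (N,1,H,W) transmission
        A = self.a_net(hazy).view(-1, 3, 1, 1)        # assumed (N,3) global airlight
        return ((hazy - A) / t + A).clamp(0, 1)       # J = (I - A)/t + A
```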


The quality of images captured in the presence of fog and haze is degraded by atmospheric scattering. Several dehazing algorithms have been proposed to restore such images, but they sometimes produce either a contrast-distorted dehazed image or a dehazed image that is still influenced by dense haze. To solve this problem, a dynamic facsimile dehazing system built on minimum white balance optimization is proposed. The system integrates several well-known single image dehazing algorithms, enhances their outputs using histogram-based and adaptive histogram-based enhancement, and then adaptively selects the output with minimum white balance distortion as the optimum result. Experimental results demonstrate that the presented system attains a better dehazing effect and further improves the universality of dehazing methods. The proposed system also improves the luminance and contrast of dehazed images to a certain extent.
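One illustrative way to score white balance distortion for the adaptive selection step is the gray-world deviation of each enhanced candidate; the metric below is an assumption, not the paper's exact measure:

```python
# Hedged sketch: pick the candidate whose mean colour deviates least
# from neutral gray (gray-world assumption).
import numpy as np

def gray_world_deviation(img):
    # img: float array in [0, 1], shape (H, W, 3)
    means = img.reshape(-1, 3).mean(axis=0)
    return float(np.abs(means - means.mean()).sum())

def select_best(candidates):
    return min(candidates, key=gray_world_deviation)
```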

