FD-GAN: Generative Adversarial Networks with Fusion-Discriminator for Single Image Dehazing

2020
Vol 34 (07)
pp. 10729-10736
Author(s):
Yu Dong
Yihao Liu
He Zhang
Shifeng Chen
Yu Qiao

Recently, convolutional neural networks (CNNs) have achieved great improvements in single image dehazing and attracted much attention in research. Most existing learning-based dehazing methods are not fully end-to-end: they still follow the traditional dehazing procedure of first estimating the medium transmission and the atmospheric light, then recovering the haze-free image from the atmospheric scattering model. However, in practice, due to the lack of priors and constraints, it is hard to estimate these intermediate parameters precisely. Inaccurate estimation further degrades dehazing performance, resulting in artifacts, color distortion and insufficient haze removal. To address this, we propose a fully end-to-end Generative Adversarial Network with a Fusion-discriminator (FD-GAN) for image dehazing. With the proposed Fusion-discriminator, which takes frequency information as an additional prior, our model can generate more natural and realistic dehazed images with less color distortion and fewer artifacts. Moreover, we synthesize a large-scale training dataset of various indoor and outdoor hazy images to boost performance, and we show that the performance of learning-based dehazing methods is strongly influenced by the training data. Experiments show that our method reaches state-of-the-art performance on both public synthetic datasets and real-world images, with more visually pleasing dehazed results.
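For context, the intermediate-parameter pipeline criticized above inverts the standard atmospheric scattering model, which is not restated in the abstract but is commonly written as:

```latex
% Atmospheric scattering model: I = observed hazy image, J = haze-free scene radiance,
% t = medium transmission, A = global atmospheric light,
% beta = scattering coefficient, d = scene depth.
I(x) = J(x)\,t(x) + A\bigl(1 - t(x)\bigr), \qquad t(x) = e^{-\beta\, d(x)}
```

Non-end-to-end methods estimate t and A and then solve this equation for J, whereas FD-GAN maps the hazy input directly to the dehazed output.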

2020
Vol 2020 (1)
pp. 74-77
Author(s):
Simone Bianco
Luigi Celona
Flavio Piccoli

In this work we propose a method for single image dehazing that exploits a physical model to recover the haze-free image by estimating the atmospheric scattering parameters. Cycle consistency is used to further improve the reconstruction quality of local structures and objects in the scene. Experimental results on four real and synthetic hazy image datasets show the effectiveness of the proposed method in terms of two commonly used full-reference image quality metrics.
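The abstract does not state the loss; a generic cycle-consistency term of the kind it refers to, assuming a dehazing mapping G and a re-hazing mapping F (illustrative notation, not the authors'), is:

```latex
% Generic cycle-consistency term between a dehazing map G and a re-hazing map F
% over hazy images I (illustrative formulation).
\mathcal{L}_{\text{cyc}} = \mathbb{E}_{I}\bigl[\, \lVert F(G(I)) - I \rVert_1 \,\bigr]
```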


Author(s):  
Lucas Teixeira Goncalves
Joel Felipe de Oliveira Gaya
Paulo Jorge Lilles Drews Junior
Silvia Silva da Costa Botelho

Author(s):  
Zhenjian Yang
Jiamei Shang
Zhongwei Zhang
Yan Zhang
Shudong Liu

Traditional image dehazing algorithms based on prior knowledge and deep learning rely on the atmospheric scattering model and tend to cause color distortion and incomplete dehazing. To solve these problems, an end-to-end image dehazing algorithm based on a residual attention mechanism is proposed in this paper. The network includes four modules: encoder, multi-scale feature extraction, feature fusion and decoder. The encoder module encodes the input hazy image into a feature map, which is convenient for subsequent feature extraction and reduces memory consumption; the multi-scale feature extraction module includes a residual smoothed dilated convolution module, a residual block and efficient channel attention, which expand the receptive field and extract features at different scales by filtering and weighting; the feature fusion module with efficient channel attention adjusts channel weights dynamically, acquires rich contextual information and suppresses redundant information, enhancing the network's ability to extract the haze density image; finally, the decoder module maps the fused features nonlinearly to obtain the haze density image and then restores the haze-free image. Qualitative and quantitative tests on the SOTS test set and natural hazy images show good objective and subjective evaluation results. The algorithm effectively alleviates color distortion and incomplete dehazing.
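The efficient channel attention used in the multi-scale extraction and feature fusion modules is not given in code here; a minimal PyTorch-style sketch of a generic ECA block (the kernel size of 3 is an assumption, and this is not the authors' implementation) could look like:

```python
import torch
import torch.nn as nn

class ECABlock(nn.Module):
    """Efficient channel attention: reweights channels using a cheap 1-D conv
    over globally pooled channel descriptors, without dimensionality reduction."""
    def __init__(self, kernel_size: int = 3):
        super().__init__()
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.conv = nn.Conv1d(1, 1, kernel_size, padding=kernel_size // 2, bias=False)
        self.sigmoid = nn.Sigmoid()

    def forward(self, x):                                   # x: (B, C, H, W)
        y = self.pool(x)                                    # (B, C, 1, 1) channel descriptors
        y = self.conv(y.squeeze(-1).transpose(1, 2))        # 1-D conv across channels: (B, 1, C)
        y = self.sigmoid(y.transpose(1, 2).unsqueeze(-1))   # back to (B, C, 1, 1) weights
        return x * y                                        # channel-wise reweighting
```

Because the 1-D convolution acts on globally pooled channel descriptors, the block adds only a handful of parameters while still allowing the dynamic channel reweighting described above.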


2020
Vol 8 (2)
pp. 185-194
Author(s):
Xiaochun Wang
Xiangdong Sun
Ruixia Song

Single image dehazing algorithms based on the dark channel prior may cause block effects and color distortion. To address these limitations, this paper proposes a single image dehazing algorithm based on the V-transform and the dark channel prior, in which a hazy RGB image is converted into the HSI color space and each of the H, S and I components is processed separately. The hue component H remains unchanged, and the saturation component S is stretched after being denoised by a median filter. For the intensity component I, a quad-tree algorithm is used to estimate the atmospheric light, and the dark channel prior together with the V-transform is used to estimate the transmission map. To reduce the computational complexity, the intensity component I is first decomposed by the V-transform, a coarse transmission map is then estimated by applying the dark channel prior to the low-frequency reconstructed image, and a guided filter is finally employed to refine the coarse transmission map. For images with sky regions, the haze removal effectiveness can be greatly improved simply by increasing the minimum value of the transmission map. The proposed algorithm has low time complexity and performs well on a wide variety of images. The recovered images have more natural color and less color distortion compared with some state-of-the-art methods.
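The sketch below (NumPy/OpenCV, with omega and the transmission lower bound t_min as assumed constants) only illustrates the standard dark-channel transmission estimate that the method builds on, including the simple lower-bound clamp mentioned for sky regions:

```python
import cv2
import numpy as np

def dark_channel(img: np.ndarray, patch: int = 15) -> np.ndarray:
    """Per-pixel minimum over the color channels followed by a min filter over a patch."""
    min_rgb = img.min(axis=2)
    kernel = cv2.getStructuringElement(cv2.MORPH_RECT, (patch, patch))
    return cv2.erode(min_rgb, kernel)

def estimate_transmission(hazy: np.ndarray, A: np.ndarray,
                          omega: float = 0.95, t_min: float = 0.1) -> np.ndarray:
    """Coarse transmission from the dark channel prior: t = 1 - omega * dark(I / A).
    Clamping with t_min is the lower-bound trick mentioned for sky regions."""
    normalized = hazy / np.maximum(A, 1e-6)   # divide each channel by its atmospheric light
    t = 1.0 - omega * dark_channel(normalized)
    return np.clip(t, t_min, 1.0)
```

Here `hazy` is assumed to be a float RGB image in [0, 1] of shape (H, W, 3) and `A` the per-channel atmospheric light, e.g. from the quad-tree estimate; in the paper the prior is applied to the low-frequency V-transform reconstruction of the intensity channel and the result is refined with a guided filter.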


2014
Vol 2014
pp. 1-13
Author(s):
Dong Nan
Du-yan Bi
Chang Liu
Shi-ping Ma
Lin-yuan He

Existing single image dehazing algorithms only address haze removal, not denoising. To solve this problem, a Bayesian framework for single image dehazing that accounts for noise is proposed. First, the Bayesian framework is reformulated for the dehazing task. Then, the probability density function of the improved atmospheric scattering model is estimated using statistical priors and objective assumptions about the degraded image. Finally, the reflectance image is obtained by an iterative approach with feedback that balances dehazing and denoising. Experimental results demonstrate that the proposed method removes haze and noise simultaneously and effectively.
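The abstract does not state the exact objective; a generic MAP formulation of the kind such Bayesian dehazing-with-denoising methods iterate on (symbols are illustrative, not the authors' notation) is:

```latex
% Generic MAP objective: I = observed hazy, noisy image, J = recovered image.
% The likelihood term models haze formation plus sensor noise,
% and the prior term regularizes the recovered scene radiance.
J^{*} = \arg\max_{J}\, p(J \mid I)
      = \arg\min_{J}\, \bigl\{ -\log p(I \mid J) \;-\; \log p(J) \bigr\}
```

An iterative scheme with feedback, as described above, can then trade off the likelihood-driven dehazing update against the prior-driven denoising update.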


2021
Vol 2021 (1)
pp. 16-20
Author(s):
Apostolia Tsirikoglou
Marcus Gladh
Daniel Sahlin
Gabriel Eilertsen
Jonas Unger

This paper presents an evaluation of how data augmentation and inter-class transformations can be used to synthesize training data in low-data scenarios for single-image weather classification. In such scenarios, augmentation is a critical component, but there is a limit to how much improvement can be gained using classical augmentation strategies. Generative adversarial networks (GANs) have been demonstrated to generate impressive results, and have also been successful as a tool for data augmentation, but mostly for images of limited diversity, such as in medical applications. We investigate the possibility of using generative augmentations to balance a small weather classification dataset in which one class has a reduced number of images. We compare intra-class augmentations, by means of classical transformations as well as noise-to-image GANs, to inter-class augmentations, where images from another class are transformed to the underrepresented class. The results show that it is possible to take advantage of GANs for inter-class augmentation to balance a small dataset for weather classification. This opens up future work on GAN-based augmentation in scenarios where data is both diverse and scarce.
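As a rough illustration of the balancing step (not the authors' pipeline), topping up the underrepresented class with GAN-generated samples might look like the following PyTorch sketch, where `generator` and the 128-dimensional latent space are assumptions:

```python
import torch

def balance_class(real_images: torch.Tensor, target_count: int, generator) -> torch.Tensor:
    """Top up an underrepresented class with synthetic samples until it reaches
    target_count. `generator` is any trained model mapping latent codes to images
    of the minority class (an assumed interface); inter-class augmentation would
    instead feed images from another class through an image-to-image translation model."""
    deficit = target_count - real_images.shape[0]
    if deficit <= 0:
        return real_images                       # class already balanced
    z = torch.randn(deficit, 128)                # assumed 128-dim latent space
    with torch.no_grad():
        fake_images = generator(z)               # expected shape: (deficit, C, H, W)
    return torch.cat([real_images, fake_images], dim=0)
```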

