Single Image Defogging Algorithm Based on Conditional Generative Adversarial Network

2020 · Vol 2020 · pp. 1-8
Author(s): Rui-Qiang Ma, Xing-Run Shen, Shan-Jun Zhang

Outdoors, images captured with a phone in foggy weather are unsuitable for automated processing because of their low contrast. Such images are usually restored with the dark channel prior (DCP) method (He et al., 2009), but bright non-sky regions remain degraded because haze is removed incorrectly there. In this paper, we propose a defog-based generative adversarial network (DbGAN). We train a generative adversarial network (GAN) and embed a target map (TM) in the generator of the adversarial network, so that only the bright-area layer of the image is used as a local attention model during deep-learning training and testing; this handles the incorrectly removed regions effectively and thus better restores the defogged image. Combined with the DCP method, this yields a good visual defogging effect, and the peak signal-to-noise ratio (PSNR) evaluation index is used as the judgment criterion; the simulation results are consistent with the visual effect. We show that DbGAN is a practical way to import a target map into a GAN. The algorithm defogs highlighted areas well, which compensates for the shortcomings of the DCP algorithm.
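The dark channel prior the abstract builds on (He et al., 2009) rests on the observation that haze-free outdoor images contain, in most local patches, pixels that are dark in at least one color channel; haze raises this minimum. A minimal NumPy sketch of just the dark-channel computation, not the authors' DbGAN (the function name and `patch` size are illustrative):

```python
import numpy as np

def dark_channel(img, patch=15):
    """Dark channel prior (He et al., 2009): per-pixel minimum over the
    RGB channels, followed by a minimum filter over a local patch."""
    h, w, _ = img.shape
    mins = img.min(axis=2)               # min over color channels
    pad = patch // 2
    padded = np.pad(mins, pad, mode='edge')
    out = np.empty_like(mins)
    for i in range(h):                   # min filter over each patch
        for j in range(w):
            out[i, j] = padded[i:i + patch, j:j + patch].min()
    return out
```

On a haze-free image the dark channel is close to zero almost everywhere; on a hazy image it is lifted toward the airmass brightness, which is what makes it usable as a haze estimate.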

Sensors · 2020 · Vol 20 (21) · pp. 6000
Author(s): Jiahao Chen, Chong Wu, Hu Chen, Peng Cheng

In this paper, we propose a new unsupervised attention-based cycle generative adversarial network to solve the problem of single-image dehazing. The proposed method adds an attention mechanism that can dehaze different areas to the earlier generative adversarial network (GAN) dehazing framework. This mechanism not only avoids altering haze-free areas, which the overall style migration of traditional GANs would otherwise change, but also attends to the varying haze concentrations that need to be removed, while retaining the details of the original image. To label haze concentrations and areas more accurately and quickly, we innovatively use training-enhanced dark channels as attention maps, combining the advantages of prior-based algorithms and deep learning. The proposed method does not require paired datasets, and it can adequately generate high-resolution images. Experiments demonstrate that our algorithm is superior to previous algorithms in various scenarios. The proposed algorithm can effectively process very hazy images, misty images, and haze-free images, which is of great significance for dehazing in complex scenes.
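The idea of using a dark channel as a soft attention map can be sketched in a few lines: pixels with a bright dark channel (dense haze) take more of the generator's output, while clear pixels keep the input. This is a simplified illustration, assuming a patch size of one and a hypothetical `dehazed` generator output; it is not the authors' trained network:

```python
import numpy as np

def attention_blend(hazy, dehazed):
    """Blend a dehazed estimate with the hazy input, using the
    per-pixel dark channel (patch size 1) as a soft attention map."""
    a = hazy.min(axis=2)                             # dark channel: min over RGB
    a = (a - a.min()) / (a.max() - a.min() + 1e-8)   # normalize to [0, 1]
    a = a[..., None]                                 # broadcast over channels
    return a * dehazed + (1.0 - a) * hazy            # change hazy regions only
```

The blend preserves haze-free regions by construction, which is the property the abstract highlights over whole-image style transfer.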


2021 · Vol 12 (6) · pp. 1-20
Author(s): Fayaz Ali Dharejo, Farah Deeba, Yuanchun Zhou, Bhagwan Das, Munsif Ali Jatoi, ...

Single-image super-resolution (SISR) produces a high-resolution image with fine spatial detail from a remotely sensed image of low spatial resolution. Recently, deep learning and generative adversarial networks (GANs) have made breakthroughs on this challenging task. However, the generated images still suffer from undesirable artifacts such as missing texture-feature representation and high-frequency information. We propose a frequency-domain spatio-temporal remote sensing SISR technique that reconstructs the HR image with GANs operating on various frequency bands (TWIST-GAN). The method incorporates Wavelet Transform (WT) characteristics with a transferred generative adversarial network: the LR image is split into frequency bands by the WT, the transferred GAN predicts the high-frequency components through the proposed architecture, and the inverse wavelet transform finally produces the super-resolved image. The model is first trained on the external DIV2K dataset and validated on the UC Merced Landsat remote sensing dataset and Set14, with each image sized 256 × 256. Transferred GANs are then used to process spatio-temporal remote sensing images to reduce computation cost and improve texture information. The findings are compared qualitatively and quantitatively with current state-of-the-art approaches. In addition, we save about 43% of GPU memory during training and accelerate the execution of our simplified version by eliminating batch-normalization layers.
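The wavelet split that TWIST-GAN builds on can be illustrated with a one-level 2D Haar transform: the LL band carries the low-frequency content, while LH/HL/HH carry the high-frequency detail the GAN is asked to predict. A minimal NumPy sketch of the band split and its inverse only; in the actual method a learned network replaces the high-frequency bands before inversion:

```python
import numpy as np

def haar_dwt2(x):
    """One-level 2D Haar DWT: split an image into LL (low-frequency)
    and LH/HL/HH (high-frequency detail) bands, each half-size."""
    a = (x[0::2] + x[1::2]) / 2.0        # row averages
    d = (x[0::2] - x[1::2]) / 2.0        # row details
    ll = (a[:, 0::2] + a[:, 1::2]) / 2.0
    lh = (a[:, 0::2] - a[:, 1::2]) / 2.0
    hl = (d[:, 0::2] + d[:, 1::2]) / 2.0
    hh = (d[:, 0::2] - d[:, 1::2]) / 2.0
    return ll, lh, hl, hh

def haar_idwt2(ll, lh, hl, hh):
    """Inverse transform: recombine the four bands into the image."""
    a = np.empty((ll.shape[0], ll.shape[1] * 2))
    a[:, 0::2] = ll + lh
    a[:, 1::2] = ll - lh
    d = np.empty_like(a)
    d[:, 0::2] = hl + hh
    d[:, 1::2] = hl - hh
    x = np.empty((a.shape[0] * 2, a.shape[1]))
    x[0::2] = a + d
    x[1::2] = a - d
    return x
```

Because the transform is invertible, any improvement the GAN makes to the high-frequency bands is carried losslessly into the reconstructed image.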
