LighterGAN: An Illumination Enhancement Method for Urban UAV Imagery

2021
Vol 13 (7)
pp. 1371
Author(s):  
Junshu Wang ◽  
Yue Yang ◽  
Yuan Chen ◽  
Yuxing Han

In unmanned aerial vehicle (UAV) based urban observation and monitoring, the performance of computer vision algorithms is inevitably limited by degradation caused by low illumination and light pollution, so image enhancement is an essential prerequisite for subsequent image processing algorithms. We therefore propose LighterGAN, a deep learning model for UAV low-illumination image enhancement based on generative adversarial networks. The design of LighterGAN follows the CycleGAN model, with two improvements, an attention mechanism and a semantic consistency loss, added to the original structure. An unpaired dataset captured by urban UAV aerial photography was used to train this unsupervised learning model. To explore the benefits of these improvements, both the illumination enhancement performance and the improved generalization ability of LighterGAN were demonstrated in comparative experiments combining subjective and objective evaluations. In experiments against five cutting-edge image enhancement algorithms on the test set, LighterGAN achieved the best results in both visual perception and PIQE (perception-based image quality evaluator, a MATLAB built-in function; the lower the score, the higher the image quality), with scores of 4.91 and 11.75 respectively, surpassing the state-of-the-art EnlightenGAN. On the low-illumination sub-dataset Y (2000 images), LighterGAN also achieved the lowest PIQE score of 12.37, 2.85 points below second place. Compared with CycleGAN, the improvement in generalization ability was also demonstrated: on images generated from the test set, LighterGAN scored 6.66 percentage points higher than CycleGAN in subjective authenticity assessment and 3.84 points lower in PIQE, while on images generated from the whole dataset, LighterGAN's PIQE score was 11.67, 4.86 points lower than CycleGAN's.
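
The semantic consistency loss is described only at a high level above. A minimal sketch of one plausible realization, assuming a frozen pretrained VGG16 feature extractor (the choice of network, layer, and loss form are assumptions for illustration, not the authors' published configuration):

```python
# Hypothetical sketch of a semantic consistency loss in the spirit of
# LighterGAN's description: features of the input and the enhanced image,
# taken from a fixed pretrained encoder, are pushed to agree so that
# enhancement changes illumination but not semantic content.
# VGG16 up to relu3_3 is an assumption; input normalization is omitted.
import torch
import torch.nn as nn
import torchvision.models as models

class SemanticConsistencyLoss(nn.Module):
    def __init__(self):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features[:16]
        for p in vgg.parameters():
            p.requires_grad = False  # frozen feature extractor
        self.encoder = vgg.eval()
        self.criterion = nn.L1Loss()

    def forward(self, low_light, enhanced):
        # Compare deep features rather than raw pixels, so brightness
        # changes are tolerated while content drift is penalized.
        return self.criterion(self.encoder(low_light), self.encoder(enhanced))
```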

2021
Vol 9 (7)
pp. 691
Author(s):  
Kai Hu ◽  
Yanwen Zhang ◽  
Chenghang Weng ◽  
Pengsheng Wang ◽  
Zhiliang Deng ◽  
...  

When underwater vehicles operate, underwater images often suffer from light absorption and from scattering and diffusion by suspended particles, which degrades the images. The generative adversarial network (GAN) is widely used in underwater image enhancement tasks because it can perform image-style conversion with high efficiency and high quality. Although a GAN can convert low-quality underwater images into high-quality ones, the quality of the generated images is bounded by the ground-truth (reference) images in the dataset; because these reference images have not themselves been fully enhanced, the generated images can be correspondingly poor. This paper therefore proposes adding the natural image quality evaluation (NIQE) index to the GAN to give the generated images higher contrast, make them more consistent with human visual perception, and allow them to surpass the reference images in the existing dataset. Several groups of comparative experiments, assessed with both subjective and objective evaluation indicators, verify that the images enhanced by this algorithm are better than the reference images in the existing dataset.
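
The abstract does not specify how NIQE enters training. A minimal sketch of one way to fold a no-reference score into the generator objective, with `niqe_fn` standing in for any differentiable NIQE surrogate and the weights chosen purely for illustration:

```python
# Illustrative sketch: adding a no-reference quality index such as NIQE
# as a penalty term in a GAN generator objective. `niqe_fn` is a
# stand-in for a differentiable NIQE implementation or surrogate;
# adv_weight and niqe_weight are assumed values, not the paper's.
import torch

def generator_loss(disc_out_fake, fake_images, niqe_fn,
                   adv_weight=1.0, niqe_weight=0.1):
    # Standard non-saturating adversarial term: fool the discriminator.
    adv = torch.nn.functional.binary_cross_entropy_with_logits(
        disc_out_fake, torch.ones_like(disc_out_fake))
    # Lower NIQE means higher perceptual quality, so the score is
    # added directly as a penalty to be minimized.
    niqe = niqe_fn(fake_images).mean()
    return adv_weight * adv + niqe_weight * niqe
```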


Author(s):  
Lingyu Yan ◽  
Jiarun Fu ◽  
Chunzhi Wang ◽  
Zhiwei Ye ◽  
Hongwei Chen ◽  
...  

Abstract With the development of image recognition technology, faces, body shapes, and other features have been widely used as identification labels, which brings great convenience to daily life. However, image recognition places much higher demands on image conditions than traditional identification methods such as passwords. Image enhancement therefore plays an important role in analyzing noisy images, and low-light images are the focus of our research. In this paper, a low-light image enhancement method based on a Generative Adversarial Network (GAN) optimized with an enhancement network module is proposed. The proposed method first applies the enhancement network, feeding the image into the generator to produce a similar image in a new space; it then constructs and minimizes a loss function to train the discriminator, which compares the image produced by the generator with the real image. We implemented the proposed method on two image datasets (DPED, LOL) and compared it with both traditional image enhancement methods and deep learning approaches. Experiments showed that images enhanced by the proposed network achieve higher PSNR and SSIM and good overall perceptual quality, demonstrating the method's effectiveness for low-illumination image enhancement.
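
PSNR and SSIM, the metrics cited above, can be computed with scikit-image. A minimal evaluation sketch, with placeholder file names:

```python
# Sketch of the PSNR/SSIM evaluation reported above, using
# scikit-image; "enhanced.png" and "ground_truth.png" are placeholders.
from skimage import io
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

enhanced = io.imread("enhanced.png")       # output of the GAN
reference = io.imread("ground_truth.png")  # normal-light reference

psnr = peak_signal_noise_ratio(reference, enhanced, data_range=255)
ssim = structural_similarity(reference, enhanced, channel_axis=-1,
                             data_range=255)
print(f"PSNR: {psnr:.2f} dB, SSIM: {ssim:.4f}")  # higher is better for both
```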


2021
Vol 30 (01)
Author(s):  
Jin-Tao Yu ◽  
Rui-Sheng Jia ◽  
Li Gao ◽  
Ruo-Nan Yin ◽  
Hong-Mei Sun ◽  
...  

Information
2021
Vol 13 (1)
pp. 1
Author(s):  
Rong Du ◽  
Weiwei Li ◽  
Shudong Chen ◽  
Congying Li ◽  
Yong Zhang

Underwater image enhancement recovers degraded underwater images to produce corresponding clear images. Image enhancement methods based on deep learning usually use paired data to train the model, but such paired data, i.e., degraded images and their corresponding clear images, are difficult to capture simultaneously in the underwater environment. In addition, retaining detailed information in the enhanced image is another critical problem. To address these issues, we propose a novel unpaired underwater image enhancement method via a cycle generative adversarial network (UW-CycleGAN) to recover degraded underwater images. Our proposed UW-CycleGAN model includes three main modules: (1) a content loss regularizer is adopted in the CycleGAN generator, constraining the detailed information in a degraded image to remain in the corresponding generated clear image; (2) a blur-promoting adversarial loss regularizer is introduced into the discriminator to reduce blur and noise in the generated clear images; (3) a DenseNet block is added to the generator to retain more information from each feature map during training. Experimental results on two unpaired underwater image datasets show satisfactory performance compared with state-of-the-art image enhancement methods, demonstrating the effectiveness of the proposed model.
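
As a rough illustration of module (1), here is a minimal sketch of a CycleGAN-style objective with a plain L1 content regularizer between the degraded input and the generated clear image; the abstract does not give the exact form of the paper's content loss, so this form is an assumption:

```python
# Illustrative sketch only: cycle-consistency plus a simple L1 content
# regularizer standing in for UW-CycleGAN's content loss. content_weight
# is an assumed value.
import torch
import torch.nn.functional as F

def generator_objective(G_xy, G_yx, x_degraded, content_weight=5.0):
    fake_clear = G_xy(x_degraded)    # degraded -> clear
    rec_degraded = G_yx(fake_clear)  # clear -> back to degraded
    cycle = F.l1_loss(rec_degraded, x_degraded)  # CycleGAN cycle term
    # Content regularizer: keep the input's structural detail
    # in the generated clear image.
    content = F.l1_loss(fake_clear, x_degraded)
    return cycle + content_weight * content
```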


2021
Vol 15
Author(s):  
Jingsi Zhang ◽  
Chengdong Wu ◽  
Xiaosheng Yu ◽  
Xiaoliang Lei

With the development of computer vision, high-quality images with rich information have great potential in both daily life and scientific research. However, due to differing lighting conditions, ambient noise, and other factors, image quality varies widely, which seriously hinders people's ability to discern the information in an image. In particular, images captured by a camera in the dark are difficult to recognize, and smart systems rely heavily on high-quality input images. Images collected in low-light environments exhibit high noise and color distortion, which makes them difficult to use and prevents their rich information from being fully exploited. To improve the quality of low-light images, this paper proposes a heterogeneous low-light image enhancement method based on a DenseNet generative adversarial network. First, the generator of the generative adversarial network is built on a DenseNet framework. Second, the mapping from low-light images to normal-light images is learned by the generative adversarial network. Third, the enhancement of low-light images is realized. Experimental results show that, in terms of the PSNR, SSIM, NIQE, UQI, NQE, and PIQE indexes, the proposed method compares favorably with state-of-the-art enhancement algorithms, improving image brightness more effectively and reducing noise in the enhanced images.
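
For readers unfamiliar with the dense connectivity the generator relies on, a minimal PyTorch sketch of a DenseNet-style block follows; the growth rate and layer count are illustrative assumptions, not the paper's settings:

```python
# Minimal sketch of a DenseNet-style block as might appear in the
# generator described above.
import torch
import torch.nn as nn

class DenseBlock(nn.Module):
    def __init__(self, in_channels, growth_rate=32, num_layers=4):
        super().__init__()
        self.layers = nn.ModuleList()
        channels = in_channels
        for _ in range(num_layers):
            self.layers.append(nn.Sequential(
                nn.Conv2d(channels, growth_rate, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
            ))
            channels += growth_rate  # each layer sees all earlier outputs

    def forward(self, x):
        features = [x]
        for layer in self.layers:
            out = layer(torch.cat(features, dim=1))
            features.append(out)
        # Dense connectivity: the block's output concatenates every
        # intermediate feature map, preserving information across layers.
        return torch.cat(features, dim=1)
```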


Author(s):  
Johannes Haubold ◽  
René Hosch ◽  
Lale Umutlu ◽  
Axel Wetter ◽  
Patrizia Haubold ◽  
...  

Abstract Objectives To reduce the dose of intravenous iodine-based contrast media (ICM) in CT through virtual contrast-enhanced images generated by generative adversarial networks. Methods Dual-energy CTs in the arterial phase of 85 patients were randomly split into an 80/20 train/test collective. Four different generative adversarial networks (GANs) were trained on image pairs, each comprising one slice with virtually reduced ICM and the original full-ICM CT slice, testing two input formats (2D and 2.5D) and two reduced ICM dose levels (−50% and −80%). The amount of intravenous ICM was virtually reduced by creating a virtual non-contrast series from the dual-energy data and adding the corresponding percentage of the iodine map. Evaluation was based on scores that assess image quality and similarity (L1 loss, SSIM, PSNR, FID). Additionally, a visual Turing test (VTT) with three radiologists was used to assess similarity and pathological consistency. Results The −80% models reach an SSIM of > 98%, a PSNR of > 48, an L1 loss between 7.5 and 8, and an FID between 1.6 and 1.7. In comparison, the −50% models reach an SSIM of > 99%, a PSNR of > 51, an L1 loss between 6.0 and 6.1, and an FID between 0.8 and 0.95. For the crucial question of pathological consistency, only the −50% ICM reduction networks achieved the 100% consistency required for clinical use. Conclusions The required amount of ICM for CT can be reduced by 50% while maintaining image quality and diagnostic accuracy using GANs. Further phantom studies and animal experiments are required to confirm these initial results. Key Points • The amount of contrast media required for CT can be reduced by 50% using generative adversarial networks. • Not only the image quality but especially the pathological consistency must be evaluated to assess safety. • Too pronounced a contrast media reduction (−80% in our collective) could compromise pathological consistency.
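
The dose-reduction simulation in Methods is a simple linear composition of the virtual non-contrast series and a scaled iodine map. A minimal sketch, with placeholder array names and HU calibration details omitted:

```python
# Sketch of the dose-reduction simulation described in Methods: a
# virtually reduced-ICM image is composed from the dual-energy virtual
# non-contrast (VNC) series plus a scaled iodine map. Array names are
# placeholders.
import numpy as np

def simulate_reduced_icm(vnc: np.ndarray, iodine_map: np.ndarray,
                         dose_fraction: float) -> np.ndarray:
    """dose_fraction=0.5 simulates a -50% ICM scan, 0.2 a -80% scan."""
    return vnc + dose_fraction * iodine_map

# Example: build the -50% input for the GAN from a full-dose study.
# reduced_50 = simulate_reduced_icm(vnc_volume, iodine_volume, 0.5)
```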

