Dynamic Scene Deblurring and Image De-Raining Based on Generative Adversarial Networks and Transfer Learning for Internet of Vehicle

Author(s):  
Bingcai Wei ◽  
Liye Zhang ◽  
Kangtao Wang ◽  
Qun Kong ◽  
Zhuang Wang

Abstract Extracting traffic information from images plays an important role in the Internet of Vehicle (IoV). However, due to the high-speed movement and bumpiness of the vehicle, motion blur occurs during image acquisition. In addition, on rainy days, because raindrops adhere to the lens, the target is occluded by rain and the image is distorted. These problems pose great obstacles to extracting key information from transportation images, which affects the vehicle control system's real-time judgment of road conditions, and can further cause decision-making errors in the system and even traffic accidents. In this paper, we propose a motion-blur restoration and rain removal algorithm for IoV based on a Generative Adversarial Network (GAN) and transfer learning. Dynamic scene deblurring and image de-raining are both challenging classical tasks in low-level vision. For both tasks, firstly, instead of using ReLU in a conventional residual block, we designed a residual block containing three 256-channel convolutional layers with the Leaky-ReLU activation function. Secondly, we used generative adversarial networks built on our Resblock for the image deblurring task as well as the image de-raining task. Thirdly, experimental results on the synthetic blur dataset GOPRO and the real blur dataset RealBlur confirm the effectiveness of our model for image deblurring. Finally, for the transfer learning-based image de-raining task, we fine-tune the pre-trained model with less training data and show good results on several de-raining datasets.
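The abstract specifies a residual block of three 256-channel convolutional layers with Leaky-ReLU in place of ReLU. A minimal PyTorch sketch of such a block follows; the 3×3 kernel size, the 0.2 negative slope, and the placement of the skip connection are assumptions for illustration, not details from the paper.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block with three 256-channel conv layers and Leaky-ReLU
    in place of ReLU, as described in the abstract. Kernel size and
    negative slope are assumptions, not taken from the paper."""
    def __init__(self, channels: int = 256, negative_slope: float = 0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(negative_slope, inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(negative_slope, inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.body(x)  # skip connection around the three convs

x = torch.randn(1, 256, 64, 64)
print(ResBlock()(x).shape)  # torch.Size([1, 256, 64, 64])
```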



Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3075 ◽  
Author(s):  
Baoqing Guo ◽  
Gan Geng ◽  
Liqiang Zhu ◽  
Hongmei Shi ◽  
Zujun Yu

Foreign object intrusion is a great threat to high-speed railway safety operations, so accurate foreign object intrusion detection is particularly important. Because intruding foreign object samples are scarce during the operational period, artificially generated ones will greatly benefit the development of detection methods. In this paper, we propose a novel method to generate railway intruding object images based on an improved conditional deep convolutional generative adversarial network (C-DCGAN). It consists of a generator and multi-scale discriminators, and the loss function is also improved so as to generate samples with high quality and authenticity. The generator is extracted in order to generate foreign object images from input semantic labels, and we synthesize the generated objects into the railway scene. To make the generated objects more similar in scale to real objects at different positions in a railway scene, a scale estimation algorithm based on the gauge constant is proposed. The experimental results on the railway intruding object dataset show that the proposed C-DCGAN model outperforms several state-of-the-art methods, achieving higher quality (pixel-wise accuracy, mean intersection-over-union (mIoU), and mean average precision (mAP) of 80.46%, 0.65, and 0.69, respectively) and diversity (a Fréchet Inception Distance (FID) score of 26.87) of generated samples. The mIoU of the real-generated pedestrian pairs reaches 0.85, indicating high scale accuracy for the generated intruding objects in the railway scene.
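The abstract names a generator paired with multi-scale discriminators. One common realization of multi-scale discriminators, sketched below in PyTorch, runs identical small discriminators on progressively downsampled copies of the input image; every layer width and depth here is an assumption, not the paper's exact configuration.

```python
import torch
import torch.nn as nn

class PatchDiscriminator(nn.Module):
    """A small PatchGAN-style discriminator; depth and widths are assumptions."""
    def __init__(self, in_ch: int = 3):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(in_ch, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(128, 1, 4, stride=1, padding=1),  # patch-wise real/fake map
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.net(x)

class MultiScaleDiscriminator(nn.Module):
    """Runs the same discriminator architecture at several image scales."""
    def __init__(self, num_scales: int = 3):
        super().__init__()
        self.discs = nn.ModuleList(PatchDiscriminator() for _ in range(num_scales))
        self.down = nn.AvgPool2d(3, stride=2, padding=1, count_include_pad=False)

    def forward(self, x: torch.Tensor) -> list:
        outs = []
        for d in self.discs:
            outs.append(d(x))
            x = self.down(x)  # halve resolution for the next discriminator
        return outs

outs = MultiScaleDiscriminator()(torch.randn(1, 3, 256, 256))
print([o.shape for o in outs])  # one real/fake map per scale
```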


2020 ◽  
Vol 2020 ◽  
pp. 1-11
Author(s):  
Fangming Bi ◽  
Zijian Man ◽  
Yang Xia ◽  
Wei Liu ◽  
Wenjia Yang ◽  
...  

Generative adversarial networks are currently used to solve various problems and are one of the most popular models. During training, the generator and discriminator play a continuous game against each other. While this adversarial process improves the quality of the generated pictures, it also makes it difficult for the loss function to stabilize, and training is extremely slow compared with other methods. In addition, since a generative adversarial network directly learns the data distribution of the samples, the model becomes uncontrollable and its degrees of freedom grow too large as the original data distribution is approximated ever more closely. We propose a new transfer learning training idea for unsupervised generative models: the decoder of a trained variational autoencoder supplies the network architecture and initial parameters of the generative adversarial network's generator. In addition, samples drawn from the standard normal distribution are fed into the model to control its degrees of freedom. Finally, we evaluated our method on the MNIST, CIFAR10, and LSUN datasets. The experiments show that our proposed method makes the loss function converge more quickly and increases model accuracy.
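The core transfer step can be stated compactly: because both the VAE decoder and the GAN generator map a latent vector to an image, a trained decoder's weights can initialize the generator directly. A hedged sketch, with an illustrative architecture and latent size that are not taken from the paper:

```python
import torch
import torch.nn as nn

latent_dim = 64  # illustrative latent size

def make_decoder() -> nn.Module:
    """Illustrative decoder/generator mapping a latent vector to a 28x28 image."""
    return nn.Sequential(
        nn.Linear(latent_dim, 256), nn.ReLU(),
        nn.Linear(256, 28 * 28), nn.Sigmoid(),
    )

vae_decoder = make_decoder()   # assume this was trained as part of a VAE
generator = make_decoder()     # GAN generator with the same architecture
generator.load_state_dict(vae_decoder.state_dict())  # transfer weights

# As in the abstract, inputs are sampled from a standard normal distribution.
z = torch.randn(16, latent_dim)
fake_images = generator(z).view(16, 1, 28, 28)
```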


2017 ◽  
Author(s):  
Benjamin Sanchez-Lengeling ◽  
Carlos Outeiral ◽  
Gabriel L. Guimaraes ◽  
Alan Aspuru-Guzik

Molecular discovery seeks to generate chemical species tailored to very specific needs. In this paper, we present ORGANIC, a framework based on Objective-Reinforced Generative Adversarial Networks (ORGAN), capable of producing a distribution over molecular space that matches a certain set of desirable metrics. This methodology combines two successful techniques from the machine learning community: a Generative Adversarial Network (GAN), to create non-repetitive, sensible molecular species, and Reinforcement Learning (RL), to bias this generative distribution towards certain attributes. We explore several applications, from the optimization of random physicochemical properties to candidates for drug discovery and organic photovoltaic material design.
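In ORGAN-style training, the reward that reinforcement learning maximizes is typically a blend of the discriminator's realism score and the domain objective. A sketch of that blending, where the convex-combination weighting is an assumption for illustration:

```python
def organ_reward(discriminator_score: float,
                 objective_score: float,
                 lam: float = 0.5) -> float:
    """Blend adversarial realism with a desired chemical metric.
    lam = 1.0 recovers a plain GAN reward; lam = 0.0 is pure objective
    optimization. The exact weighting scheme is an assumption here."""
    return lam * discriminator_score + (1.0 - lam) * objective_score

# e.g. a molecule judged realistic (0.9) but with a mediocre target property (0.4)
print(organ_reward(0.9, 0.4, lam=0.5))  # 0.65
```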


2021 ◽  
Vol 11 (15) ◽  
pp. 7034
Author(s):  
Hee-Deok Yang

Artificial intelligence technologies and vision systems are used in various devices, such as automotive navigation systems, object-tracking systems, and intelligent closed-circuit televisions. In particular, outdoor vision systems have been applied across numerous fields of analysis. Despite their widespread use, current systems work well only under good weather conditions; they cannot account for inclement conditions such as rain, fog, mist, and snow, and images captured under such conditions degrade the performance of vision systems. Vision systems therefore need to detect, recognize, and remove the noise caused by rain, snow, and mist to boost the performance of the algorithms employed in image processing. Several studies have targeted the removal of noise resulting from inclement conditions. We focused on eliminating the effects of raindrops on images captured with outdoor vision systems in which the camera was exposed to rain. An attentive generative adversarial network (ATTGAN) was used to remove raindrops from the images. This network was composed of two parts: an attentive-recurrent network and a contextual autoencoder. The ATTGAN generated an attention map to detect rain droplets, and a de-rained image was generated from it. We increased the number of attentive-recurrent network layers to prevent gradient sparsity, making the entire generation process more stable without preventing the network from converging. The experimental results confirmed that the extended ATTGAN could effectively remove various types of raindrops from images.
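A schematic PyTorch sketch of the attentive-recurrent idea follows: an attention map is refined over several recurrent steps from the rainy input, with more steps yielding a more refined map. The simplified convolutional cell below stands in for the paper's recurrent unit, and all layer choices are assumptions.

```python
import torch
import torch.nn as nn

class AttentiveRecurrent(nn.Module):
    """Refines a raindrop attention map over `steps` recurrent iterations."""
    def __init__(self, steps: int = 4):
        super().__init__()
        self.steps = steps
        self.cell = nn.Sequential(          # simplified stand-in for a ConvLSTM
            nn.Conv2d(4, 32, 3, padding=1), nn.LeakyReLU(0.2, True),
            nn.Conv2d(32, 1, 3, padding=1), nn.Sigmoid(),
        )

    def forward(self, rainy: torch.Tensor) -> torch.Tensor:
        n, _, h, w = rainy.shape
        attn = torch.zeros(n, 1, h, w, device=rainy.device)
        for _ in range(self.steps):         # more steps = a more refined map
            attn = self.cell(torch.cat([rainy, attn], dim=1))
        return attn  # values near 1 mark likely raindrop pixels

attn = AttentiveRecurrent()(torch.randn(1, 3, 120, 160))
print(attn.shape)  # torch.Size([1, 1, 120, 160])
```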


Sensors ◽  
2021 ◽  
Vol 21 (15) ◽  
pp. 4953
Author(s):  
Sara Al-Emadi ◽  
Abdulla Al-Ali ◽  
Abdulaziz Al-Ali

Drones are becoming increasingly popular not only for recreational purposes but also in day-to-day applications in engineering, medicine, logistics, security and others. In addition to their useful applications, an alarming concern regarding physical infrastructure security, safety and privacy has arisen due to the potential for their use in malicious activities. To address this problem, we propose a novel solution that automates the drone detection and identification processes using a drone's acoustic features with different deep learning algorithms. However, the lack of acoustic drone datasets hinders the ability to implement an effective solution. In this paper, we aim to fill this gap by introducing a hybrid drone acoustic dataset composed of recorded drone audio clips and artificially generated drone audio samples produced with a state-of-the-art deep learning technique known as the Generative Adversarial Network. Furthermore, we examine the effectiveness of using drone audio with different deep learning algorithms, namely, the Convolutional Neural Network, the Recurrent Neural Network and the Convolutional Recurrent Neural Network, in drone detection and identification. Moreover, we investigate the impact of our proposed hybrid dataset on drone detection. Our findings prove the advantage of using deep learning techniques for drone detection and identification while confirming our hypothesis on the benefits of using Generative Adversarial Networks to generate realistic drone audio clips with the aim of enhancing the detection of new and unfamiliar drones.
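As an illustration of the Convolutional Recurrent Neural Network variant, the sketch below classifies audio clips represented as mel spectrograms: convolutions extract spectral features, a GRU models their evolution over time, and a linear head makes the prediction. The input shape and layer sizes are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn

class CRNN(nn.Module):
    """Conv layers extract spectral features; a GRU models their
    evolution over time; a linear head classifies the clip."""
    def __init__(self, n_mels: int = 64, n_classes: int = 2):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.gru = nn.GRU(32 * (n_mels // 4), 64, batch_first=True)
        self.head = nn.Linear(64, n_classes)

    def forward(self, spec: torch.Tensor) -> torch.Tensor:
        f = self.conv(spec)                   # (batch, 32, n_mels/4, time/4)
        f = f.permute(0, 3, 1, 2).flatten(2)  # (batch, time/4, 32 * n_mels/4)
        _, h = self.gru(f)
        return self.head(h[-1])               # class logits per clip

logits = CRNN()(torch.randn(8, 1, 64, 128))   # e.g. 8 short spectrogram clips
print(logits.shape)  # torch.Size([8, 2])
```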


Author(s):  
Lingyu Yan ◽  
Jiarun Fu ◽  
Chunzhi Wang ◽  
Zhiwei Ye ◽  
Hongwei Chen ◽  
...  

Abstract With the development of image recognition technology, face, body shape, and other factors have been widely used as identification labels, which provide a lot of convenience for our daily life. However, image recognition has much higher requirements for image conditions than traditional identification methods like a password. Therefore, image enhancement plays an important role in the analysis of noisy images, among which low-light images are the top priority of our research. In this paper, a low-light image enhancement method based on a Generative Adversarial Network (GAN) optimized with an enhancement network module is proposed. The proposed method first applies the enhancement network to feed the input image into the generator, which generates a similar image in the new space; it then constructs and minimizes a loss function to train the discriminator, which compares the image generated by the generator with the real image. We implemented the proposed method on two image datasets (DPED, LOL) and compared it with both traditional image enhancement methods and deep learning approaches. Experiments showed that images enhanced by our proposed network have higher PSNR and SSIM and relatively good overall perceptual quality, demonstrating the effectiveness of the method for low-illumination image enhancement.
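For reference, the PSNR metric reported in the evaluation is derived from the mean squared error between the enhanced image and its ground truth; a minimal sketch, assuming images scaled to [0, 1]:

```python
import torch

def psnr(enhanced: torch.Tensor, reference: torch.Tensor,
         max_val: float = 1.0) -> torch.Tensor:
    """Peak signal-to-noise ratio in dB; higher means closer to the reference."""
    mse = torch.mean((enhanced - reference) ** 2)
    return 10.0 * torch.log10(max_val ** 2 / mse)

a = torch.rand(1, 3, 64, 64)
print(psnr(a + 0.01 * torch.randn_like(a), a))  # roughly 40 dB
```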


2021 ◽  
Vol 11 (2) ◽  
pp. 721
Author(s):  
Hyung Yong Kim ◽  
Ji Won Yoon ◽  
Sung Jun Cheon ◽  
Woo Hyun Kang ◽  
Nam Soo Kim

Recently, generative adversarial networks (GANs) have been successfully applied to speech enhancement. However, two issues still need to be addressed: (1) GAN-based training is typically unstable due to its non-convex property, and (2) most conventional methods do not fully take advantage of speech characteristics, which can result in a sub-optimal solution. To deal with these problems, we propose a progressive generator that can handle speech in a multi-resolution fashion. Additionally, we propose a multi-scale discriminator that discriminates between real and generated speech at various sampling rates to stabilize GAN training. The proposed structure was compared with conventional GAN-based speech enhancement algorithms using the VoiceBank-DEMAND dataset. Experimental results showed that the proposed approach makes training faster and more stable and improves performance on various speech enhancement metrics.
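A hedged sketch of the multi-scale discriminator idea for waveforms: identical 1-D convolutional discriminators are applied to the raw signal and to progressively downsampled copies, so each one judges the speech at a different effective sampling rate. The layer sizes and the number of rates below are assumptions.

```python
import torch
import torch.nn as nn

class WaveDiscriminator(nn.Module):
    """Small 1-D conv discriminator; depth and widths are illustrative."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, 15, stride=4, padding=7), nn.LeakyReLU(0.2, True),
            nn.Conv1d(16, 32, 15, stride=4, padding=7), nn.LeakyReLU(0.2, True),
            nn.Conv1d(32, 1, 3, padding=1),
        )

    def forward(self, wav: torch.Tensor) -> torch.Tensor:
        return self.net(wav)

class MultiRateDiscriminator(nn.Module):
    """One discriminator per sampling rate: full, half, and quarter rate."""
    def __init__(self, num_rates: int = 3):
        super().__init__()
        self.discs = nn.ModuleList(WaveDiscriminator() for _ in range(num_rates))
        self.down = nn.AvgPool1d(4, stride=2, padding=1)  # halves the rate

    def forward(self, wav: torch.Tensor) -> list:
        outs = []
        for d in self.discs:                # wav: (batch, 1, samples)
            outs.append(d(wav))
            wav = self.down(wav)
        return outs

outs = MultiRateDiscriminator()(torch.randn(2, 1, 16000))  # 1 s at 16 kHz
print([o.shape for o in outs])
```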


2021 ◽  
Vol 204 ◽  
pp. 79-89
Author(s):  
Borja Espejo-Garcia ◽  
Nikos Mylonas ◽  
Loukas Athanasakos ◽  
Eleanna Vali ◽  
Spyros Fountas
