Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks

Author(s):  
Chuan Li ◽  
Michael Wand
2021 ◽  
Vol 36 (2) ◽  
pp. 85
Author(s):  
Guang Cheng ◽  
Jian Gong ◽  
Zechen Wang ◽  
Xiangjun Liu ◽  
Xinran Li ◽  
...  

2018 ◽  
Vol 8 (12) ◽  
pp. 2664 ◽  
Author(s):  
Caidan Zhao ◽  
Caiyun Chen ◽  
Zeping He ◽  
Zhiqiang Wu

Recently, many studies have reported on image synthesis based on Generative Adversarial Networks (GANs). However, GANs have received little attention for the signal classification problem. In the context of using wireless signals to classify illegal Unmanned Aerial Vehicles (UAVs), this paper explores the feasibility of using a GAN to improve the training datasets and obtain a better classification model, thereby improving classification accuracy. First, we use the generative model of the GAN to generate a large dataset that requires no manual annotation. At the same time, the discriminative model of the GAN is modified, through its loss function, to classify signal types. Finally, this model can be applied in the outdoor environment to obtain a real-time classification system for illegal UAV signals. Our experiments confirm that the improved Auxiliary Classifier Generative Adversarial Network (AC-GAN), trained on limited datasets, achieves excellent results. The recognition rate can reach more than 95% in the indoor environment, and the method is also applicable outdoors. Moreover, based on the theory of Wasserstein GANs (WGANs) and AC-GANs, a more robust Auxiliary Classifier Wasserstein GAN (AC-WGAN) model is obtained, which is suitable for multi-class UAV classification. Through the combination of the AC-WGAN and the Universal Software Radio Peripheral (USRP) B210 software-defined radio (SDR) platform, a real-time UAV signal classification system is also implemented.
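To make the AC-WGAN idea above concrete, the following is a minimal PyTorch sketch of a critic that outputs both a Wasserstein score and auxiliary class logits for UAV signal types. The 1-D input shape, layer sizes, class count, and loss weighting are illustrative assumptions, not the authors' implementation; the gradient penalty or weight clipping required by WGAN training is omitted for brevity.

```python
# Sketch of an AC-WGAN-style critic for 1-D wireless-signal classification.
# Network shape and hyperparameters are assumptions for illustration only.
import torch
import torch.nn as nn

class ACWGANCritic(nn.Module):
    def __init__(self, signal_len=1024, num_classes=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 32, kernel_size=9, stride=2, padding=4),
            nn.LeakyReLU(0.2),
            nn.Conv1d(32, 64, kernel_size=9, stride=2, padding=4),
            nn.LeakyReLU(0.2),
            nn.Flatten(),
        )
        feat_dim = 64 * (signal_len // 4)
        self.critic_head = nn.Linear(feat_dim, 1)            # Wasserstein score (no sigmoid)
        self.class_head = nn.Linear(feat_dim, num_classes)   # auxiliary UAV-type logits

    def forward(self, x):                                    # x: (batch, 1, signal_len)
        h = self.features(x)
        return self.critic_head(h), self.class_head(h)

def critic_loss(critic, real, real_labels, fake, fake_labels, aux_weight=1.0):
    """Wasserstein critic term plus auxiliary classification term (AC-WGAN)."""
    ce = nn.CrossEntropyLoss()
    score_real, logits_real = critic(real)
    score_fake, logits_fake = critic(fake)
    w_loss = score_fake.mean() - score_real.mean()           # Wasserstein term
    aux = ce(logits_real, real_labels) + ce(logits_fake, fake_labels)
    return w_loss + aux_weight * aux
```

At inference time, only the class head is needed: a captured signal segment is passed through the critic and the argmax of the class logits gives the predicted UAV type.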


2021 ◽  
Vol 11 (5) ◽  
pp. 2214
Author(s):  
Prasad Hettiarachchi ◽  
Rashmika Nawaratne ◽  
Damminda Alahakoon ◽  
Daswin De Silva ◽  
Naveen Chilamkurti

Rapid developments in urbanization and smart city environments have accelerated the need to deliver safe, sustainable, and effective resource utilization and service provision, and have thereby heightened the need for intelligent, real-time video surveillance. Recent advances in machine learning and deep learning can detect and localize salient objects in surveillance video streams; however, several practical issues remain unaddressed, such as diverse weather conditions, recording conditions, and motion blur. In this context, image de-raining is an important problem that has been investigated extensively in recent years to provide accurate, high-quality surveillance in the smart city domain. Existing deep convolutional neural networks have achieved great success in image translation and other computer vision tasks; however, image de-raining is ill-posed and has not been addressed in real-time, intelligent video surveillance systems. In this work, we propose to utilize the generative capabilities of recently introduced conditional generative adversarial networks (cGANs) as an image de-raining approach. We utilize the adversarial loss in GANs, which adds a component to the loss function that regulates the final output and helps to yield better results. Experiments on both real and synthetic data show that the proposed method outperforms most existing state-of-the-art models in terms of quantitative evaluation and visual appearance.
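As an illustration of the objective described above, here is a minimal PyTorch sketch of a conditional GAN de-raining loss in which the adversarial term is combined with a pixel-wise reconstruction term. The L1 term, its weight, and the discriminator interface are common pix2pix-style assumptions, not necessarily the paper's exact configuration.

```python
# Sketch of a cGAN de-raining objective: adversarial loss + L1 reconstruction.
# The discriminator is assumed to take the rainy input concatenated with a
# candidate output along the channel dimension (a common cGAN convention).
import torch
import torch.nn as nn

bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def generator_loss(discriminator, rainy, derained, clean, lam=100.0):
    """Adversarial term (conditioned on the rainy input) plus L1 reconstruction."""
    pred_fake = discriminator(torch.cat([rainy, derained], dim=1))
    adv = bce(pred_fake, torch.ones_like(pred_fake))   # try to fool the discriminator
    rec = l1(derained, clean)                          # stay close to the ground truth
    return adv + lam * rec

def discriminator_loss(discriminator, rainy, derained, clean):
    """Standard cGAN discriminator loss: real (rainy, clean) pairs vs. generated pairs."""
    pred_real = discriminator(torch.cat([rainy, clean], dim=1))
    pred_fake = discriminator(torch.cat([rainy, derained.detach()], dim=1))
    return (bce(pred_real, torch.ones_like(pred_real)) +
            bce(pred_fake, torch.zeros_like(pred_fake)))
```

The adversarial term is what the abstract refers to as the additional loss component: without it the generator would minimize only the pixel-wise error, which tends to produce overly smooth results.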


2018 ◽  
Vol 12 (5) ◽  
pp. 596-602 ◽  
Author(s):  
Wenkai Chang ◽  
Guodong Yang ◽  
Junzhi Yu ◽  
Zize Liang
