The Novel Sensor Network Structure for Classification Processing Based on the Machine Learning Method of the ACGAN

Sensors ◽  
2019 ◽  
Vol 19 (14) ◽  
pp. 3145 ◽  
Author(s):  
Yuantao Chen ◽  
Jiajun Tao ◽  
Jin Wang ◽  
Xi Chen ◽  
Jingbo Xie ◽  
...  

To address the problems of unstable training and poor accuracy in image classification algorithms based on generative adversarial networks (GAN), a novel sensor network structure for classification processing using auxiliary classifier generative adversarial networks (ACGAN) is proposed in this paper. First, the real/fake discrimination of sensor samples is removed from the output layer of the discriminative network, so that only the posterior probability estimate of the sample label is output. Second, by treating the real sensor samples as supervised data and the generated sensor samples as labeled fake data, the loss functions of the generator and discriminator are reconstructed using the real/fake attributes of the sensor samples and the cross-entropy loss of the labels. Third, a pooling and caching method is introduced into the discriminator to extract the classification features more effectively. Finally, feature matching is added to the discriminative network to ensure the diversity of the generated sensor samples. Experimental results show that the proposed algorithm (CP-ACGAN) achieves better classification accuracy on the MNIST, CIFAR10 and CIFAR100 datasets than other solutions. Moreover, compared with the ACGAN and CNN classification algorithms that share the same deep network structure as CP-ACGAN, the proposed method achieves better classification performance and stability than other existing sensor solutions.
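
The reconstructed objective can be illustrated with a minimal sketch (PyTorch is assumed; the abstract does not name a framework). Here the generated samples are folded into an extra "fake" class so that a single cross-entropy term carries both the label information and the real/fake attribute; the class count, layer sizes, and function names are illustrative rather than the paper's exact design.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

NUM_CLASSES = 10          # e.g. MNIST digits; illustrative
FAKE_CLASS = NUM_CLASSES  # extra class index assigned to generated ("fake") samples

class ClassifierDiscriminator(nn.Module):
    """Discriminator that outputs only class-posterior logits (no separate real/fake head)."""
    def __init__(self, in_dim=784, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.LeakyReLU(0.2),
            nn.Linear(hidden, NUM_CLASSES + 1),  # K real classes + 1 "fake" class
        )

    def forward(self, x):
        return self.net(x)

def discriminator_loss(D, real_x, real_y, fake_x):
    """Cross-entropy over real samples (true labels) and generated samples ("fake" label)."""
    real_logits = D(real_x)
    fake_logits = D(fake_x.detach())
    fake_y = torch.full((fake_x.size(0),), FAKE_CLASS,
                        dtype=torch.long, device=fake_x.device)
    return F.cross_entropy(real_logits, real_y) + F.cross_entropy(fake_logits, fake_y)

def generator_loss(D, fake_x, target_y):
    """The generator tries to get its samples classified as their conditioned class."""
    return F.cross_entropy(D(fake_x), target_y)
```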

2020 ◽  
Vol 10 (5) ◽  
pp. 1729 ◽  
Author(s):  
Yuning Jiang ◽  
Jinhua Li

Objective: Super-resolution reconstruction is an increasingly important area in computer vision. To alleviate the problems that super-resolution models based on generative adversarial networks are difficult to train and produce artifacts in the reconstructed results, we propose a novel and improved algorithm. Methods: This paper presents TSRGAN (Super-Resolution Generative Adversarial Networks Combining Texture Loss), a model also based on generative adversarial networks, in which both the generator and discriminator networks are redefined. First, regarding the network structure, the generator is built from residual dense blocks without superfluous batch normalization layers, and the Visual Geometry Group (VGG)19 network is adopted as the basic framework of the discriminator. Second, regarding the loss function, a weighted combination of four losses (texture loss, perceptual loss, adversarial loss and content loss) is used as the generator's objective. The texture loss is introduced to encourage local information matching; the perceptual loss is improved by computing it on features taken before the activation layers; the adversarial loss is optimized following WGAN-GP (Wasserstein GAN with Gradient Penalty) theory; and the content loss ensures the accuracy of low-frequency information. During optimization, the target image information is thus reconstructed from both high- and low-frequency perspectives. Results: Experimental results show that our method achieves an average Peak Signal-to-Noise Ratio of 27.99 dB and an average Structural Similarity Index of 0.778 on the reconstructed images without losing much speed, outperforming the comparison algorithms on these objective evaluation indices. Moreover, TSRGAN significantly improves subjective visual quality such as brightness and texture detail, generating images with more realistic textures and more accurate brightness that better match human visual evaluation. Conclusions: Our improvements to the network structure reduce the model's computational cost and stabilize training, while the proposed generator loss function provides stronger supervision for restoring realistic textures and achieving brightness consistency. Experimental results prove the effectiveness and superiority of the TSRGAN algorithm.
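
A minimal sketch of the weighted generator objective, with a Gram-matrix term shown as one common way to express a texture loss for local matching; the loss weights and the Gram-matrix formulation are assumptions, since the abstract gives neither the exact coefficients nor the texture-loss definition (PyTorch is assumed).

```python
import torch

def gram_matrix(feat):
    # feat: (N, C, H, W) feature maps; the Gram matrix captures local texture
    # statistics and is assumed here purely for illustration.
    n, c, h, w = feat.shape
    f = feat.view(n, c, h * w)
    return torch.bmm(f, f.transpose(1, 2)) / (c * h * w)

def generator_objective(content_l, perceptual_l, adversarial_l, texture_l,
                        w_content=1.0, w_perc=1.0, w_adv=1e-3, w_tex=1e-2):
    # Weighted sum of the four losses; the weights are placeholders,
    # not the values used in the paper.
    return (w_content * content_l + w_perc * perceptual_l
            + w_adv * adversarial_l + w_tex * texture_l)
```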


2021 ◽  
Vol 11 (2) ◽  
pp. 721
Author(s):  
Hyung Yong Kim ◽  
Ji Won Yoon ◽  
Sung Jun Cheon ◽  
Woo Hyun Kang ◽  
Nam Soo Kim

Recently, generative adversarial networks (GANs) have been successfully applied to speech enhancement. However, there still remain two issues that need to be addressed: (1) GAN-based training is typically unstable due to its non-convex property, and (2) most of the conventional methods do not fully take advantage of the speech characteristics, which could result in a sub-optimal solution. In order to deal with these problems, we propose a progressive generator that can handle the speech in a multi-resolution fashion. Additionally, we propose a multi-scale discriminator that discriminates the real and generated speech at various sampling rates to stabilize GAN training. The proposed structure was compared with the conventional GAN-based speech enhancement algorithms using the VoiceBank-DEMAND dataset. Experimental results showed that the proposed approach can make the training faster and more stable, which improves the performance on various metrics for speech enhancement.
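
A minimal sketch of the multi-scale idea: the same waveform is judged at several resolutions by separate small discriminators. The downsampling factors, layer widths, and the use of average pooling are assumptions for illustration, not the paper's exact configuration (PyTorch is assumed).

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class ScaleDiscriminator(nn.Module):
    """A small 1-D convolutional discriminator applied at one resolution."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=15, stride=4, padding=7), nn.LeakyReLU(0.2),
            nn.Conv1d(16, 32, kernel_size=15, stride=4, padding=7), nn.LeakyReLU(0.2),
            nn.Conv1d(32, 1, kernel_size=3, padding=1),
        )

    def forward(self, wav):  # wav: (batch, 1, samples)
        return self.net(wav)

class MultiScaleDiscriminator(nn.Module):
    """One discriminator per resolution; the downsampling factors are illustrative."""
    def __init__(self, factors=(1, 2, 4)):
        super().__init__()
        self.factors = factors
        self.discriminators = nn.ModuleList(ScaleDiscriminator() for _ in factors)

    def forward(self, wav):
        outputs = []
        for factor, disc in zip(self.factors, self.discriminators):
            x = wav if factor == 1 else F.avg_pool1d(wav, kernel_size=factor)
            outputs.append(disc(x))
        return outputs  # one score map per sampling rate
```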


2018 ◽  
Vol 10 (7) ◽  
pp. 1123 ◽  
Author(s):  
Yuhang Zhang ◽  
Hao Sun ◽  
Jiawei Zuo ◽  
Hongqi Wang ◽  
Guangluan Xu ◽  
...  

Aircraft type recognition plays an important role in remote sensing image interpretation. Traditional methods suffer from poor generalization performance, while deep learning methods require large amounts of type-labeled data, which are expensive and time-consuming to obtain. To overcome these problems, in this paper we propose an aircraft type recognition framework based on conditional generative adversarial networks (GANs). First, we design a new method to precisely detect aircraft keypoints, which are used to generate aircraft masks and locate the positions of the aircraft. Second, a conditional GAN with a region-of-interest (ROI)-weighted loss function is trained on unlabeled aircraft images and their corresponding masks. Third, an ROI feature extraction method is carefully designed to extract multi-scale features from the GAN in the aircraft regions. After that, a linear support vector machine (SVM) classifier is adopted to classify each sample using its features. Benefiting from the GAN, we can learn features strong enough to represent aircraft from a large unlabeled dataset. Additionally, the ROI-weighted loss function and the ROI feature extraction method make the features relate more to the aircraft than to the background, which improves the quality of the features and increases the recognition accuracy significantly. Thorough experiments were conducted on a challenging dataset, and the results prove the effectiveness of the proposed aircraft type recognition framework.
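
The effect of the ROI weighting can be sketched with a simple weighted reconstruction term that penalizes errors inside the aircraft mask more heavily than background errors; the L1 form and the weight values are assumptions, since the abstract does not specify them (PyTorch is assumed).

```python
import torch

def roi_weighted_l1(generated, target, roi_mask, roi_weight=10.0, bg_weight=1.0):
    """Pixel-wise L1 loss weighted more heavily inside the aircraft ROI.
    roi_mask is 1 inside the aircraft region and 0 on the background;
    the weights and the L1 form are illustrative assumptions."""
    weights = bg_weight + (roi_weight - bg_weight) * roi_mask
    return (weights * (generated - target).abs()).mean()
```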


2020 ◽  
Author(s):  
Fajr Alarsan ◽  
Mamoon Younes

Abstract Generative Adversarial Networks (GANs) are among the most popular generative frameworks and have achieved compelling performance. They follow an adversarial approach in which two deep models, a generator and a discriminator, compete with each other. In this paper, we propose a Generative Adversarial Network with best hyper-parameter selection to generate fake images of the digits 1 to 9 with the generator, and we train the discriminator to decide whether the generated images are fake or real. Using a Genetic Algorithm to adapt the GAN hyper-parameters, the resulting method is named GANGA: Generative Adversarial Network with Genetic Algorithm. The implementation used the Anaconda environment with the TensorFlow library and Python as the programming language, together with the required libraries, and was validated on the MNIST dataset. The proposed method lets the genetic algorithm choose the best hyper-parameter values by minimizing a cost function, such as a loss function, or by maximizing an accuracy function. The GA was used to select the values of the learning rate, batch normalization, number of neurons and the parameter of the dropout layer.
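
A minimal sketch of how a genetic algorithm can search the stated hyper-parameters (learning rate, batch normalization, number of neurons, dropout). The candidate values, population size, and crossover/mutation scheme are illustrative, and `fitness` stands for training the GAN and returning the score to maximize, which the sketch leaves as a placeholder.

```python
import random

# Hypothetical search space covering the hyper-parameters named in the abstract.
SEARCH_SPACE = {
    "learning_rate": [1e-4, 2e-4, 5e-4, 1e-3],
    "batch_norm":    [True, False],
    "neurons":       [128, 256, 512],
    "dropout":       [0.2, 0.3, 0.4, 0.5],
}

def random_individual():
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def crossover(a, b):
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.1):
    return {k: (random.choice(SEARCH_SPACE[k]) if random.random() < rate else v)
            for k, v in ind.items()}

def genetic_search(fitness, generations=10, pop_size=8, elite=2):
    """fitness(hparams) -> score to maximize, e.g. accuracy (or negative loss)
    of the GAN trained with those hyper-parameters; training itself is omitted."""
    population = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(population, key=fitness, reverse=True)
        parents = scored[:elite]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - elite)]
        population = parents + children
    return max(population, key=fitness)
```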


Diagnostics ◽  
2021 ◽  
Vol 11 (9) ◽  
pp. 1629
Author(s):  
Tsutomu Gomi ◽  
Rina Sakai ◽  
Hidetake Hara ◽  
Yusuke Watanabe ◽  
Shinya Mizukami

In this study, a novel combination of hybrid generative adversarial networks (GANs) comprising a cycle-consistent GAN, pix2pix, and a mask pyramid network (MPN), termed CGpM-metal artifact reduction (MAR), was developed using projection data to reduce metal artifacts and the radiation dose during digital tomosynthesis. The CGpM-MAR algorithm was compared with conventional filtered back projection (FBP) without MAR, FBP with MAR, and convolutional neural network MAR. The MAR rates were compared using the artifact index (AI) and a Gumbel distribution analysis of the largest variation using a prosthesis phantom at various radiation doses. The novel CGpM-MAR yielded adequately effective overall performance in terms of AI. The resulting images were good independently of the type of metal used in the prosthesis phantom (p < 0.05), with good artifact removal at a 55% radiation-dose reduction. Furthermore, CGpM-MAR yielded the minimum in the largest-variation model at the 55% radiation-dose reduction. Regarding the AI and Gumbel distribution analyses, the novel CGpM-MAR provided superior MAR compared with the conventional reconstruction algorithms with and without MAR at the 55% radiation-dose reduction and presented features most similar to the reference FBP. CGpM-MAR is a promising method for metal artifact and radiation-dose reduction in clinical practice.


2020 ◽  
Vol 34 (04) ◽  
pp. 4852-4859
Author(s):  
Jinduo Liu ◽  
Junzhong Ji ◽  
Guangxu Xun ◽  
Liuyi Yao ◽  
Mengdi Huai ◽  
...  

Inferring effective connectivity between different brain regions from functional magnetic resonance imaging (fMRI) data has become an important advanced topic in neuroinformatics in recent years. However, current methods have limited use in effective connectivity studies due to the high noise and small sample size of fMRI data. In this paper, we propose a novel framework for inferring effective connectivity based on generative adversarial networks (GAN), named EC-GAN. The proposed EC-GAN framework infers effective connectivity via an adversarial process in which two models are trained simultaneously: a generator and a discriminator. The generator consists of a set of effective connectivity generators based on structural equation models, which can generate the fMRI time series of each brain region via effective connectivity. Meanwhile, the discriminator is employed to distinguish between the joint distributions of the real and generated fMRI time series. Experimental results on simulated data show that EC-GAN infers effective connectivity better than other state-of-the-art methods. The real-world experiments indicate that EC-GAN can provide a new and reliable perspective for analyzing the effective connectivity of fMRI data.
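
To make the structural-equation idea concrete, here is a tiny NumPy sketch in which region time series are generated from a linear SEM, X = A X + E, so X = (I - A)^{-1} E, with A holding directed effective-connectivity weights. The linear form and the example matrix are assumptions for illustration; EC-GAN's actual generators are learned adversarially.

```python
import numpy as np

def sem_generate(A, noise):
    """Generate region time series from a linear structural equation model:
    X = A X + E  =>  X = (I - A)^{-1} E, where A (zero diagonal) holds the
    directed effective-connectivity weights. Illustrative only."""
    n_regions = A.shape[0]
    return np.linalg.solve(np.eye(n_regions) - A, noise)  # noise: (regions, timepoints)

# Tiny usage example with a hypothetical 3-region connectivity matrix.
A = np.array([[0.0, 0.4, 0.0],
              [0.0, 0.0, 0.3],
              [0.2, 0.0, 0.0]])
E = np.random.default_rng(0).normal(size=(3, 100))
X = sem_generate(A, E)   # simulated fMRI-like time series, shape (3, 100)
```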


Author(s):  
Bingcai Wei ◽  
Liye Zhang ◽  
Kangtao Wang ◽  
Qun Kong ◽  
Zhuang Wang

Abstract Extracting traffic information from images plays an increasingly significant role in the Internet of Vehicles (IoV). However, due to the high-speed movement and bumps of the vehicle, images become blurred during acquisition. In addition, on rainy days, rain attached to the lens occludes the target and distorts the image. These problems create great obstacles for extracting key information from transportation images, affect the real-time judgment of road conditions by the vehicle control system, and can further cause decision-making errors in the system and even lead to traffic accidents. In this paper, we propose a motion-blur restoration and rain removal algorithm for the IoV based on generative adversarial networks and transfer learning. Dynamic scene deblurring and image de-raining are both challenging classical research directions in low-level vision. For both tasks, firstly, instead of using ReLU in a conventional residual block, we design a residual block containing three 256-channel convolutional layers and use the Leaky-ReLU activation function. Secondly, we use generative adversarial networks with our Resblocks for the image deblurring task as well as for the image de-raining task. Thirdly, experimental results on the synthetic blur dataset GOPRO and the real blur dataset RealBlur confirm the effectiveness of our model for image deblurring. Finally, for the image de-raining task based on transfer learning, we can fine-tune the pre-trained model with less training data and show good results on several datasets used for image rain removal.
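
A minimal PyTorch sketch of the residual block described above: three 256-channel convolutional layers with Leaky-ReLU instead of ReLU. The kernel size, padding, negative slope, and placement of the skip connection are assumptions, since the abstract does not specify them.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Residual block with three 256-channel convolutional layers and Leaky-ReLU."""
    def __init__(self, channels=256, negative_slope=0.2):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(negative_slope),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.LeakyReLU(negative_slope),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
        )

    def forward(self, x):
        return x + self.body(x)   # identity skip connection
```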


2021 ◽  
pp. 1-38
Author(s):  
Himesh Bhatia ◽  
William Paul ◽  
Fady Alajaji ◽  
Bahman Gharesifard ◽  
Philippe Burlina

Abstract We investigate the use of parameterized families of information-theoretic measures to generalize the loss functions of generative adversarial networks (GANs) with the objective of improving performance. A new generator loss function, least kth-order GAN (LkGAN), is introduced, generalizing the least squares GANs (LSGANs) by using a kth-order absolute error distortion measure with k≥1 (which recovers the LSGAN loss function when k=2). It is shown that minimizing this generalized loss function under an (unconstrained) optimal discriminator is equivalent to minimizing the kth-order Pearson-Vajda divergence. Another novel GAN generator loss function is next proposed in terms of Rényi cross-entropy functionals with order α>0, α≠1. It is demonstrated that this Rényi-centric generalized loss function, which provably reduces to the original GAN loss function as α→1, preserves the equilibrium point satisfied by the original GAN based on the Jensen-Rényi divergence, a natural extension of the Jensen-Shannon divergence. Experimental results indicate that the proposed loss functions, applied to the MNIST and CelebA data sets, under both DCGAN and StyleGAN architectures, confer performance benefits by virtue of the extra degrees of freedom provided by the parameters k and α, respectively. More specifically, experiments show improvements with regard to the quality of the generated images as measured by the Fréchet inception distance score and training stability. While it was applied to GANs in this study, the proposed approach is generic and can be used in other applications of information theory to deep learning, for example, the issues of fairness or privacy in artificial intelligence.
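
For concreteness, a sketch of the LSGAN-to-LkGAN generalization, written with the usual LSGAN target of 1 for generated samples; the exact target labels and scaling used in the paper may differ from this illustrative form.

```latex
% LSGAN generator loss (recovered, up to a constant factor, at k = 2):
L_G^{\mathrm{LS}} = \tfrac{1}{2}\,\mathbb{E}_{z}\!\left[\left(D(G(z)) - 1\right)^{2}\right]
% LkGAN generalization via a k-th order absolute error, k \ge 1 (illustrative form):
L_G^{(k)} = \mathbb{E}_{z}\!\left[\left|D(G(z)) - 1\right|^{k}\right]
```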


PLoS ONE ◽  
2021 ◽  
Vol 16 (11) ◽  
pp. e0260308
Author(s):  
Mauro Castelli ◽  
Luca Manzoni ◽  
Tatiane Espindola ◽  
Aleš Popovič ◽  
Andrea De Lorenzo

Wireless networks are among the fundamental technologies used to connect people. Considering the constant advancements in the field, telecommunication operators must guarantee a high-quality service to keep their customer portfolio. To ensure this high-quality service, it is common to establish partnerships with specialized technology companies that deliver software services to monitor the networks and identify faults and their respective solutions. A common barrier faced by these specialized companies is the lack of data to develop and test their products. This paper investigates the use of generative adversarial networks (GANs), which are state-of-the-art generative models, for generating synthetic telecommunication data related to Wi-Fi signal quality. We developed, trained, and compared two of the most used GAN architectures: the Vanilla GAN and the Wasserstein GAN (WGAN). Both models presented satisfactory results and were able to generate synthetic data similar to the real ones. In particular, the distribution of the synthetic data overlaps the distribution of the real data for all of the considered features. Moreover, the considered generative models can reproduce in the synthetic features the same associations observed among the real features. We chose the WGAN as the final model, but both models are suitable for addressing the problem at hand.
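
As a brief illustration of what distinguishes the WGAN from the Vanilla GAN, here is a minimal PyTorch sketch of the Wasserstein critic and generator objectives with the original weight-clipping constraint; the clip value is the common default rather than necessarily the one used by the authors, and the network definitions themselves are omitted.

```python
import torch

def wgan_critic_loss(critic, real, fake):
    # Wasserstein critic objective: maximize E[f(real)] - E[f(fake)],
    # written here as a loss to minimize.
    return critic(fake.detach()).mean() - critic(real).mean()

def wgan_generator_loss(critic, fake):
    return -critic(fake).mean()

def clip_critic_weights(critic, c=0.01):
    # Weight clipping from the original WGAN formulation (illustrative clip value).
    with torch.no_grad():
        for p in critic.parameters():
            p.clamp_(-c, c)
```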

