Evolution of neural network training set through addition of virtual samples

Author(s):  
Sungzoon Cho ◽  
Keonhoe Cha


Author(s):  
Fei Long ◽  
Fen Liu ◽  
Xiangli Peng ◽  
Zheng Yu ◽  
Huan Xu ◽  
...  

To improve the power quality disturbance recognition ability of neural networks, this paper studies a deep learning-based method for recognizing and classifying power quality disturbances: a power quality disturbance model is constructed to generate the training set; a deep neural network is constructed; the training set is used to train the deep neural network; and the performance of the deep neural network is verified. The results show that, with 20 dB–50 dB noise randomly added to the training set, the network achieves a recognition rate above 99% even under the most severe 20 dB noise condition, which traditional methods cannot achieve. Conclusion: the deep learning-based power quality disturbance identification and classification method overcomes the drawbacks of manual feature selection and poor robustness, helping to identify the category of power quality issues more accurately and quickly.
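The noise-augmentation step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the helper names `add_noise_snr` and `augment_training_set` are hypothetical, and the white-Gaussian-noise model at a target signal-to-noise ratio is an assumption.

```python
import numpy as np

def add_noise_snr(signal, snr_db, rng=None):
    """Add white Gaussian noise to a signal at a given SNR (in dB)."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(signal ** 2)
    # Scale noise power so that 10*log10(signal_power / noise_power) == snr_db
    noise_power = signal_power / (10 ** (snr_db / 10))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

def augment_training_set(signals, rng=None):
    """Corrupt each training signal with noise at a random SNR in 20-50 dB."""
    rng = np.random.default_rng(0) if rng is None else rng
    return np.stack([add_noise_snr(s, rng.uniform(20, 50), rng) for s in signals])
```

Training on such noise-corrupted copies is one common way to make a classifier robust to the noise levels it will see at test time.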


MENDEL ◽  
2017 ◽  
Vol 23 (1) ◽  
pp. 41-48
Author(s):  
Marco Castellani ◽  
Rahul Lalchandani

This paper investigates the effectiveness and efficiency of two competitive (predator-prey) evolutionary procedures for training multi-layer perceptron classifiers: Co-Adaptive Neural Network Training, and a modified version of Co-Evolutionary Neural Network Training. The study focused on how the performance of the two procedures varies as the size of the training set increases, and their ability to redress class imbalance problems of increasing severity. Compared to the customary backpropagation algorithm and a standard evolutionary algorithm, the two competitive procedures excelled in terms of quality of the solutions and execution speed. Co-Adaptive Neural Network Training excelled on class imbalance problems, and on classification problems of moderately large training sets. Co-Evolutionary Neural Network Training performed best on the largest data sets. The size of the training set was the most problematic issue for the backpropagation algorithm and the standard evolutionary algorithm, respectively in terms of accuracy of the solutions and execution speed. Backpropagation and the evolutionary algorithm were also not competitive on the class imbalance problems, where data oversampling could only partially remedy their shortcomings.
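The data-oversampling remedy mentioned at the end of the abstract is commonly implemented as random duplication of minority-class samples. A minimal sketch (the function name `random_oversample` is hypothetical and this is not the authors' code):

```python
import numpy as np

def random_oversample(X, y, rng=None):
    """Duplicate minority-class samples (with replacement) until every
    class matches the majority class count."""
    rng = np.random.default_rng() if rng is None else rng
    classes, counts = np.unique(y, return_counts=True)
    n_max = counts.max()
    idx = []
    for c in classes:
        c_idx = np.flatnonzero(y == c)
        # Draw extra indices from this class to close the gap to the majority
        extra = rng.choice(c_idx, size=n_max - c_idx.size, replace=True)
        idx.extend(c_idx)
        idx.extend(extra)
    idx = np.array(idx)
    return X[idx], y[idx]
```

As the abstract notes, duplicating samples only rebalances the class frequencies; it adds no new information, which is one reason oversampling can only partially remedy the weaker algorithms' shortcomings.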


2015 ◽  
Vol 11 (2) ◽  
pp. 94-98
Author(s):  
Vitaly M Tatyankin ◽  
Irina S Dyubko

The article discusses approaches to forming the training sample in the problem of monochrome image recognition. It is shown that varying the training sample makes it possible to reduce the neural network training error. Practical recommendations for forming a training sample are given.


2014 ◽  
pp. 58-69
Author(s):  
Iryna Turchenko

A simulation model of a section of a mine ventilation network is considered in this paper. Simulation of the transient aerogasdynamic processes of changing methane concentration is carried out under position and exponential control influences. A neural-based method of forming control influences is proposed, in which a neural network is trained on a set of optimal control influences. A criterion is defined and an algorithm is developed for forming optimal control influences as a neural network training set. Simulation of applying the control influences formed by the neural network is carried out, and the decrease of the control parameter in the mine ventilation network section is estimated.


Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 711
Author(s):  
Mina Basirat ◽  
Bernhard C. Geiger ◽  
Peter M. Roth

Information plane analysis, describing the mutual information between the input and a hidden layer and between a hidden layer and the target over time, has recently been proposed to analyze the training of neural networks. Since the activations of a hidden layer are typically continuous-valued, this mutual information cannot be computed analytically and must thus be estimated, resulting in apparently inconsistent or even contradicting results in the literature. The goal of this paper is to demonstrate how information plane analysis can still be a valuable tool for analyzing neural network training. To this end, we complement the prevailing binning estimator for mutual information with a geometric interpretation. With this geometric interpretation in mind, we evaluate the impact of regularization and interpret phenomena such as underfitting and overfitting. In addition, we investigate neural network learning in the presence of noisy data and noisy labels.
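The binning estimator the abstract refers to discretizes the continuous activations into a finite number of bins and computes mutual information from the resulting joint histogram. A minimal sketch for the activation-target quantity I(T; Y); the equal-width binning, the bin count of 30, and the function name are assumptions for illustration:

```python
import numpy as np

def binned_mutual_information(t, y, n_bins=30):
    """Estimate I(T; Y) in bits by equal-width binning of continuous
    activations t against discrete targets y."""
    # Discretize activations into equal-width bins over their range
    t_binned = np.digitize(t, np.linspace(t.min(), t.max(), n_bins))
    # Joint histogram of (binned activation, target)
    joint = np.zeros((t_binned.max() + 1, y.max() + 1))
    for ti, yi in zip(t_binned, y):
        joint[ti, yi] += 1
    p = joint / joint.sum()
    pt = p.sum(axis=1, keepdims=True)   # marginal over targets
    py = p.sum(axis=0, keepdims=True)   # marginal over bins
    nz = p > 0
    # I(T;Y) = sum p(t,y) * log2( p(t,y) / (p(t) p(y)) )
    return float(np.sum(p[nz] * np.log2(p[nz] / (pt @ py)[nz])))
```

Because the estimate depends on the number and placement of bins, different binning choices can yield different information-plane trajectories, which is consistent with the apparently contradicting results the abstract mentions.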

