A nonlinear training set superposition filter derived by neural network training methods for implementation in a shift-invariant optical correlator

Author(s):  
Ioannis Kypraios ◽  
Rupert C. D. Young ◽  
Philip M. Birch ◽  
Christopher R. Chatwin
Author(s):  
Fei Long ◽  
Fen Liu ◽  
Xiangli Peng ◽  
Zheng Yu ◽  
Huan Xu ◽  
...  

In order to improve the power quality disturbance recognition ability of neural networks, this paper studies a deep learning-based method for recognizing and classifying power quality disturbances: a power quality disturbance model is constructed to generate the training set; a deep neural network is built; the training set is applied to train the deep neural network; and the performance of the trained network is verified. The results show that, with noise at 20 dB–50 dB randomly added to the training set, recognition accuracy exceeds 99% even under the most severe 20 dB noise condition, which traditional methods cannot achieve. Conclusion: the deep learning-based power quality disturbance recognition and classification method overcomes the drawbacks of manual feature selection and poor robustness, helping to identify the category of power quality problems more accurately and quickly.
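As an illustration of the workflow described above, the following is a minimal sketch of noise-augmented training data for a power quality disturbance classifier. The synthetic sag/harmonic signal models, the per-sample 20–50 dB SNR range, and the small scikit-learn MLPClassifier are illustrative assumptions, not the authors' deep network.

```python
# Minimal sketch (not the authors' code): noise-augmented training data for a
# power quality disturbance classifier. Signal models, SNR range and the small
# MLPClassifier are illustrative assumptions.
import numpy as np
from sklearn.neural_network import MLPClassifier

rng = np.random.default_rng(0)
fs, f0, n = 3200, 50.0, 640          # sampling rate, mains frequency, samples (10 cycles)
t = np.arange(n) / fs

def disturbance(kind):
    """Generate one synthetic waveform: 0 = normal, 1 = sag, 2 = harmonic distortion."""
    x = np.sin(2 * np.pi * f0 * t)
    if kind == 1:                     # voltage sag over part of the window
        x[n // 4: n // 2] *= 0.5
    elif kind == 2:                   # 3rd and 5th harmonic content
        x += 0.2 * np.sin(2 * np.pi * 3 * f0 * t) + 0.1 * np.sin(2 * np.pi * 5 * f0 * t)
    return x

def add_noise(x, snr_db):
    """Add white Gaussian noise at the requested SNR (in dB)."""
    p_signal = np.mean(x ** 2)
    p_noise = p_signal / (10 ** (snr_db / 10))
    return x + rng.normal(0.0, np.sqrt(p_noise), x.shape)

# Build a training set with noise drawn uniformly from the 20-50 dB SNR range.
X, y = [], []
for _ in range(3000):
    label = int(rng.integers(0, 3))
    X.append(add_noise(disturbance(label), rng.uniform(20, 50)))
    y.append(label)
X, y = np.array(X), np.array(y)

clf = MLPClassifier(hidden_layer_sizes=(128, 64), max_iter=300, random_state=0)
clf.fit(X[:2500], y[:2500])
print("held-out accuracy:", clf.score(X[2500:], y[2500:]))
```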


2012 ◽  
Vol 2012 ◽  
pp. 1-9 ◽  
Author(s):  
Ioannis E. Livieris ◽  
Panagiotis Pintelas

Conjugate gradient methods constitute excellent neural network training methods characterized by their simplicity, numerical efficiency, and very low memory requirements. In this paper, we propose a conjugate gradient neural network training algorithm which guarantees sufficient descent with any line search, thereby avoiding the usually inefficient restarts. Moreover, it achieves high-order accuracy in approximating the second-order curvature information of the error surface by utilizing the modified secant condition proposed by Li et al. (2007). Under mild conditions, we establish that the proposed method is globally convergent for general functions under the strong Wolfe conditions. Experimental results provide evidence that the proposed method is preferable to, and in general superior to, the classical conjugate gradient methods, and has the potential to significantly enhance the computational efficiency and robustness of the training process.
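As a point of reference for the approach described above, here is a minimal sketch of a plain nonlinear conjugate gradient training loop on a flattened weight vector, using a Polak-Ribière+ update and a simple backtracking (Armijo) line search. The modified secant condition of Li et al. (2007) and the strong Wolfe line search used in the paper are not implemented, and the toy least-squares problem is purely illustrative.

```python
# Minimal sketch of nonlinear conjugate gradient training on a flattened weight
# vector: plain Polak-Ribiere+ update with a backtracking line search. This is
# not the paper's method (no modified secant condition, no strong Wolfe search).
import numpy as np

def cg_train(loss, grad, w, iters=200, restart=50):
    g = grad(w)
    d = -g                                       # initial direction: steepest descent
    for k in range(iters):
        # Backtracking line search (sufficient decrease / Armijo condition).
        alpha, f0, slope = 1.0, loss(w), g @ d
        while loss(w + alpha * d) > f0 + 1e-4 * alpha * slope:
            alpha *= 0.5
        w_new = w + alpha * d
        g_new = grad(w_new)
        # Polak-Ribiere+ coefficient; clipping at 0 gives an automatic restart.
        beta = max(0.0, g_new @ (g_new - g) / (g @ g + 1e-12))
        d = -g_new + beta * d
        if (k + 1) % restart == 0 or g_new @ d >= 0:   # enforce a descent direction
            d = -g_new
        w, g = w_new, g_new
    return w

# Toy usage: least-squares "training" of a linear single-layer model.
rng = np.random.default_rng(1)
A, b = rng.normal(size=(100, 10)), rng.normal(size=100)
loss = lambda w: 0.5 * np.sum((A @ w - b) ** 2)
grad = lambda w: A.T @ (A @ w - b)
w_opt = cg_train(loss, grad, np.zeros(10))
print("final loss:", loss(w_opt))
```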


MENDEL ◽  
2017 ◽  
Vol 23 (1) ◽  
pp. 41-48
Author(s):  
Marco Castellani ◽  
Rahul Lalchandani

This paper investigates the effectiveness and efficiency of two competitive (predator-prey) evolutionary procedures for training multi-layer perceptron classifiers: Co-Adaptive Neural Network Training, and a modified version of Co-Evolutionary Neural Network Training. The study focused on how the performance of the two procedures varies as the size of the training set increases, and on their ability to redress class imbalance problems of increasing severity. Compared to the customary backpropagation algorithm and a standard evolutionary algorithm, the two competitive procedures excelled in terms of solution quality and execution speed. Co-Adaptive Neural Network Training excelled on class imbalance problems and on classification problems with moderately large training sets. Co-Evolutionary Neural Network Training performed best on the largest data sets. The size of the training set was the most problematic issue for the backpropagation algorithm and the standard evolutionary algorithm, in terms of solution accuracy and execution speed respectively. Backpropagation and the evolutionary algorithm were also not competitive on the class imbalance problems, where data oversampling could only partially remedy their shortcomings.
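For context, the sketch below implements only the plain evolutionary-algorithm baseline mentioned in the comparison: a population of MLP weight vectors evolved by truncation selection and Gaussian mutation. The co-adaptive and co-evolutionary predator-prey procedures themselves, as well as the paper's data sets, are not reproduced here; all numerical settings are illustrative.

```python
# Minimal sketch of a standard evolutionary algorithm for MLP training (the
# baseline in the comparison above, not the predator-prey procedures): evolve
# flat weight vectors by truncation selection and Gaussian mutation.
import numpy as np

rng = np.random.default_rng(2)
hidden = 8

def mlp_predict(w, X):
    """Tiny one-hidden-layer MLP; weights are unpacked from a flat vector."""
    d = X.shape[1]
    W1 = w[:d * hidden].reshape(d, hidden)
    b1 = w[d * hidden:d * hidden + hidden]
    W2 = w[d * hidden + hidden:-1]
    b2 = w[-1]
    h = np.tanh(X @ W1 + b1)
    return (h @ W2 + b2 > 0).astype(int)

def fitness(w, X, y):
    return np.mean(mlp_predict(w, X) == y)       # classification accuracy

# Toy two-class data set (XOR-like quadrant labels).
X = rng.normal(size=(400, 2))
y = (X[:, 0] * X[:, 1] > 0).astype(int)

n_w = 2 * hidden + hidden + hidden + 1           # number of MLP parameters
pop = rng.normal(size=(40, n_w))                 # initial population
for gen in range(200):
    scores = np.array([fitness(w, X, y) for w in pop])
    parents = pop[np.argsort(scores)[-10:]]      # truncation selection: keep top 25%
    children = np.repeat(parents, 3, axis=0) + 0.1 * rng.normal(size=(30, n_w))
    pop = np.vstack([parents, children])         # elitism: parents survive unmutated
print("best training accuracy:", max(fitness(w, X, y) for w in pop))
```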


Author(s):  
Yasufumi Sakai ◽  
Yutaka Tamiya

Recent advances in deep neural networks have achieved higher accuracy with more complex models. Nevertheless, they require much longer training time. To reduce the training time, training methods using quantized weights, activations, and gradients have been proposed. Neural network calculation in integer format improves the energy efficiency of hardware for deep learning models. Therefore, training methods for deep neural networks with fixed point formats have been proposed. However, the narrow data representation range of the fixed point format degrades neural network accuracy. In this work, we propose a new fixed point format named shifted dynamic fixed point (S-DFP) to prevent accuracy degradation in quantized neural network training. S-DFP can change the data representation range of the dynamic fixed point format by adding a bias to the exponent. We evaluated the effectiveness of S-DFP for quantized neural network training on the ImageNet task using ResNet-34, ResNet-50, ResNet-101 and ResNet-152. For example, the accuracy of quantized ResNet-152 is improved from 76.6% with conventional 8-bit DFP to 77.6% with 8-bit S-DFP.
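The following sketch is one interpretation of the quantization described above, not the paper's implementation: an 8-bit dynamic fixed point (DFP) quantizer in which the whole tensor shares a power-of-two scale, with S-DFP modeled as a constant bias added to that shared exponent. The bias values and tensor statistics are illustrative assumptions.

```python
# Minimal sketch (an interpretation, not the paper's implementation) of 8-bit
# dynamic fixed point (DFP) quantization and the shifted variant (S-DFP):
# the whole tensor shares one exponent, and S-DFP adds a constant bias to it.
import numpy as np

def quantize_dfp(x, bits=8, exp_bias=0):
    """Quantize a tensor to signed fixed point with a shared power-of-two scale."""
    # Shared exponent chosen so the largest magnitude fits in (bits - 1) integer bits.
    exp = int(np.ceil(np.log2(np.max(np.abs(x)) + 1e-12))) - (bits - 1)
    exp += exp_bias                      # S-DFP: shift the representation range
    scale = 2.0 ** exp
    q = np.clip(np.round(x / scale), -(2 ** (bits - 1)), 2 ** (bits - 1) - 1)
    return q * scale                     # dequantized value for inspection

w = np.random.default_rng(3).normal(scale=0.01, size=1000)   # small weights/gradients
for bias in (0, -2):                     # negative bias trades range for finer resolution
    err = np.mean((w - quantize_dfp(w, bits=8, exp_bias=bias)) ** 2)
    print(f"exponent bias {bias:+d}: mean squared quantization error {err:.3e}")
```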


Author(s):  
Sohrab Khanmohammadi ◽  
Sayyed Mahdi Hosseini

In this paper, a new approach to neural network training is introduced in which the output of the middle (hidden) layer of the neural network is used to update weights through a competition procedure. The output layer's weights are modified with a multi-layer perceptron (MLP) policy. This learning method is applied to two systems as case studies. The first is the condition monitoring of an industrial machine, where the results are compared with other training methods such as MLP or radial basis function (RBF) networks; oil analysis data are used for condition monitoring, gathered using a ten-stage technique. The second is stock prediction, where the data are highly nonlinear and normally unpredictable, especially when the markets are affected by political factors. The simulation results are analyzed and compared with other methods.
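The abstract does not specify the competition procedure in detail, so the sketch below is a highly schematic, assumption-laden reading: the most strongly responding hidden unit wins and its weight vector is moved toward the current input (Kohonen-style), while the output layer is updated with an ordinary delta rule. It should not be read as the authors' algorithm.

```python
# Highly schematic sketch of a hybrid update of the kind described above.
# All details (winner-take-all rule, delta rule, toy data) are assumptions.
import numpy as np

rng = np.random.default_rng(4)
d, hidden, lr = 4, 6, 0.05
W1 = rng.normal(scale=0.5, size=(hidden, d))      # hidden-layer weight vectors
W2 = rng.normal(scale=0.5, size=hidden)           # output-layer weights

X = rng.normal(size=(500, d))
y = (X[:, 0] + X[:, 1] > 0).astype(float)         # toy binary target

for epoch in range(20):
    for x, t in zip(X, y):
        h = np.tanh(W1 @ x)                       # hidden-layer outputs
        # Competition: the hidden unit with the strongest response wins and
        # moves its weight vector toward the current input (Kohonen-style).
        winner = np.argmax(np.abs(h))
        W1[winner] += lr * (x - W1[winner])
        # Output layer: standard delta-rule update on the prediction error.
        p = 1.0 / (1.0 + np.exp(-(W2 @ h)))
        W2 += lr * (t - p) * h

H = np.tanh(X @ W1.T)
print("training accuracy:", np.mean(((H @ W2) > 0) == (y > 0.5)))
```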


2021 ◽  
Vol 12 (1) ◽  
Author(s):  
Stephen Whitelam ◽  
Viktor Selin ◽  
Sang-Won Park ◽  
Isaac Tamblyn

We show analytically that training a neural network by conditioned stochastic mutation or neuroevolution of its weights is equivalent, in the limit of small mutations, to gradient descent on the loss function in the presence of Gaussian white noise. Averaged over independent realizations of the learning process, neuroevolution is equivalent to gradient descent on the loss function. We use numerical simulation to show that this correspondence can be observed for finite mutations, for shallow and deep neural networks. Our results provide a connection between two families of neural-network training methods that are usually considered to be fundamentally different.
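A minimal numerical sketch of the correspondence stated above: conditioned stochastic mutation (accept a small Gaussian mutation of the weights only if the loss does not increase) is compared with gradient descent plus Gaussian white noise on a toy quadratic loss, averaged over independent runs. The matched step size (taken here as sigma squared over two) and all other settings are illustrative choices, not the paper's exact prescription.

```python
# Minimal numerical sketch: conditioned stochastic mutation vs. gradient descent
# with Gaussian noise on a toy quadratic loss. Step sizes are illustrative.
import numpy as np

rng = np.random.default_rng(5)
loss = lambda w: 0.5 * np.sum(w ** 2)
grad = lambda w: w
sigma, steps, runs, dim = 0.01, 2000, 200, 10

final_evo, final_gd = [], []
for _ in range(runs):
    w_evo = np.ones(dim)
    w_gd = np.ones(dim)
    for _ in range(steps):
        # Neuroevolution: propose a small mutation, keep it only if loss does not rise.
        trial = w_evo + sigma * rng.normal(size=dim)
        if loss(trial) <= loss(w_evo):
            w_evo = trial
        # Noisy gradient descent with a roughly matched step size (illustrative).
        w_gd = w_gd - (sigma ** 2 / 2) * grad(w_gd) + (sigma / np.sqrt(2)) * rng.normal(size=dim)
    final_evo.append(loss(w_evo))
    final_gd.append(loss(w_gd))

print("mean final loss, neuroevolution:        ", np.mean(final_evo))
print("mean final loss, noisy gradient descent:", np.mean(final_gd))
```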

