RANDOM NEURAL NETWORK LEARNING HEURISTICS – CORRIGENDUM

2017 ◽  
Vol 32 (3) ◽  
pp. 482-482
Author(s):  
Abbas Javed ◽  
Hadi Larijani ◽  
Ali Ahmadinia ◽  
Rohinton Emmanuel

2017 ◽  
Vol 31 (4) ◽  
pp. 436-456 ◽  
Author(s):  
Abbas Javed ◽  
Hadi Larijani ◽  
Ali Ahmadinia ◽  
Rohinton Emmanuel

The random neural network (RNN) is a probabilistic, queueing-theory-based model for artificial neural networks, and it requires the use of optimization algorithms for training. Commonly used gradient descent learning algorithms may get stuck in local minima; evolutionary algorithms can be used to avoid them. Other techniques such as artificial bee colony (ABC), particle swarm optimization (PSO), and differential evolution also perform well in finding the global minimum, but they converge slowly. The sequential quadratic programming (SQP) optimization algorithm can find the optimum neural network weights, but it too can get stuck in local minima. We propose to overcome the shortcomings of these approaches by hybridizing ABC and PSO with SQP. The resulting algorithm is shown to compare favorably with other known techniques for training the RNN. The results show that hybrid ABC learning with SQP outperforms the other training algorithms in terms of mean-squared error and normalized root-mean-squared error.
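The paper's RNN and ABC/PSO implementations are not reproduced here; the following is a minimal sketch of the two-phase idea only, under stated stand-ins: SciPy's differential evolution (one of the evolutionary methods the abstract mentions) plays the global phase, SLSQP plays the SQP-style refiner, and a toy least-squares objective replaces the RNN training loss.

```python
# Minimal sketch of the hybrid two-phase training idea (not the paper's
# implementation): a global evolutionary search followed by SQP-style
# local refinement. differential_evolution stands in for ABC/PSO and a
# toy one-hidden-layer least-squares fit stands in for the RNN.
import numpy as np
from scipy.optimize import differential_evolution, minimize

rng = np.random.default_rng(0)
X = rng.uniform(-1.0, 1.0, size=(64, 2))      # toy inputs
y = np.sin(X[:, 0]) + 0.5 * X[:, 1]           # toy targets

def mse(w):
    """Mean-squared error of a tiny one-hidden-layer surrogate model."""
    W1, W2, b = w[:6].reshape(2, 3), w[6:9], w[9]
    h = np.tanh(X @ W1)
    return np.mean((h @ W2 + b - y) ** 2)

bounds = [(-2.0, 2.0)] * 10

# Phase 1: global evolutionary search, which avoids poor local minima
# but converges slowly near the optimum.
coarse = differential_evolution(mse, bounds, seed=0, maxiter=50)

# Phase 2: fast SQP-style local refinement from the best global candidate.
fine = minimize(mse, coarse.x, method="SLSQP", bounds=bounds)

print(f"global-phase MSE {coarse.fun:.5f} -> refined MSE {fine.fun:.5f}")
```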


2011 ◽  
Vol 131 (11) ◽  
pp. 1889-1894
Author(s):  
Yuta Tsuchida ◽  
Michifumi Yoshioka

Entropy ◽  
2021 ◽  
Vol 23 (6) ◽  
pp. 711
Author(s):  
Mina Basirat ◽  
Bernhard C. Geiger ◽  
Peter M. Roth

Information plane analysis, which tracks the mutual information between the input and a hidden layer and between a hidden layer and the target over the course of training, has recently been proposed as a tool for analyzing the training of neural networks. Since the activations of a hidden layer are typically continuous-valued, this mutual information cannot be computed analytically and must be estimated, which has led to apparently inconsistent or even contradictory results in the literature. The goal of this paper is to demonstrate how information plane analysis can still be a valuable tool for analyzing neural network training. To this end, we complement the prevailing binning estimator for mutual information with a geometric interpretation. With this geometric interpretation in mind, we evaluate the impact of regularization and interpret phenomena such as underfitting and overfitting. In addition, we investigate neural network learning in the presence of noisy data and noisy labels.
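As a concrete illustration of the binning estimator the abstract refers to, here is a minimal sketch: continuous hidden activations are quantized into equal-width bins, each binned activation vector is treated as one discrete symbol, and I(T; Y) is computed from the empirical joint histogram. The toy activations and labels are assumptions for the demo, not the paper's setup.

```python
# Hedged sketch of a binning estimator for mutual information between
# a hidden layer T (continuous activations, quantized) and labels Y.
import numpy as np

def binned_mutual_information(activations, labels, n_bins=30):
    """Estimate I(T; Y) in bits by discretizing activations into bins."""
    # Quantize every activation dimension into equal-width bins.
    edges = np.linspace(activations.min(), activations.max(), n_bins + 1)
    digitized = np.digitize(activations, edges[1:-1])        # (n, d) ints
    # Treat each binned activation vector as one discrete symbol.
    _, t = np.unique(digitized, axis=0, return_inverse=True)
    _, y = np.unique(labels, return_inverse=True)
    # Empirical joint distribution over (T, Y).
    joint = np.zeros((t.max() + 1, y.max() + 1))
    np.add.at(joint, (t, y), 1.0)
    p_ty = joint / joint.sum()
    p_t = p_ty.sum(axis=1, keepdims=True)
    p_y = p_ty.sum(axis=0, keepdims=True)
    nz = p_ty > 0
    return float(np.sum(p_ty[nz] * np.log2(p_ty[nz] / (p_t @ p_y)[nz])))

# Toy usage: label-dependent activations of a hypothetical hidden layer.
rng = np.random.default_rng(1)
labels = rng.integers(0, 2, size=1000)
acts = labels[:, None] + 0.5 * rng.normal(size=(1000, 3))
print(f"I(T; Y) ~ {binned_mutual_information(acts, labels):.3f} bits")
```

Note that the estimate depends on the bin count, which is exactly the estimator sensitivity behind the inconsistent results the abstract mentions.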


1994 ◽  
Vol 04 (01) ◽  
pp. 23-51 ◽  
Author(s):  
Jeroen Dehaene ◽ 
Joos Vandewalle

A number of matrix flows, based on isospectral and isodirectional flows, are studied and modified so that they can be implemented locally on a network structure. The flows converge to matrices with a predefined spectrum and with eigenvectors determined by an external signal. Such flows are useful for adaptive signal processing applications and are applied here to neural network learning.
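The paper's specific flows are not given in the abstract; as a hedged illustration of the same family, the sketch below integrates Brockett's double-bracket flow dH/dt = [H, [H, N]], a standard isospectral flow that preserves the eigenvalues of H while driving it toward a diagonal matrix aligned with a fixed matrix N.

```python
# Brockett's double-bracket flow dH/dt = [H, [H, N]]: an isospectral flow
# that preserves the spectrum of H (exactly in continuous time, only
# approximately under the forward-Euler steps used here) and drives H
# toward a diagonal matrix whose entries are ordered like those of N.
import numpy as np

def commutator(a, b):
    return a @ b - b @ a

rng = np.random.default_rng(2)
A = rng.normal(size=(4, 4))
H = (A + A.T) / 2.0                      # symmetric initial matrix
N = np.diag([4.0, 3.0, 2.0, 1.0])        # fixed diagonal "target" matrix

dt, steps = 1e-3, 50_000
for _ in range(steps):                   # forward-Euler integration
    H = H + dt * commutator(H, commutator(H, N))

print(np.round(H, 3))                      # now approximately diagonal
print(np.round(np.linalg.eigvalsh(H), 3))  # spectrum of H(0), preserved
```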

