Robust adaptive learning of feedforward neural networks via LMI optimizations

2012 ◽ Vol 31 ◽ pp. 33-45 ◽ Author(s): Xingjian Jing

2022 ◽ Vol 12 (1) ◽ pp. 0-0

The learning process of artificial neural networks is an important and complex task in supervised learning. The main difficulty in training a neural network is fine-tuning the best set of control parameters, namely the weights and biases. This paper presents a new training method based on hybrid particle swarm optimization with Multi-Verse Optimization (PMVO) to train feedforward neural networks. The hybrid algorithm searches the solution space more effectively, which proves efficient in reducing the problem of trapping in local minima. The performance of the proposed approach was compared with five evolutionary techniques and with standard backpropagation using momentum and an adaptive learning rate. The comparison was benchmarked and evaluated on six biomedical datasets. The results of the comparative study show that PMVO outperformed the other training methods on most datasets and can serve as an alternative to them.
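As a rough illustration of metaheuristic weight training of this kind, the sketch below optimizes the flattened weights of a small one-hidden-layer network with plain particle swarm optimization. It is not the authors' PMVO hybrid; the mean-squared-error fitness, the swarm parameters, and the toy XOR data are assumptions made only for the example.

```python
# Minimal sketch: training a small feedforward network's weights with plain
# particle swarm optimization (PSO). Illustrates metaheuristic weight search
# in general; it is NOT the PSO + Multi-Verse (PMVO) hybrid from the paper.
import numpy as np

rng = np.random.default_rng(0)

def forward(params, X, n_in, n_hid):
    # Unpack a flat parameter vector into one tanh hidden layer + sigmoid output.
    W1 = params[:n_in * n_hid].reshape(n_in, n_hid)
    b1 = params[n_in * n_hid:n_in * n_hid + n_hid]
    W2 = params[n_in * n_hid + n_hid:-1]
    b2 = params[-1]
    h = np.tanh(X @ W1 + b1)
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))

def fitness(params, X, y, n_in, n_hid):
    # Mean squared error used as the particle fitness (an assumption).
    return np.mean((forward(params, X, n_in, n_hid) - y) ** 2)

def pso_train(X, y, n_hid=5, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5):
    n_in = X.shape[1]
    dim = n_in * n_hid + n_hid + n_hid + 1          # W1 + b1 + W2 + b2
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest = pos.copy()
    pbest_f = np.array([fitness(p, X, y, n_in, n_hid) for p in pos])
    gbest = pbest[pbest_f.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos += vel
        f = np.array([fitness(p, X, y, n_in, n_hid) for p in pos])
        improved = f < pbest_f
        pbest[improved], pbest_f[improved] = pos[improved], f[improved]
        gbest = pbest[pbest_f.argmin()].copy()
    return gbest, n_in, n_hid

# Toy usage on XOR-like data.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0, 1, 1, 0], float)
best, n_in, n_hid = pso_train(X, y)
print(np.round(forward(best, X, n_in, n_hid), 2))
```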


2006 ◽ Vol 16 (07) ◽ pp. 1929-1950 ◽ Author(s): George D. Magoulas ◽ Michael N. Vrahatis

Networks of neurons can perform computations that even modern computers find very difficult to simulate. Most existing artificial neurons and artificial neural networks are considered biologically unrealistic; nevertheless, the practical success of the backpropagation algorithm and the powerful capabilities of feedforward neural networks have made neural computing very popular in several application areas. A challenging issue in this context is learning internal representations by adjusting the weights of the network connections. To this end, several first-order and second-order algorithms have been proposed in the literature. This paper provides an overview of approaches to backpropagation training, emphasizing first-order adaptive learning algorithms that build on the theory of nonlinear optimization, and proposes a framework for their analysis in the context of deterministic optimization.
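For a concrete picture of the kind of first-order adaptive learning-rate scheme surveyed here, the sketch below applies a simple "bold driver" heuristic (grow the step after a successful step, shrink and reject after a failed one) to a toy error surface. The rule, the step factors, and the test function are illustrative assumptions, not a specific algorithm analyzed in the paper.

```python
# Minimal sketch of a first-order adaptive learning-rate scheme: accept a step
# only if the error decreases, raising the rate on success and lowering it on
# failure ("bold driver" heuristic).
import numpy as np

def adaptive_gd(f, grad, w0, lr=0.1, up=1.1, down=0.5, iters=100):
    w = np.asarray(w0, float)
    best = f(w)
    for _ in range(iters):
        candidate = w - lr * grad(w)
        err = f(candidate)
        if err < best:              # successful step: accept, raise the rate
            w, best, lr = candidate, err, lr * up
        else:                       # failed step: reject, lower the rate
            lr *= down
    return w

# Toy usage: a 2-D quadratic "error surface" with minimum at (3, -1).
f = lambda w: (w[0] - 3) ** 2 + 10 * (w[1] + 1) ** 2
grad = lambda w: np.array([2 * (w[0] - 3), 20 * (w[1] + 1)])
print(np.round(adaptive_gd(f, grad, [0.0, 0.0]), 3))
```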


1998 ◽ Vol 10 (4) ◽ pp. 1007-1030 ◽ Author(s): J. Manuel Torres Moreno ◽ Mirta B. Gordon

This article presents a new incremental learning algorithm for classification tasks, called NetLines, which is well adapted to both binary and real-valued input patterns. It generates small, compact feedforward neural networks with one hidden layer of binary units and binary output units. A convergence theorem ensures that solutions with a finite number of hidden units exist for both binary and real-valued input patterns. An implementation for problems with more than two classes, valid for any binary classifier, is proposed. The generalization error and the size of the resulting networks are compared to the best published results on well-known classification benchmarks. Early stopping is shown to decrease overfitting without improving the generalization performance.
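The sketch below gives a generic constructive scheme in the same spirit: binary (sign) hidden units are added one at a time, each trained by a pocket-style perceptron, and an output perceptron is retrained over the growing hidden layer until the training set is learned or a cap is reached. The error-driven target rule, the pocket training, and the toy data are assumptions for illustration; this is not the NetLines growth rule from the paper.

```python
# Generic constructive sketch: grow a one-hidden-layer network of sign units
# incrementally. Illustrative only; NOT the NetLines algorithm itself.
import numpy as np

rng = np.random.default_rng(1)

def pocket_perceptron(X, y, epochs=200):
    # Perceptron with a "pocket": keep the weight vector with fewest errors.
    Xb = np.hstack([X, np.ones((len(X), 1))])        # bias column
    w = rng.normal(size=Xb.shape[1])
    best_w, best_err = w.copy(), np.inf
    for _ in range(epochs):
        for xi, yi in zip(Xb, y):
            if np.sign(xi @ w) != yi:
                w += yi * xi
        err = np.sum(np.sign(Xb @ w) != y)
        if err < best_err:
            best_w, best_err = w.copy(), err
    return best_w

def hidden_outputs(X, hidden):
    # Binary internal representation produced by the current hidden units.
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return np.column_stack([np.sign(Xb @ w) for w in hidden])

def grow_network(X, y, max_hidden=6):
    hidden, out_w = [], None
    for _ in range(max_hidden):
        if not hidden:
            target = y                                # first unit learns the task
        else:
            pred = np.sign(hidden_outputs(X, hidden) @ out_w[:-1] + out_w[-1])
            wrong = pred != y
            if not wrong.any():
                break                                 # training set learned
            target = np.where(wrong, y, -y)           # crude error-driven target
        hidden.append(pocket_perceptron(X, target))
        H = hidden_outputs(X, hidden)
        out_w = pocket_perceptron(H, y)               # retrain output unit
    return hidden, out_w

# Toy usage on XOR with +/-1 labels (not linearly separable, needs >1 unit).
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([-1, 1, 1, -1])
hidden, out_w = grow_network(X, y)
H = hidden_outputs(X, hidden)
print(np.sign(np.hstack([H, np.ones((len(X), 1))]) @ out_w))
```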

