Convergence of Batch Split-Complex Backpropagation Algorithm for Complex-Valued Neural Networks

2009 ◽  
Vol 2009 ◽  
pp. 1-16 ◽  
Author(s):  
Huisheng Zhang ◽  
Chao Zhang ◽  
Wei Wu

The batch split-complex backpropagation (BSCBP) algorithm for training complex-valued neural networks is considered. For a constant learning rate, it is proved that the error function of the BSCBP algorithm decreases monotonically during the training iteration process and that its gradient tends to zero. Under an additional mild condition, the weight sequence itself is also proved to be convergent. A numerical example is given to support the theoretical analysis.
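As a concrete illustration of the split-complex setting considered here, the sketch below performs one batch gradient step for a single complex-valued layer with a split tanh activation and a constant learning rate. The layer shape, the choice of tanh, and the function names are illustrative assumptions rather than the paper's exact network.

```python
import numpy as np

def split_tanh(z):
    """Split-complex activation: tanh applied separately to real and imaginary parts."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def bscbp_batch_step(W, X, D, eta):
    """One batch update of a single complex layer y = split_tanh(W @ x).

    W : (m, n) complex weights, X : (n, N) complex inputs, D : (m, N) targets,
    eta : constant learning rate. Returns updated weights and the batch error
    0.5 * sum |y - d|^2.
    """
    Z = W @ X
    E = split_tanh(Z) - D                      # complex output errors
    gr = 1.0 - np.tanh(Z.real) ** 2            # activation derivative, real part
    gi = 1.0 - np.tanh(Z.imag) ** 2            # activation derivative, imaginary part
    delta = E.real * gr + 1j * (E.imag * gi)
    grad = delta @ X.conj().T                  # Re/Im parts equal dE/dRe(W), dE/dIm(W)
    return W - eta * grad, 0.5 * np.sum(np.abs(E) ** 2)
```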

2005 ◽  
Vol 15 (06) ◽  
pp. 435-443 ◽  
Author(s):  
XIAOMING CHEN ◽  
ZHENG TANG ◽  
CATHERINE VARIAPPAN ◽  
SONGSONG LI ◽  
TOSHIMI OKADA

The complex-valued backpropagation algorithm has been widely used in fields such as telecommunications, speech recognition, and image processing with Fourier transformation. However, the learning process often gets trapped in local minima. To address this problem and to speed up learning, we propose a modified error function obtained by adding to the conventional error function a term corresponding to the hidden-layer error. Simulation results show that the proposed algorithm is capable of preventing the learning process from becoming stuck in local minima and of speeding up learning.
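The abstract does not give the exact form of the added hidden-layer term, so the following sketch shows only one plausible realization: the conventional output error plus a weighted penalty on saturated hidden activations, a common way to keep split-tanh units out of the flat regions where gradients vanish. The penalty form and the parameter lam are assumptions.

```python
import numpy as np

def modified_error(y_out, d, h_hidden, lam=0.1):
    """Hypothetical modified error: output error plus a hidden-layer term.

    y_out, d : complex network outputs and targets.
    h_hidden : complex outputs of the hidden layer (split-tanh units).
    lam      : weight of the added hidden-layer term (assumed).
    """
    e_out = 0.5 * np.sum(np.abs(y_out - d) ** 2)
    # Penalize hidden units whose real/imaginary parts sit near +/-1, where the
    # tanh derivative, and hence the backpropagated gradient, is nearly zero.
    e_hidden = np.sum(h_hidden.real ** 2 + h_hidden.imag ** 2)
    return e_out + lam * e_hidden
```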


2010 ◽  
Vol 22 (10) ◽  
pp. 2655-2677 ◽  
Author(s):  
Dongpo Xu ◽  
Huisheng Zhang ◽  
Lijun Liu

This letter presents a unified convergence analysis of the split-complex nonlinear gradient descent (SCNGD) learning algorithms for complex-valued recurrent neural networks, covering three classes of SCNGD algorithms: standard SCNGD, normalized SCNGD, and adaptive normalized SCNGD. We prove that if the activation functions are of split-complex type and certain conditions are satisfied, the error function is monotonically decreasing during the training iteration process, and the gradients of the error function with respect to the real and imaginary parts of the weights converge to zero. A strong convergence result is also obtained under the assumption that the error function has only a finite number of stationary points. Simulation results are given to support the theoretical analysis.
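The three variants differ only in how the step size is chosen, so a compact way to illustrate them is a single update routine over the real and imaginary weight parts. The particular normalizers below (the gradient norm, and the gradient norm scaled by the iteration counter) are illustrative assumptions; the paper's exact normalization and conditions may differ.

```python
import numpy as np

def scngd_step(w_r, w_i, g_r, g_i, eta, mode="standard", t=1, eps=1e-8):
    """One update of the real (w_r) and imaginary (w_i) weight parts given the
    corresponding gradients g_r, g_i, under three step-size rules."""
    g_norm = np.sqrt(np.sum(g_r ** 2) + np.sum(g_i ** 2))
    if mode == "standard":         # plain SCNGD: constant step size
        step = eta
    elif mode == "normalized":     # normalized SCNGD: scale by the gradient norm
        step = eta / (g_norm + eps)
    elif mode == "adaptive":       # adaptive normalized SCNGD: also decay with t
        step = eta / (np.sqrt(t) * (g_norm + eps))
    else:
        raise ValueError(f"unknown mode: {mode}")
    return w_r - step * g_r, w_i - step * g_i
```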


2021 ◽  
pp. 1-20
Author(s):  
Shao-Qun Zhang ◽  
Zhi-Hua Zhou

Current neural networks are mostly built on the MP model, which usually formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons. This letter proposes the flexible transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity. The FT model employs a pair of parameters to model the neurotransmitters between neurons and introduces a neuron-exclusive variable to record the regulated neurotrophin density. Thus, the FT model can be formulated as a two-variable, two-valued function, with the commonly used MP neuron model as a particular case. This modeling makes the FT model biologically more realistic and capable of handling complicated data, even spatiotemporal data. To exhibit its power and potential, we present the flexible transmitter network (FTNet), which is built on the most common fully connected feedforward architecture with the FT model as its basic building block. FTNet allows gradient calculation and can be implemented by an improved backpropagation algorithm in the complex-valued domain. Experiments on a broad range of tasks show that FTNet has power and potential in processing spatiotemporal data. This study provides an alternative basic building block for neural networks and exhibits the feasibility of developing artificial neural networks with neuronal plasticity.
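The abstract describes the FT neuron as a two-variable, two-valued function trained with complex-valued backpropagation, which suggests packing the weighted input and the stored neurotrophin variable into one complex number. The sketch below is a minimal reading along those lines; the activation choice (split tanh) and the variable names are assumptions, not the paper's exact formulation.

```python
import numpy as np

def ft_neuron(x_t, m_prev, w, v):
    """Sketch of a flexible-transmitter-style neuron.

    x_t    : real input vector at time t.
    m_prev : the neuron's memory (neurotrophin) variable from step t-1.
    w, v   : the pair of parameters acting on input and memory, respectively.
    Returns (s_t, m_t): transmitted signal and updated memory.
    """
    u = np.dot(w, x_t) + 1j * (v * m_prev)     # two variables packed as one complex value
    z = np.tanh(u.real) + 1j * np.tanh(u.imag)
    return z.real, z.imag                      # two-valued output

# With v fixed at 0 and the memory output ignored, the update collapses to the
# ordinary MP neuron s_t = tanh(w . x_t), consistent with the abstract's remark
# that the MP model is a particular case.
```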


2010 ◽  
Vol 2010 ◽  
pp. 1-27
Author(s):  
Huisheng Zhang ◽  
Dongpo Xu ◽  
Zhiping Wang

The online gradient method has been widely used in training neural networks. In this paper, we consider an online split-complex gradient algorithm for complex-valued neural networks with an adaptive learning rate. Under certain conditions, by first showing the monotonicity of the error function, it is proved that the gradient of the error function tends to zero and the weight sequence tends to a fixed point. A numerical example is given to support the theoretical findings.
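In contrast to the batch sketch earlier in this listing, the online variant updates the weights one sample at a time. The loop below is a minimal sketch: grad_fn is a hypothetical helper returning the split-complex gradient for a single pattern, and the decaying schedule eta0 / sqrt(t) is only one illustrative choice of adaptive learning rate.

```python
import numpy as np

def train_online(W, samples, eta0, grad_fn):
    """Online split-complex training loop with a decaying learning rate.

    samples : iterable of (x, d) input/target pairs, visited in order.
    grad_fn : callable (W, x, d) -> complex gradient whose real/imaginary parts
              are the gradients w.r.t. Re(W) and Im(W) (hypothetical helper).
    """
    for t, (x, d) in enumerate(samples, start=1):
        eta_t = eta0 / np.sqrt(t)      # adaptive (decaying) learning rate
        W = W - eta_t * grad_fn(W, x, d)
    return W
```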


1996 ◽  
Vol 8 (2) ◽  
pp. 451-460 ◽  
Author(s):  
Georg Thimm ◽  
Perry Moerland ◽  
Emile Fiesler

The backpropagation algorithm is widely used for training multilayer neural networks. In this publication, the gain of its activation function(s) is investigated. Specifically, it is proven that changing the gain of the activation function is equivalent to changing the learning rate and the weights. This simplifies the backpropagation learning rule by eliminating one of its parameters. The theorem can be extended to hold for some well-known variations of the backpropagation algorithm, such as using a momentum term, flat-spot elimination, or adaptive gain. Furthermore, it is successfully applied to compensate for the nonstandard gain of optical sigmoids in optical neural networks.
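For a single tanh neuron and one gradient step, the equivalence can be checked numerically: a network with gain c, weights w, and learning rate eta follows the same trajectory (up to the weight scaling) as a gain-1 network with weights c*w and learning rate c**2 * eta. The snippet below verifies this for one step under squared error; the scaling relation shown is a commonly stated form of the result, and the numbers are arbitrary.

```python
import numpy as np

def sgd_step(w, x, d, eta, gain):
    """One gradient step for a neuron y = tanh(gain * w.x) under 0.5*(y - d)^2."""
    y = np.tanh(gain * np.dot(w, x))
    grad = (y - d) * (1.0 - y ** 2) * gain * x    # chain rule through the gain
    return w - eta * grad

rng = np.random.default_rng(0)
x, d = rng.normal(size=3), 0.4
w, c, eta = rng.normal(size=3), 2.5, 0.01

w_gain = sgd_step(w, x, d, eta, gain=c)               # network with gain c
w_unit = sgd_step(c * w, x, d, c**2 * eta, gain=1.0)  # rescaled gain-1 network
assert np.allclose(c * w_gain, w_unit)                # identical trajectories
```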

