Flexible Transmitter Network

2021 ◽  
pp. 1-20
Author(s):  
Shao-Qun Zhang ◽  
Zhi-Hua Zhou

Abstract Current neural networks are mostly built on the McCulloch-Pitts (MP) model, which formulates the neuron as executing an activation function on the real-valued weighted aggregation of signals received from other neurons. This letter proposes the flexible transmitter (FT) model, a novel bio-plausible neuron model with flexible synaptic plasticity. The FT model employs a pair of parameters to model the neurotransmitters between neurons and introduces a neuron-exclusive variable to record the regulated neurotrophin density. The FT model can thus be formulated as a two-variable, two-valued function, with the commonly used MP neuron model as a special case. This modeling manner makes the FT model biologically more realistic and capable of handling complicated data, even spatiotemporal data. To demonstrate its power and potential, we present the flexible transmitter network (FTNet), which is built on the most common fully connected feedforward architecture with the FT model as its basic building block. FTNet allows gradient calculation and can be trained by an improved backpropagation algorithm in the complex-valued domain. Experiments on a broad range of tasks show that FTNet is powerful and promising for processing spatiotemporal data. This study provides an alternative basic building block for neural networks and exhibits the feasibility of developing artificial neural networks with neuronal plasticity.
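To make the formulation concrete, here is a minimal NumPy sketch of a single FT-style neuron as a two-variable, two-valued function. The pairing of the input signal with the neuron-exclusive memory variable in a complex pre-activation, and the tanh activation, are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np

def ft_neuron(x, m_prev, w, v):
    """One step of a flexible-transmitter-style neuron (illustrative sketch).

    x      : real-valued signals received from other neurons
    m_prev : the neuron-exclusive memory variable from the previous step
    w, v   : the pair of transmitter parameters (input and memory weights)

    Returns (s, m): the transmitted signal and the updated memory variable,
    read off as the real and imaginary parts of a complex activation.
    """
    z = np.tanh(np.dot(w, x) + 1j * v * m_prev)  # two-variable complex pre-activation
    return z.real, z.imag                        # two-valued output (signal, memory)

# Toy usage: unroll one neuron along a short input sequence.
rng = np.random.default_rng(0)
w, v, m = rng.normal(size=3), 0.5, 0.0
for x in rng.normal(size=(4, 3)):
    s, m = ft_neuron(x, m, w, v)
```

Setting v = 0 and keeping only the real part recovers an MP-style neuron, which is the sense in which the MP model appears as a special case.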

2004 ◽  
Vol 16 (12) ◽  
pp. 2699-2713 ◽  
Author(s):  
Su Lee Goh ◽  
Danilo P. Mandic

A complex-valued real-time recurrent learning (CRTRL) algorithm for the class of nonlinear adaptive filters realized as fully connected recurrent neural networks is introduced. The proposed CRTRL is derived for a general complex activation function of a neuron, which makes it suitable for nonlinear adaptive filtering of complex-valued nonlinear and nonstationary signals and complex signals with strong component correlations. In addition, this algorithm is generic and represents a natural extension of the real-valued RTRL. Simulations on benchmark and real-world complex-valued signals support the approach.
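A minimal NumPy sketch of the recursion follows, assuming a fully complex tanh activation and an instantaneous squared-error cost; the conjugation in the weight update follows the Wirtinger-calculus gradient and may differ in detail from the paper's derivation.

```python
import numpy as np

def crtrl_step(W, P, y, x, d, eta=0.05):
    """One CRTRL update for a fully connected recurrent layer (illustrative).

    W : (N, N+M) complex weights      P : (N, N, N+M) sensitivities dy_j/dw_kl
    y : (N,) current state            x : (M,) external input
    d : (N,) desired output at this time step
    """
    z = np.concatenate([y, x])                   # augmented input vector
    net = W @ z
    y_new = np.tanh(net)                         # fully complex activation
    fprime = 1.0 - y_new ** 2                    # complex derivative of tanh

    N = y.size
    # Sensitivity recursion: P'[j,k,l] = f'(net_j)(delta_jk z_l + sum_p W[j,p] P[p,k,l])
    P_new = np.einsum('jp,pkl->jkl', W[:, :N], P)
    P_new[np.arange(N), np.arange(N), :] += z[None, :]
    P_new *= fprime[:, None, None]

    e = d - y_new                                # instantaneous output error
    W = W + eta * np.einsum('j,jkl->kl', e, np.conj(P_new))
    return W, P_new, y_new

# Usage: sensitivities start at zero and the recursion runs along the signal.
rng = np.random.default_rng(0)
N, M = 2, 1
W = 0.1 * (rng.normal(size=(N, N + M)) + 1j * rng.normal(size=(N, N + M)))
P = np.zeros((N, N, N + M), dtype=complex)
y = np.zeros(N, dtype=complex)
for t in range(20):
    x = np.array([np.exp(1j * 0.3 * t)])         # toy nonstationary complex input
    d = np.full(N, np.exp(1j * 0.3 * (t + 1)))   # teacher: the next sample
    W, P, y = crtrl_step(W, P, y, x, d)
```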


2005 ◽  
Vol 15 (01-02) ◽  
pp. 129-135 ◽  
Author(s):  
MITSUO YOSHIDA ◽  
YASUAKI KUROE ◽  
TAKEHIRO MORI

Recently, models of neural networks that can directly deal with complex numbers, complex-valued neural networks, have been proposed, and several studies of their information-processing abilities have been carried out. Models of neural networks that can deal with quaternions, an extension of complex numbers, have also been proposed; however, these have all been multilayer quaternion neural networks. This paper proposes models of fully connected recurrent quaternion neural networks, Hopfield-type quaternion neural networks. Since quaternion multiplication is noncommutative, several different models can be considered. We investigate the dynamics of the proposed models from the viewpoint of the existence of an energy function and derive conditions for its existence.
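The noncommutativity is easy to check numerically, and it is exactly why weights applied from the left (w x) and from the right (x w) give genuinely different network models. A small sketch of the Hamilton product:

```python
import numpy as np

def qmul(p, q):
    """Hamilton product of quaternions stored as (a, b, c, d) = a + bi + cj + dk."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return np.array([
        a1 * a2 - b1 * b2 - c1 * c2 - d1 * d2,
        a1 * b2 + b1 * a2 + c1 * d2 - d1 * c2,
        a1 * c2 - b1 * d2 + c1 * a2 + d1 * b2,
        a1 * d2 + b1 * c2 - c1 * b2 + d1 * a2,
    ])

w = np.array([0.0, 1.0, 0.0, 0.0])   # the unit quaternion i
x = np.array([0.0, 0.0, 1.0, 0.0])   # the unit quaternion j

print(qmul(w, x))   # i * j =  k  ->  [0. 0. 0.  1.]
print(qmul(x, w))   # j * i = -k  ->  [0. 0. 0. -1.]
```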


2020 ◽  
Vol 32 (11) ◽  
pp. 2237-2248
Author(s):  
Masaki Kobayashi

A complex-valued Hopfield neural network (CHNN) with a multistate activation function is a multistate model of neural associative memory whose weight parameters require substantial memory resources. Twin-multistate activation functions were introduced to quaternion- and bicomplex-valued Hopfield neural networks, but since their architectures are much more complicated than that of a CHNN, a simpler architecture is desirable. In this work, the number of weight parameters is reduced by a bicomplex projection rule for CHNNs, which is given by the decomposition of bicomplex-valued Hopfield neural networks. Computer simulations support that the noise tolerance of a CHNN with the bicomplex projection rule is equal to or even better than that of quaternion- and bicomplex-valued Hopfield neural networks. By computer simulations, we also find that the projection rule for hyperbolic-valued Hopfield neural networks in synchronous mode maintains a high noise tolerance.
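For readers unfamiliar with the projection rule, here is a minimal sketch for a plain CHNN associative memory; the multistate quantizer and the pattern setup are illustrative, and the bicomplex decomposition itself is not reproduced here.

```python
import numpy as np

def csign(z, K):
    """Quantize complex values onto the K multistate phasors (illustrative)."""
    step = 2 * np.pi / K
    return np.exp(1j * step * np.round(np.angle(z) / step))

def projection_rule(S):
    """Projection-rule weights for the stored pattern matrix S (N x P)."""
    return S @ np.linalg.pinv(S)      # W = S S^+ = S (S^H S)^{-1} S^H

# Store two random 8-neuron, 4-state patterns and recall one synchronously.
rng = np.random.default_rng(1)
K, N = 4, 8
S = np.exp(1j * 2 * np.pi * rng.integers(0, K, size=(N, 2)) / K)
W = projection_rule(S)
recalled = csign(W @ S[:, 0], K)
print(np.allclose(recalled, S[:, 0]))  # True: stored patterns are fixed points
```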


Author(s):  
Anjar Wanto ◽  
Agus Perdana Windarto ◽  
Dedy Hartama ◽  
Iin Parlina

Artificial neural networks (ANNs) are often used to solve forecasting problems, as in this study, which uses an ANN with the backpropagation algorithm. The study focuses on forecasting population density by district in Simalungun Regency, Indonesia, over 2010-2015, with data from the Central Bureau of Statistics of Simalungun Regency. Future population density is forecast using the backpropagation algorithm with the binary sigmoid function (logsig) in the hidden layers and the linear identity function (purelin) at the output, across five network architectures: 3-5-1, 3-10-1, 3-5-10-1, 3-5-15-1, and 3-10-15-1. Results from the five architectures vary greatly, but the best is the 3-5-1 model, with an accuracy of 94%, an MSE of 0.0025448, and convergence after 6843 epochs. The use of the binary sigmoid activation function (logsig) and the identity function (purelin) in backpropagation neural networks for forecasting population density is therefore very good, as evidenced by the high accuracy achieved.
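A minimal NumPy sketch of the best-performing 3-5-1 configuration (logsig hidden layer, purelin output) follows; the sinusoidal stand-in series is used because the BPS data are not included here, and biases are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for the density series: predict x[t] from the previous three values.
t = np.arange(43)
series = 0.5 + 0.4 * np.sin(0.3 * t)
X = np.array([series[i:i + 3] for i in range(len(series) - 3)])
d = series[3:].reshape(-1, 1)

logsig = lambda a: 1.0 / (1.0 + np.exp(-a))   # binary sigmoid (hidden layer)
W1 = rng.normal(scale=0.5, size=(3, 5))       # 3 inputs -> 5 hidden units
W2 = rng.normal(scale=0.5, size=(5, 1))       # 5 hidden -> 1 linear (purelin) output

eta = 0.1
for epoch in range(5000):
    h = logsig(X @ W1)                        # hidden activations
    y = h @ W2                                # purelin output
    e = y - d
    # Backpropagation: purelin'(a) = 1, logsig'(a) = h * (1 - h)
    g2 = (h.T @ e) / len(X)
    g1 = (X.T @ ((e @ W2.T) * h * (1 - h))) / len(X)
    W2 -= eta * g2
    W1 -= eta * g1

print(f"final MSE: {np.mean(e ** 2):.6f}")
```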


2021 ◽  
pp. 1-15
Author(s):  
Masaki Kobayashi

Abstract A complex-valued Hopfield neural network (CHNN) is a multistate Hopfield model. A quaternion-valued Hopfield neural network (QHNN) with a twin-multistate activation function was proposed to reduce the number of weight parameters of a CHNN. Dual connections (DCs) were introduced to QHNNs to improve the noise tolerance. The DCs take advantage of the noncommutativity of quaternions and consist of two weights between neurons. A QHNN with DCs provides much better noise tolerance than a CHNN. Although a CHNN and a QHNN with DCs have the same number of weight parameters, the storage capacity of the projection rule for QHNNs with DCs is half of that for CHNNs and equals that of conventional QHNNs. The small storage capacity of QHNNs with DCs is caused by the projection rule, not the architecture. In this work, the Hebbian rule is introduced, and it is proved by stochastic analysis that the storage capacity of a QHNN with DCs is 0.8 times that of a CHNN.
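For context, the Hebbian (outer-product) rule that replaces the projection rule is sketched below for the complex-valued case; in the dual-connection quaternionic setting, the two weights per neuron pair would be learned analogously, which is not reproduced here.

```python
import numpy as np

def hebbian_rule(S):
    """Hebbian weights for the stored pattern matrix S (N x P), complex-valued."""
    N = S.shape[0]
    W = (S @ S.conj().T) / N          # W = (1/N) sum_p x_p x_p^H
    np.fill_diagonal(W, 0.0)          # no self-connections
    return W
```

Unlike the projection rule, the Hebbian rule does not orthogonalize the stored patterns, so its capacity scales differently; that difference is what the stochastic analysis quantifies.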


2009 ◽  
Vol 2009 ◽  
pp. 1-16 ◽  
Author(s):  
Huisheng Zhang ◽  
Chao Zhang ◽  
Wei Wu

The batch split-complex backpropagation (BSCBP) algorithm for training complex-valued neural networks is considered. For a constant learning rate, it is proved that the error function of the BSCBP algorithm decreases monotonically during training and that the gradient of the error function tends to zero. Under a moderate additional condition, the weight sequence itself is also proved to converge. A numerical example is given to support the theoretical analysis.
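A minimal single-layer sketch of the split-complex gradient computation is given below; the layer shape, tanh choice, and batch handling are illustrative assumptions rather than the paper's exact setup.

```python
import numpy as np

def split_tanh(z):
    """Split-complex activation: the real function applied to each component."""
    return np.tanh(z.real) + 1j * np.tanh(z.imag)

def bscbp_epoch(W, X, D, eta=0.1):
    """One batch gradient step for a single split-complex layer (illustrative).

    W : (K, N) complex weights;  X : (N, B) batch of complex inputs
    D : (K, B) complex targets.  Returns the updated W and the batch error.
    """
    Z = W @ X
    E = split_tanh(Z) - D
    # The real and imaginary channels decouple through the split activation:
    gR = E.real * (1.0 - np.tanh(Z.real) ** 2)   # dE/d(Re Z)
    gI = E.imag * (1.0 - np.tanh(Z.imag) ** 2)   # dE/d(Im Z)
    # Chain rule through Z = W X with W = WR + i WI, X = XR + i XI:
    gradR = gR @ X.real.T + gI @ X.imag.T        # dE/dWR
    gradI = -gR @ X.imag.T + gI @ X.real.T       # dE/dWI
    W = W - eta * (gradR + 1j * gradI) / X.shape[1]
    return W, 0.5 * np.mean(np.abs(E) ** 2)

# With a small constant learning rate, the batch error decreases monotonically.
rng = np.random.default_rng(0)
X = rng.normal(size=(3, 16)) + 1j * rng.normal(size=(3, 16))
D = split_tanh(0.5 * X[:2])                      # reachable toy targets
W = 0.1 * (rng.normal(size=(2, 3)) + 1j * rng.normal(size=(2, 3)))
for _ in range(200):
    W, err = bscbp_epoch(W, X, D)
```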


Author(s):  
Tohru Nitta

The usual real-valued artificial neural networks have been applied to various fields such as telecommunications, robotics, bioinformatics, image processing, and speech recognition, in which complex numbers (two dimensions) are often used, for example with the Fourier transformation. This indicates the usefulness of complex-valued neural networks, whose input and output signals and whose parameters such as weights and thresholds are all complex numbers; they are an extension of the usual real-valued neural networks. In addition, in the human brain, an action potential may have different pulse patterns, and the distance between pulses may differ. This suggests that it is appropriate to introduce complex numbers, representing phase and amplitude, into neural networks. Aizenberg, Ivaskiv, Pospelov, and Hudiakov (1971) (former Soviet Union) proposed a complex-valued neuron model for the first time, and although their work was available only in Russian literature, it can now be read in English (Aizenberg, Aizenberg & Vandewalle, 2000). Before that, most researchers outside Russia had assumed that the first to propose a complex-valued neuron were Widrow, McCool, and Ball (1975). Interest in the field of neural networks started to grow around 1990, and various types of complex-valued neural network models were subsequently proposed. Since then, their characteristics have been studied, making it possible to solve some problems that could not be solved with real-valued neurons, and to solve many complicated problems more simply and efficiently.


2005 ◽  
Vol 15 (06) ◽  
pp. 435-443 ◽  
Author(s):  
XIAOMING CHEN ◽  
ZHENG TANG ◽  
CATHERINE VARIAPPAN ◽  
SONGSONG LI ◽  
TOSHIMI OKADA

The complex-valued backpropagation algorithm has been widely used in fields such as telecommunications, speech recognition, and image processing with the Fourier transformation. However, the local minima problem often occurs during learning. To address this problem and to speed up the learning process, we propose a modified error function obtained by adding a term corresponding to the hidden-layer error to the conventional error function. Simulation results show that the proposed algorithm is capable of preventing learning from becoming stuck in local minima and of speeding up learning.
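The abstract does not give the exact form of the added term, so the sketch below only illustrates the general shape of such a modification: a conventional complex-valued squared error plus a weighted auxiliary penalty on the hidden layer. The penalty shown (pushing hidden magnitudes away from saturation) is a hypothetical placeholder, not the authors' term.

```python
import numpy as np

def modified_error(y, d, h, lam=0.1):
    """Conventional complex error plus an auxiliary hidden-layer term (illustrative).

    y : network outputs, d : targets, h : hidden activations (all complex arrays).
    The hidden term below is a placeholder; the authors' exact term is not
    specified in the abstract.
    """
    e_out = 0.5 * np.sum(np.abs(y - d) ** 2)              # conventional error
    e_hidden = 0.5 * np.sum((np.abs(h) ** 2 - 0.5) ** 2)  # placeholder hidden term
    return e_out + lam * e_hidden
```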

