MULTILAYER PERCEPTRONS TO APPROXIMATE COMPLEX VALUED FUNCTIONS

1995, Vol 06 (04), pp. 435-446
Author(s): P. ARENA, L. FORTUNA, R. RE, M.G. XIBILIA

In this paper, the approximation capabilities of different complex feedforward neural network structures reported in the literature are analyzed theoretically. In particular, a new density theorem is proven for Complex Multilayer Perceptrons with complex-valued, non-analytic sigmoidal activation functions. This result makes Multilayer Perceptrons with complex-valued neurons universal interpolators of continuous complex-valued functions. Moreover, the approximation properties of superpositions of analytic activation functions are investigated, and it is proven that such combinations are not dense in the set of continuous complex-valued functions. Several numerical examples are also reported to show the advantages of Complex Multilayer Perceptrons over the classical real MLP in terms of computational complexity.
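
As a rough illustration of the kind of network the density theorem covers, the sketch below implements a complex MLP forward pass in NumPy with a non-analytic "split" sigmoid (the real sigmoid applied separately to real and imaginary parts). The layer sizes, random weights, and the specific choice of activation are illustrative assumptions, not details taken from the paper.

    import numpy as np

    def split_sigmoid(z):
        # Non-analytic complex sigmoid: the real sigmoid is applied
        # separately to the real and imaginary parts of z.
        sig = lambda t: 1.0 / (1.0 + np.exp(-t))
        return sig(z.real) + 1j * sig(z.imag)

    def complex_mlp(x, weights, biases):
        # Forward pass of a complex-valued multilayer perceptron:
        # every weight, bias, input, and output is complex.
        a = x
        for W, b in zip(weights, biases):
            a = split_sigmoid(W @ a + b)
        return a

    # Illustrative 2-4-1 network with random complex parameters.
    rng = np.random.default_rng(0)
    weights = [rng.standard_normal((4, 2)) + 1j * rng.standard_normal((4, 2)),
               rng.standard_normal((1, 4)) + 1j * rng.standard_normal((1, 4))]
    biases = [rng.standard_normal(4) + 1j * rng.standard_normal(4),
              rng.standard_normal(1) + 1j * rng.standard_normal(1)]
    x = np.array([0.3 + 0.5j, -0.2 + 0.1j])
    print(complex_mlp(x, weights, biases))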

2020, Vol 2020, pp. 1-16
Author(s): Wenbo Zhou, Biwen Li, Jin-E Zhang

This paper concentrates on global exponential stability and synchronization of complex-valued neural networks (CVNNs) with deviating argument via a matrix measure approach. No Lyapunov function is required, and sufficient conditions are first obtained that guarantee the addressed system is exponentially stable under different activation functions. Moreover, after designing a suitable controller, synchronization of two complex-valued coupled neural networks is achieved, and the derived condition is easy to verify. Finally, some numerical examples are given to demonstrate the superiority and feasibility of the presented theoretical analysis and results.
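
For readers unfamiliar with the matrix measure (logarithmic norm) used in such conditions, the short sketch below computes the 2-norm matrix measure of a complex matrix. The example matrix is made up; the actual sufficient conditions in the paper combine quantities of this kind with bounds coming from the deviating argument and the activation functions.

    import numpy as np

    def matrix_measure_2(A):
        # Matrix measure (logarithmic norm) induced by the 2-norm:
        # mu_2(A) = largest eigenvalue of the Hermitian part (A + A^H) / 2.
        H = (A + A.conj().T) / 2.0
        return np.max(np.linalg.eigvalsh(H))

    # Illustrative complex system matrix; a sufficiently negative measure
    # is the kind of quantity exponential-stability conditions rely on.
    A = np.array([[-3.0 + 0.5j, 0.4 - 0.2j],
                  [0.1 + 0.3j, -2.5 - 0.1j]])
    print(matrix_measure_2(A))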


1994, Vol 03 (03), pp. 339-348
Author(s): CARL G. LOONEY

We review methods and techniques for training feedforward neural networks that avoid problematic behavior, accelerate convergence, and verify the training. Adaptive step gains, bipolar activation functions, and conjugate gradients are powerful stabilizers. Random search techniques circumvent the local-minimum trap and avoid specialization due to overtraining. Testing assures quality learning.
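
As a hedged illustration of two of the stabilizers mentioned (an adaptive step gain and a bipolar activation), the toy NumPy loop below grows the learning rate while the error keeps decreasing and cuts it back when the error rises. The gain factors, the single tanh unit, and the generated data are assumptions made for the example, not the review's own experimental setup.

    import numpy as np

    def train_adaptive(X, y, epochs=200, lr=0.1, grow=1.05, shrink=0.5):
        # Gradient descent on a single tanh (bipolar) unit with an adaptive
        # step gain: increase lr while the error falls, cut it when it rises.
        rng = np.random.default_rng(0)
        w = rng.standard_normal(X.shape[1])
        prev_err = np.inf
        for _ in range(epochs):
            out = np.tanh(X @ w)
            err = np.mean((out - y) ** 2)
            grad = 2.0 * ((out - y) * (1.0 - out ** 2)) @ X / len(y)
            w -= lr * grad
            lr = lr * grow if err < prev_err else lr * shrink
            prev_err = err
        return w

    # Toy data generated from a known weight vector, so the loop can recover it.
    X = np.random.default_rng(1).standard_normal((50, 2))
    w_true = np.array([0.7, -0.3])
    y = np.tanh(X @ w_true)
    print(train_adaptive(X, y))   # should approach w_true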


2021, Vol 26 (jai2021.26(1)), pp. 32-41
Author(s): Bodyanskiy Y, Antonenko T

Modern deep neural networks suffer from a number of issues related to the learning process and computational cost. This article considers an architecture grounded on an alternative approach to the basic unit of the neural network. The approach reduces the amount of computation and offers an alternative way to address the vanishing and exploding gradient problems. The core of the article is a deep stacked neo-fuzzy system that uses a generalized neo-fuzzy neuron to streamline the learning process. Since this approach is non-standard from a theoretical point of view, the paper presents the necessary mathematical derivations and describes the practical details of using the architecture. The network learning process is fully described, and all calculations required to apply the backpropagation algorithm for network training are derived. A feature of the network is the fast computation of the derivative of the neuron activation functions, achieved through the use of fuzzy membership functions: the paper shows that the derivative of such a function is constant, which justifies the claim of a higher optimization rate compared with networks built from neurons with more common activation functions (ReLU, sigmoid). The paper also highlights the main points that can be improved in further theoretical work on this topic; in general, these concern the calculation of the activation function. The proposed methods address these points and allow approximation with the network, and the authors already have theoretical justifications for further improving the speed and approximation properties of the network. The results of a comparison of the proposed network with standard neural network architectures are presented.
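
A minimal sketch of a single neo-fuzzy neuron with triangular membership functions is given below, assuming a uniform grid of centers; the generalized neo-fuzzy neuron used in the paper may differ, but the sketch shows why the activation derivative is piecewise constant and therefore cheap to compute during backpropagation.

    import numpy as np

    def tri_memberships(x, centers):
        # Triangular membership functions on a uniform grid of centers.
        # Their derivative with respect to x is piecewise constant, which is
        # what makes the backpropagation step cheap for this kind of neuron.
        width = centers[1] - centers[0]
        return np.clip(1.0 - np.abs(x - centers) / width, 0.0, None)

    def neo_fuzzy_neuron(x, centers, weights):
        # Each input x_i is fuzzified into membership degrees, and the output
        # is the sum over inputs of the weighted membership degrees.
        return sum(tri_memberships(xi, centers) @ wi for xi, wi in zip(x, weights))

    # Illustrative neuron: 2 inputs, 5 membership functions per input.
    centers = np.linspace(-1.0, 1.0, 5)
    weights = [np.array([0.1, 0.4, -0.2, 0.3, 0.0]),
               np.array([-0.3, 0.2, 0.5, -0.1, 0.2])]
    print(neo_fuzzy_neuron([0.25, -0.6], centers, weights))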


Author(s): Simone Scardapane, Steven Van Vaerenbergh, Amir Hussain, Aurelio Uncini

2019, Vol 50 (1), pp. 121-147
Author(s): Ezequiel López-Rubio, Francisco Ortega-Zamorano, Enrique Domínguez, José Muñoz-Pérez
