Max-product neural network and quasi-interpolation operators activated by sigmoidal functions

2016 ◽  
Vol 209 ◽  
pp. 1-22 ◽  
Author(s):  
Danilo Costarelli ◽  
Gianluca Vinti
1998 ◽  
Vol 07 (03) ◽  
pp. 373-398
Author(s):  
Tim Draelos ◽  
Don Hush

A study of the function approximation capabilities of single hidden layer neural networks strongly motivates the investigation of constructive learning techniques as a means of realizing established error bounds. Learning characteristics employed by constructive algorithms provide ideas for the development of new algorithms applicable to the function approximation problem. In addition, constructive techniques offer efficient methods for network construction and weight determination. The development of a novel neural network algorithm, the Constructive Locally Fit Sigmoids (CLFS) function approximation algorithm, is presented in detail. Basis functions of global extent (piecewise linear sigmoidal functions) are locally fit to the target function, resulting in a pool of candidate hidden layer nodes from which a function approximation is obtained. The algorithm provides a methodology for selecting nodes in a meaningful way from the infinite set of possibilities, and it synthesizes an n-node single hidden layer network with empirical and analytical results that strongly indicate an O(1/n) mean squared training error bound under certain assumptions. The algorithm runs in time polynomial in the number of network nodes and the input dimension. Empirical results demonstrate its effectiveness on several multidimensional function approximation problems relative to contemporary constructive and nonconstructive algorithms.
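To make the constructive idea concrete, here is a minimal Python sketch under simplifying assumptions (one input dimension; the `plsig` basis, candidate pool, and greedy selection rule are illustrative choices, not the authors' CLFS implementation): locally parameterized piecewise linear sigmoids form a candidate pool, and at each step the node that most reduces the squared residual is added.

```python
import numpy as np

def plsig(x, c, w):
    """Piecewise linear sigmoid: 0 below c - w, 1 above c + w, linear in between."""
    return np.clip((x - c) / (2.0 * w) + 0.5, 0.0, 1.0)

def clfs_sketch(x, y, n_nodes=12):
    """Greedy constructive fit: build a pool of locally parameterized sigmoids,
    then repeatedly add the candidate that most reduces the squared residual.
    (Hypothetical simplification of the CLFS idea, not the paper's algorithm.)"""
    pool = [(c, w) for c in x[::10] for w in (0.05, 0.1, 0.2, 0.4)]
    residual = y.astype(float).copy()
    model = []  # list of (amplitude, center, width) triples
    for _ in range(n_nodes):
        best = None
        for c, w in pool:
            phi = plsig(x, c, w)
            denom = phi @ phi
            if denom < 1e-12:
                continue
            a = (residual @ phi) / denom  # least-squares amplitude for this candidate
            err = np.sum((residual - a * phi) ** 2)
            if best is None or err < best[0]:
                best = (err, a, c, w)
        _, a, c, w = best
        residual -= a * plsig(x, c, w)
        model.append((a, c, w))
    return model

def predict(model, x):
    return sum(a * plsig(x, c, w) for a, c, w in model)

x = np.linspace(0.0, 1.0, 200)
y = np.sin(2.0 * np.pi * x)
model = clfs_sketch(x, y)
print("training MSE:", np.mean((predict(model, x) - y) ** 2))
```

Each greedy step costs one least-squares fit per candidate, so the sketch, like the abstract's claim for CLFS, runs in time polynomial in the number of nodes and candidates.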


2014 ◽  
Vol 2014 ◽  
pp. 1-6
Author(s):  
Zhiyong Quan ◽  
Zhengqiu Zhang

The technique of approximate partition of unity, Fourier series methods, and inequality techniques are used to construct a neural network with two weights and sigmoidal activation functions. Furthermore, using inequality techniques, we prove that this two-weight neural network approximates any nonlinear continuous function more precisely than the BP neural network constructed in Chen et al. (2012).
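The partition-of-unity idea can be illustrated with a short Python sketch (a generic single-weight construction for illustration only, not the paper's two-weight network): differences of shifted sigmoids form an approximate partition of unity, and sampling f on a grid yields a sigmoidal network approximant.

```python
import numpy as np

def sigma(x):
    """Logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-x))

def bump(x):
    """Difference of shifted sigmoids; its integer translates sum to 1 by
    telescoping, forming a partition of unity (approximate once truncated)."""
    return sigma(x + 0.5) - sigma(x - 0.5)

def pu_network(f, x, n):
    """One-hidden-layer sigmoidal approximant: weighted sum of translated
    bumps, with the samples f(k/n) as the outer weights."""
    ks = np.arange(-n, 2 * n + 1)                # grid covering [0, 1] with margin
    basis = bump(n * x[None, :] - ks[:, None])   # shape (len(ks), len(x))
    weights = f(ks / n)
    return (weights @ basis) / basis.sum(axis=0) # normalize the truncated sum

f = lambda t: np.sin(2.0 * np.pi * t) + t
x = np.linspace(0.0, 1.0, 400)
for n in (8, 32, 128):
    err = np.max(np.abs(pu_network(f, x, n) - f(x)))
    print(f"n = {n:4d}   max error = {err:.4f}")
```

Because the mass of bump(nx - k) concentrates on indices with |k/n - x| = O(1/n), the maximum error shrinks as n grows, which is the mechanism behind approximation results of this type.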


2013 ◽  
Vol 2013 ◽  
pp. 1-6 ◽  
Author(s):  
Fangyan Yang ◽  
Song Tang ◽  
Guilan Xu

This paper studies a small neural network with three neurons. First, the sign function is taken as the activation function. Although the network is a simple hybrid system with all subsystems exponentially stable, we find that it can exhibit very complex dynamics such as limit cycles and chaos. Since the sign function is a limiting case of sigmoidal functions, we then show that chaos persists robustly under several different activation functions, which suggests that the chaos in this network depends more on its weight matrix than on the type of activation function. For the chaotic regime, we present a rigorous computer-assisted study based on topological horseshoe theory.
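To illustrate the class of systems studied, the following minimal Python sketch integrates a three-neuron network with sign activation, dx/dt = -x + W sign(x); the weight matrix below is hypothetical, chosen only for illustration, and is not the chaotic parameter set reported in the paper.

```python
import numpy as np

def simulate(W, x0, dt=1e-3, steps=100_000):
    """Forward-Euler integration of the three-neuron hybrid system
    dx/dt = -x + W @ sign(x), with sign applied componentwise."""
    x = np.array(x0, dtype=float)
    traj = np.empty((steps, 3))
    for i in range(steps):
        x += dt * (-x + W @ np.sign(x))
        traj[i] = x
    return traj

# Illustrative weight matrix (hypothetical; each linear subsystem
# dx/dt = -x + const is exponentially stable on its own).
W = np.array([[ 0.0, -1.4,  0.0],
              [ 1.4,  0.0,  0.9],
              [ 0.0, -0.9,  0.0]])

traj = simulate(W, x0=[0.1, 0.0, -0.1])
print("final state:", traj[-1])
```

Switching between the eight stable linear subsystems (one per sign pattern) is what allows complex trajectories despite each subsystem's stability.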


2021 ◽  
Vol 18 (2) ◽  
Author(s):  
Marco Cantarini ◽  
Danilo Costarelli ◽  
Gianluca Vinti

In this paper, we study the rate of pointwise approximation for the neural network operators of the Kantorovich type. This result is obtained by proving a certain asymptotic expansion for the above operators and then establishing a Voronovskaja type formula. A central role in the above results is played by the truncated algebraic moments of the density functions generated by suitable sigmoidal functions. Furthermore, to improve the rate of convergence, we consider finite linear combinations of the above neural network type operators, and in the latter case we again obtain a Voronovskaja type theorem. Finally, concrete examples of sigmoidal activation functions are discussed in detail, together with the case of the rectified linear unit (ReLU) activation function, which is widely used in deep neural networks.
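As a numerical illustration, the Python sketch below implements a Kantorovich-type neural network operator with the density φ(x) = (σ(x+1) − σ(x−1))/2 generated by the logistic sigmoid, a standard choice in this literature; the interval handling and midpoint quadrature are simplifying assumptions, not the paper's exact setup.

```python
import numpy as np

def sigma(x):
    """Logistic sigmoid."""
    return 1.0 / (1.0 + np.exp(-x))

def phi(x):
    """Density generated by the logistic sigmoid:
    phi(x) = (sigma(x + 1) - sigma(x - 1)) / 2; its integer
    translates sum to 1."""
    return 0.5 * (sigma(x + 1.0) - sigma(x - 1.0))

def kantorovich(f, x, n, a=0.0, b=1.0, quad=20):
    """Kantorovich-type operator on [a, b]: the sample f(k/n) is replaced
    by the local mean  n * integral of f over [k/n, (k+1)/n]."""
    ks = np.arange(int(np.ceil(n * a)), int(np.floor(n * b)))
    # Local means via a simple midpoint-rule quadrature.
    u = (ks[:, None] + (np.arange(quad) + 0.5)[None, :] / quad) / n
    means = f(u).mean(axis=1)
    basis = phi(n * x[None, :] - ks[:, None])
    return (means @ basis) / basis.sum(axis=0)

f = lambda t: np.abs(t - 0.5)           # continuous, not differentiable at 0.5
x = np.linspace(0.1, 0.9, 200)          # stay away from the endpoints
for n in (10, 40, 160):
    err = np.max(np.abs(kantorovich(f, x, n) - f(x)))
    print(f"n = {n:4d}   max error = {err:.4f}")
```

Replacing point samples by local averages is what distinguishes the Kantorovich variant from the classical operators, and it is the feature that makes these operators suitable for less regular (e.g., merely integrable) functions.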

