Performance of Deep and Shallow Neural Networks, the Universal Approximation Theorem, Activity Cliffs, and QSAR

2016 ◽  
Vol 36 (1-2) ◽  
pp. 1600118 ◽  
Author(s):  
David A. Winkler ◽  
Tu C. Le


2021 ◽
Author(s):  
Rafael A. F. Carniello ◽  
Wington L. Vital ◽  
Marcos Eduardo Valle

The universal approximation theorem ensures that any continuous real-valued function defined on a compact subset of Euclidean space can be approximated with arbitrary precision by a single hidden layer neural network. In this paper, we show that the universal approximation theorem also holds for tessarine-valued neural networks. Precisely, any continuous tessarine-valued function can be approximated with arbitrary precision by a single hidden layer tessarine-valued neural network with split activation functions in the hidden layer. A simple numerical example confirms the theoretical result and shows that a tessarine-valued neural network outperforms a real-valued model at interpolating a vector-valued function.
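For concreteness, the forward pass of such a network can be sketched in a few lines of NumPy. The tessarine product below follows the standard multiplication table (i·i = −1, j·j = +1, k = i·j, commutative); the choice of tanh as the split activation, the number of hidden units, and the random parameters are illustrative assumptions, not the authors' construction:

```python
import numpy as np

def tmul(p, q):
    """Product of tessarines stored as (..., 4) arrays [a, b, c, d] meaning
    a + b*i + c*j + d*k, with i*i = -1, j*j = +1, k = i*j (commutative)."""
    a1, b1, c1, d1 = np.moveaxis(p, -1, 0)
    a2, b2, c2, d2 = np.moveaxis(q, -1, 0)
    return np.stack([
        a1*a2 - b1*b2 + c1*c2 - d1*d2,   # 1-component
        a1*b2 + b1*a2 + c1*d2 + d1*c2,   # i-component
        a1*c2 + c1*a2 - b1*d2 - d1*b2,   # j-component
        a1*d2 + d1*a2 + b1*c2 + c1*b2,   # k-component
    ], axis=-1)

def tessarine_mlp(x, W, b, V, c):
    """Single hidden layer tessarine network with a split activation:
    y = c + sum_m V[m] * tanh(W[m] * x + b[m]), tanh applied componentwise."""
    h = np.tanh(tmul(W, x[None, :]) + b)   # (M, 4) hidden activations
    return c + tmul(V, h).sum(axis=0)      # (4,) tessarine output

rng = np.random.default_rng(0)
M = 16                                     # number of hidden units (arbitrary)
W = rng.normal(scale=0.5, size=(M, 4))     # hidden tessarine weights
b = rng.normal(scale=0.5, size=(M, 4))     # hidden tessarine biases
V = rng.normal(scale=0.5, size=(M, 4))     # output tessarine weights
c = rng.normal(scale=0.5, size=4)          # output tessarine bias

x = np.array([0.3, -1.2, 0.7, 0.0])        # one tessarine input
print(tessarine_mlp(x, W, b, V, c))        # one tessarine output
```

Training the parameters (for instance, by gradient descent on the four real components) is orthogonal to the approximation result and is omitted here.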


Author(s):  
VLADIK KREINOVICH ◽  
HUNG T. NGUYEN ◽  
DAVID A. SPRECHER

This paper addresses mathematical aspects of fuzzy logic. Its main results are: (1) the introduction of a concept of normal form in fuzzy logic using hedges; (2) a proof, based on Kolmogorov's theorem, that all logical operations in fuzzy logic have normal forms; and (3) for min-max operators, an approximation result similar to the universal approximation property of neural networks.
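The min-max normal-form idea can be illustrated on its simplest instance: extending a crisp logical operation to fuzzy truth values through its disjunctive normal form, with max for disjunction, min for conjunction, and 1 − x for negation. The sketch below covers only this special case; the paper's normal forms additionally involve hedges and Kolmogorov's theorem:

```python
def fuzzy_dnf(truth_table):
    """Build a min-max normal form from a crisp truth table. truth_table maps
    tuples of 0/1 to 0/1; the returned function extends it to [0,1]^n using
    max for OR, min for AND, and 1 - x for NOT (disjunction of minterms)."""
    minterms = [row for row, value in truth_table.items() if value == 1]
    def f(*xs):
        return max(
            min(x if bit else 1.0 - x for x, bit in zip(xs, row))
            for row in minterms
        )
    return f

# Fuzzy XOR from the crisp XOR table: max(min(x, 1 - y), min(1 - x, y)).
xor = fuzzy_dnf({(0, 0): 0, (0, 1): 1, (1, 0): 1, (1, 1): 0})
print(xor(1.0, 0.0))   # 1.0 -- agrees with crisp XOR on {0, 1} inputs
print(xor(0.7, 0.2))   # 0.7 -- a graded truth value for fuzzy inputs
```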


2016 ◽  
pp. 1456-1470 ◽  
Author(s):  
Saeed Panahian Fard ◽  
Zarita Zainuddin

One of the most important problems in the theory of approximation of functions by neural networks is the universal approximation capability of such networks. In this study, we give a theoretical analysis of the universal approximation capability of a special class of three-layer feedforward higher order neural networks, based on the concept of approximate identity, in the space of continuous multivariate functions. We then extend the analysis to the spaces of Lebesgue integrable multivariate functions. The proofs rest on the concepts of convolution and epsilon-net. The obtained results can be seen as a step toward the development of approximation theory by means of neural networks.
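The convolution step in such proofs is easy to see numerically: convolving a continuous function with an approximate identity of shrinking width drives the sup error to zero. The one-dimensional sketch below, with a Gaussian kernel as an assumed example of an approximate identity and a plain Riemann-sum quadrature, illustrates only this ingredient, not the higher order network construction itself:

```python
import numpy as np

def gaussian_kernel(u, eps):
    """Gaussian approximate identity: unit integral, concentrating as eps -> 0."""
    return np.exp(-(u / eps) ** 2 / 2) / (eps * np.sqrt(2 * np.pi))

def convolve(f, xs, eps):
    """Evaluate (f * K_eps)(x) at the points xs with a simple Riemann sum."""
    grid = np.linspace(-4.0, 5.0, 8001)    # integration grid covering [0, 1] widely
    dx = grid[1] - grid[0]
    fg = f(grid)
    return np.array([(fg * gaussian_kernel(x - grid, eps)).sum() * dx for x in xs])

f = lambda x: np.abs(np.sin(3 * x))        # a continuous (non-smooth) target
xs = np.linspace(0.0, 1.0, 201)
for eps in (0.5, 0.1, 0.02):
    err = np.max(np.abs(convolve(f, xs, eps) - f(xs)))
    print(f"eps = {eps}: sup error on [0, 1] ~ {err:.4f}")
```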


1994 ◽  
Vol 6 (2) ◽  
pp. 319-333 ◽  
Author(s):  
Michel Benaim

Feedforward neural networks with a single hidden layer of normalized Gaussian units are studied. It is proved that such networks are capable of universal approximation in a satisfactory sense. A hybrid learning rule in the style of Moody and Darken, combining unsupervised learning of the hidden units with supervised learning of the output units, is then considered. Using the method of ordinary differential equations for adaptive algorithms (the ODE method), it is shown that the asymptotic properties of the learning rule can be studied in terms of an autonomous cascade of dynamical systems. Recent results of Hirsch on cascades are then applied to establish the asymptotic stability of the learning rule.
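A minimal version of such a network and of a Moody-Darken-style hybrid rule can be sketched as follows. For simplicity, the unsupervised phase is plain k-means on the inputs and the supervised phase is batch least squares rather than the online updates analyzed in the paper; the unit count, widths, and toy data are assumptions for illustration:

```python
import numpy as np

def kmeans(X, M, iters=50, rng=None):
    """Plain k-means: the unsupervised phase that places the Gaussian centers."""
    rng = rng or np.random.default_rng(0)
    centers = X[rng.choice(len(X), size=M, replace=False)]
    for _ in range(iters):
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(axis=1)
        for m in range(M):
            if np.any(labels == m):
                centers[m] = X[labels == m].mean(axis=0)
    return centers

def normalized_gaussian(X, centers, sigma):
    """Normalized Gaussian units: the activations sum to one at every input."""
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2 * sigma ** 2))
    return G / G.sum(axis=1, keepdims=True)

rng = np.random.default_rng(1)
X = rng.uniform(-3, 3, size=(400, 1))                 # toy inputs
y = np.sin(X[:, 0]) + 0.05 * rng.normal(size=400)     # noisy toy target

centers = kmeans(X, M=15, rng=rng)                    # unsupervised phase
H = normalized_gaussian(X, centers, sigma=0.5)        # hidden activations
w, *_ = np.linalg.lstsq(H, y, rcond=None)             # supervised phase
print("training RMSE:", np.sqrt(np.mean((H @ w - y) ** 2)))
```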

