universal approximators
Recently Published Documents

TOTAL DOCUMENTS: 94 (five years: 11)
H-INDEX: 24 (five years: 1)

2021 · Vol 7 (9) · pp. 173
Author(s): Eduardo Paluzo-Hidalgo, Rocio Gonzalez-Diaz, Miguel A. Gutiérrez-Naranjo, Jónathan Heras

Simplicial-map neural networks are a recent neural network architecture induced by simplicial maps defined between simplicial complexes. It has been proved that simplicial-map neural networks are universal approximators and that they can be refined to be robust to adversarial attacks. In this paper, the refinement toward robustness is optimized by reducing the number of simplices (i.e., nodes) needed. We have shown experimentally that such a refined neural network is equivalent to the original network as a classification tool but requires much less storage.


Author(s): Haggai Maron, Or Litany, Gal Chechik, Ethan Fetaya

Learning from unordered sets is a fundamental learning setup, recently attracting increasing attention. Research in this area has focused on the case where elements of the set are represented by feature vectors, and far less emphasis has been given to the common case where set elements themselves adhere to their own symmetries. That case is relevant to numerous applications, from deblurring image bursts to multi-view 3D shape recognition and reconstruction. In this paper, we present a principled approach to learning sets of general symmetric elements. We first characterize the space of linear layers that are equivariant both to element reordering and to the inherent symmetries of elements, like translation in the case of images. We further show that networks that are composed of these layers, called Deep Sets for Symmetric Elements layers (DSS), are universal approximators of both invariant and equivariant functions, and that these networks are strictly more expressive than Siamese networks. DSS layers are also straightforward to implement. Finally, we show that they improve over existing set-learning architectures in a series of experiments with images, graphs, and point clouds.
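As a rough illustration of the layer structure described above, the following numpy sketch builds a linear layer on a set of 1-D signals that is equivariant both to element reordering and to circular translation: each element receives a shared translation-equivariant map of itself plus one of the set-sum. The signal size, filter length, and parameter names are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def circ_conv(x, w):
    # Translation-equivariant linear map on a 1-D "image": circular convolution.
    n = len(x)
    return np.array([sum(w[k] * x[(i + k) % n] for k in range(len(w)))
                     for i in range(n)])

def dss_layer(X, w_self, w_agg):
    # DSS-style linear layer on a set of 1-D signals (rows of X):
    # per-element term + aggregated term, both built from equivariant maps,
    # so the layer commutes with row permutation and circular shifts.
    s = X.sum(axis=0)
    return np.stack([circ_conv(x, w_self) + circ_conv(s, w_agg) for x in X])
```

Because the set-sum is permutation invariant and circular convolution is translation equivariant, permuting the rows of `X` permutes the output rows, and shifting every signal shifts every output.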


2020 · Vol 32 (11) · pp. 2249-2278
Author(s): Changcun Huang

This letter proves that a ReLU network can approximate any continuous function to arbitrary precision by means of piecewise linear or constant approximations. For a univariate function [Formula: see text], we use compositions of ReLUs to produce a line segment; the subnetworks for all line segments together form a ReLU network that is a piecewise linear approximation to [Formula: see text]. For a multivariate function [Formula: see text], ReLU networks are constructed to approximate a piecewise linear function derived from triangulation methods approximating [Formula: see text]. A neural unit called a TRLU is built from a ReLU network; piecewise constant approximations, such as Haar wavelets, are implemented by rectifying the linear output of a ReLU network via TRLUs. New interpretations of deep layers, as well as some other results, are also presented.
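The univariate idea can be illustrated with a textbook one-hidden-layer ReLU construction (a standard piecewise linear interpolant, not the letter's exact subnetwork composition; the target function and knot placement below are illustrative):

```python
import numpy as np

def relu(z):
    return np.maximum(z, 0.0)

def relu_interpolant(f, knots):
    # One-hidden-layer ReLU network whose output is the piecewise linear
    # interpolant of f at the given knots: each unit relu(x - knot_i)
    # contributes the change in slope at knot_i.
    y = f(knots)
    slopes = np.diff(y) / np.diff(knots)
    coefs = np.concatenate([[slopes[0]], np.diff(slopes)])
    def net(x):
        return y[0] + sum(c * relu(x - k) for c, k in zip(coefs, knots[:-1]))
    return net
```

At each knot the active ReLU terms sum exactly to the interpolated value, so the network matches `f` at the knots and is linear in between; refining the knot grid drives the uniform error to zero for continuous `f`.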


2020 · Vol 131 · pp. 29-36
Author(s): Eduardo Paluzo-Hidalgo, Rocio Gonzalez-Diaz, Miguel A. Gutiérrez-Naranjo

2019 · Vol 15 (3)
Author(s): B. S. Sousa, F. V. Silva, A. M. F. Fileti

The control design of coupled tanks is not an easy task, owing to the nonlinear characteristics of the valves and the interactions between the controlled variables. These features make automatic control challenging: linear controllers, such as conventional PID, may not regulate this MIMO system properly. Advanced control techniques (e.g., neural-network-based control) can be used instead, since neural networks are universal approximators that can handle nonlinearities and interactions between process variables. In the present work, an experimental investigation compared two neural-network-based techniques and tested their feasibility on the coupled-tanks system. First-principles simulations helped to find suitable parameters for the controllers. The results showed that model predictive control based on artificial neural networks gave the best performance in supervisory tests. The inverse neural network, in contrast, required a very accurate model, and small plant-model mismatches led to undesirable offsets.
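A minimal sketch of the kind of first-principles coupled-tank model such controllers are tuned against, assuming Torricelli-type (square-root) valve flows and illustrative parameter values rather than those of the paper's rig:

```python
import numpy as np

def simulate_tanks(q_in=0.5, k12=0.3, k2=0.4, A=1.0, dt=0.1, steps=5000):
    # Mass balances for two coupled tanks (hypothetical parameters):
    # inflow q_in enters tank 1, a coupling valve links the tanks,
    # and tank 2 drains to ambient. Forward-Euler integration.
    h1 = h2 = 0.0
    for _ in range(steps):
        q12 = k12 * np.sign(h1 - h2) * np.sqrt(abs(h1 - h2))  # coupling flow
        q_out = k2 * np.sqrt(max(h2, 0.0))                    # outlet flow
        h1 += dt * (q_in - q12) / A
        h2 += dt * (q12 - q_out) / A
    return h1, h2
```

The square-root flow terms are what make the plant nonlinear, and the coupling flow `q12` is what makes the two controlled levels interact, which is exactly why a linear PID tuned at one operating point can degrade elsewhere.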


2019 · Vol 17 (06) · pp. 897-930
Author(s): Isabel Marrero

Radial basis function neural networks (RBFNNs) of Hankel translates of order [Formula: see text] with a continuous activation function [Formula: see text] for which the limit [Formula: see text] exists are shown to possess the universal approximation property in spaces of continuous and of [Formula: see text]-integrable functions, [Formula: see text], on (compact subsets of) [Formula: see text] if, and only if, [Formula: see text] is not an even polynomial. This extends to the class of RBFNNs under consideration a result already known for RBFNNs of standard translates.
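For intuition, here is a least-squares fit of an RBFNN with standard translates and a Gaussian activation, which is not an even polynomial, so it satisfies the approximation condition of the kind stated above; the paper's Hankel-translate setting generalizes this, and every numeric choice below (centers, width, target function) is an illustrative assumption.

```python
import numpy as np

def rbf_fit(x, y, centers, width):
    # Standard-translate Gaussian RBF network fit by linear least squares:
    # output(t) = sum_j c_j * exp(-((t - center_j) / width)^2).
    phi = np.exp(-((x[:, None] - centers[None, :]) / width) ** 2)
    coefs, *_ = np.linalg.lstsq(phi, y, rcond=None)
    return lambda t: np.exp(-((t[:, None] - centers[None, :]) / width) ** 2) @ coefs
```

With enough well-spread centers, such a network can drive the uniform error on a compact set as low as desired; an even-polynomial activation would instead span only a fixed finite-dimensional space, which is the obstruction the theorem identifies.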

