Ridge Functions
Recently Published Documents


TOTAL DOCUMENTS: 83 (five years: 19)

H-INDEX: 14 (five years: 1)

2021 ◽  
Vol 73 (5) ◽  
pp. 579-588
Author(s):  
R. A. Aliev ◽  
A. A. Asgarova ◽  
V. E. Ismailov

UDC 517.5 We consider the problem of representing a bivariate function by sums of ridge functions. It is shown that if a function of a certain smoothness class is represented by a sum of finitely many arbitrarily behaved ridge functions, then it can also be represented by a sum of ridge functions of the same smoothness class. As an example, this result is applied to a homogeneous constant-coefficient partial differential equation.
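For intuition, a classical illustration of the PDE connection (not taken from the paper's text): the homogeneous constant-coefficient wave-type equation in two variables has a general solution that is exactly a sum of two ridge functions.

```latex
% The PDE  u_{xx} - u_{yy} = 0  has general solution
%   u(x, y) = g_1(x + y) + g_2(x - y),
% i.e. a sum of two ridge functions: univariate profiles g_1, g_2
% composed with the linear functionals given by the directions
% a_1 = (1, 1) and a_2 = (1, -1).
\[
  u(x,y) \;=\; g_1(x+y) \;+\; g_2(x-y),
  \qquad u_{xx} - u_{yy} = 0 .
\]
```

The smoothness question the abstract raises is then natural: if such a representation exists with badly behaved profiles $g_1, g_2$, can the profiles be chosen as smooth as $u$ itself?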


Author(s):  
Massimo Fornasier ◽  
Jan Vybíral ◽  
Ingrid Daubechies

Abstract We address the structure identification and the uniform approximation of sums of ridge functions $f(x)=\sum _{i=1}^m g_i(\langle a_i,x\rangle )$ on ${\mathbb{R}}^d$, representing a general form of a shallow feed-forward neural network, from a small number of query samples. Higher-order differentiation, as used in our constructive approximations, of sums of ridge functions or of their compositions (as in deeper neural networks) yields a natural connection between neural network weight identification and tensor product decomposition identification. In the case of the shallowest feed-forward neural network, second-order differentiation and tensors of order two (i.e., matrices) suffice, as we prove in this paper. We use two sampling schemes to perform approximate differentiation: active sampling, where the sampling points are universal and are actively and randomly designed, and passive sampling, where the sampling points are preselected at random from a distribution with known density. Based on multiple gathered approximate first- and second-order differentials, our general approximation strategy is developed as a sequence of algorithms performing individual sub-tasks. We first perform an active subspace search by approximating the span of the weight vectors $a_1,\dots ,a_m$. Then a straightforward substitution reduces the dimensionality of the problem from $d$ to $m$. The core of the construction is the stable and efficient approximation of the weights, expressed in terms of the rank-$1$ matrices $a_i \otimes a_i$, realized by formulating their individual identification as a suitable nonlinear program. We prove that this program successfully identifies weight vectors that are close to orthonormal, and we also show how to reduce constructively to this case by a whitening procedure, without loss of any generality.
We finally discuss the implementation and the performance of the proposed algorithmic pipeline with extensive numerical experiments, which illustrate and confirm the theoretical results.
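The active subspace step rests on a simple observation: the Hessian of $f(x)=\sum_i g_i(\langle a_i,x\rangle)$ is $\sum_i g_i''(\langle a_i,x\rangle)\, a_i a_i^T$, so the range of every Hessian lies in $\mathrm{span}\{a_1,\dots,a_m\}$. The following is a minimal sketch of that idea (not the paper's algorithm): finite-difference Hessians at a few random points, stacked and fed to an SVD, recover the span of the weight vectors. All function names and parameters here are illustrative assumptions.

```python
import numpy as np

# Sketch: recover span{a_1, ..., a_m} from finite-difference Hessians of
# f(x) = sum_i g_i(<a_i, x>). Since Hess f(x) = sum_i g_i''(<a_i,x>) a_i a_i^T,
# every Hessian's range lies in the "active subspace" span{a_i}.

rng = np.random.default_rng(0)
d, m = 6, 2
A = rng.standard_normal((m, d))
A /= np.linalg.norm(A, axis=1, keepdims=True)        # weight vectors a_i

def f(x):
    # a shallow ridge sum with two smooth nonlinear profiles g_i
    return np.tanh(A[0] @ x) + (A[1] @ x) ** 3

def hessian_fd(func, x, h=1e-4):
    """Central finite-difference Hessian (stand-in for approximate
    differentiation from point queries)."""
    n = x.size
    H = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i] * h, np.eye(n)[j] * h
            H[i, j] = (func(x + ei + ej) - func(x + ei - ej)
                       - func(x - ei + ej) + func(x - ei - ej)) / (4 * h * h)
    return H

# gather Hessians at a few random sampling points and stack them side by side
Hs = np.hstack([hessian_fd(f, rng.standard_normal(d)) for _ in range(5)])
U, s, _ = np.linalg.svd(Hs)
span = U[:, :m]                                      # estimated active subspace

# each a_i should lie (almost) entirely inside the recovered span
residual = np.linalg.norm(A.T - span @ (span.T @ A.T))
```

In the paper the same subspace then allows the substitution that reduces the problem from dimension $d$ to $m$, after which the individual $a_i \otimes a_i$ are identified by a nonlinear program rather than a plain SVD.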


2021 ◽  
Vol 109 (1-2) ◽  
pp. 307-311
Author(s):  
T. I. Zaitseva ◽  
Yu. V. Malykhin ◽  
K. S. Ryutin
Author(s):  
Steffen Goebbels

Abstract Single hidden layer feedforward neural networks can represent multivariate functions that are sums of ridge functions. These ridge functions are defined via an activation function and customizable weights. The paper deals with best non-linear approximation by such sums of ridge functions. Error bounds are presented in terms of moduli of smoothness. The main focus, however, is to prove that the bounds are best possible. To this end, counterexamples are constructed with a non-linear, quantitative extension of the uniform boundedness principle. They show sharpness with respect to Lipschitz classes for the logistic activation function and for certain piecewise polynomial activation functions. The paper is based on univariate results in Goebbels (Res Math 75(3):1–35, 2020. https://rdcu.be/b5mKH).
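To make the object of study concrete, here is a minimal sketch (not the paper's construction, and with illustrative names and parameters throughout): a bivariate function is approximated by a sum of logistic ridge functions $\sum_i c_i\,\sigma(\langle a_i,x\rangle + b_i)$, with the inner weights fixed at random and only the outer coefficients fitted by least squares.

```python
import numpy as np

# Sketch: approximate a smooth bivariate function on [0,1]^2 by a sum of
# logistic ridge functions sigma(<a_i, x> + b_i). Inner weights are drawn
# at random; outer coefficients come from a linear least-squares fit.

rng = np.random.default_rng(1)
sigma = lambda t: 1.0 / (1.0 + np.exp(-t))           # logistic activation

def target(x, y):
    return np.sin(x) * y                              # smooth test function

# sample the target on a grid over [0,1]^2
g = np.linspace(0.0, 1.0, 25)
X, Y = np.meshgrid(g, g)
pts = np.column_stack([X.ravel(), Y.ravel()])
vals = target(pts[:, 0], pts[:, 1])

n = 40                                                # number of ridge terms
Adir = rng.standard_normal((n, 2))                    # directions a_i
b = rng.uniform(-2.0, 2.0, n)                         # shifts b_i
Phi = sigma(pts @ Adir.T + b)                         # design matrix

coef, *_ = np.linalg.lstsq(Phi, vals, rcond=None)
err = np.max(np.abs(Phi @ coef - vals))               # sup-norm error on grid
```

The paper's point is quantitative and in the other direction: for functions of limited smoothness (Lipschitz classes), no choice of weights can beat the stated error bounds, which the constructed counterexamples show to be sharp.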

