Permutations and piecewise-constant approximation of continuous functions of n variables

1998 ◽ Vol 50 (7) ◽ pp. 1031-1044 ◽ Author(s): N. P. Korneichuk
1998 ◽ Vol 6 ◽ pp. 128 ◽ Author(s): O.V. Chernytska

We obtain an upper bound for the best approximation of the classes $H^{\omega}[a, b]$ by piecewise-constant functions over a uniform split in the metrics of the spaces $L_{\varphi}[a, b]$ generated by continuous non-decreasing functions $\varphi$ that vanish at zero. We study the classes of functions $\varphi$ for which the obtained bound is exact for all convex moduli of continuity.
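As a rough illustration of the setting (not taken from the paper), the sketch below builds a piecewise-constant approximant over a uniform split of $[a, b]$, taking the midrange of $f$ on each piece, and measures the error with the integral functional $\int_a^b \varphi(|f(x) - s(x)|)\,dx$; the helper names `piecewise_constant` and `phi_error`, the midrange choice, and this particular reading of the $L_{\varphi}$ metric are assumptions made for the example.

```python
import numpy as np

# Hedged sketch: approximate f on [a, b] by a piecewise-constant function
# over a uniform split into n subintervals, using the midrange of f on each
# piece, and estimate the error functional
#     rho_phi(f, s) = integral over [a, b] of phi(|f(x) - s(x)|) dx.
# This is only one possible reading of the L_phi metric; the paper's exact
# definition may differ.

def piecewise_constant(f, a, b, n, samples=400):
    """Breakpoints of the uniform split and the midrange constant on each piece."""
    knots = np.linspace(a, b, n + 1)
    consts = []
    for lo, hi in zip(knots[:-1], knots[1:]):
        ys = f(np.linspace(lo, hi, samples))
        consts.append(0.5 * (ys.min() + ys.max()))   # midrange value on the piece
    return knots, np.array(consts)

def phi_error(f, knots, consts, phi, samples=400):
    """Riemann-sum estimate of the integral of phi(|f - s|) over [a, b]."""
    total = 0.0
    for lo, hi, c in zip(knots[:-1], knots[1:], consts):
        xs = np.linspace(lo, hi, samples)
        total += np.mean(phi(np.abs(f(xs) - c))) * (hi - lo)
    return total

if __name__ == "__main__":
    f = np.cos                                  # a smooth test function on [0, pi]
    phi = lambda u: np.sqrt(u)                  # continuous, non-decreasing, phi(0) = 0
    for n in (4, 8, 16, 32):
        knots, consts = piecewise_constant(f, 0.0, np.pi, n)
        print(n, phi_error(f, knots, consts, phi))
```

Refining the split (doubling $n$) shrinks the reported error, in line with bounds that are expressed through the modulus of continuity $\omega$ of $f$.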


2020 ◽ Vol 32 (11) ◽ pp. 2249-2278 ◽ Author(s): Changcun Huang

This letter proves that a ReLU network can approximate any continuous function with arbitrary precision by means of piecewise linear or constant approximations. For a univariate function $f(x)$, the composite of ReLUs is used to produce a line segment; all of the subnetworks of line segments together form a ReLU network that is a piecewise linear approximation of $f(x)$. For a multivariate function $f(x_1, \ldots, x_n)$, ReLU networks are constructed to approximate a piecewise linear function derived from triangulation methods approximating $f(x_1, \ldots, x_n)$. A neural unit called TRLU is designed from a ReLU network; piecewise constant approximations, such as Haar wavelets, are implemented by rectifying the linear output of a ReLU network via TRLUs. New interpretations of deep layers, as well as some other results, are also presented.
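The sketch below is one way, under assumptions not taken from the letter, to realize the univariate piecewise linear step with a single hidden layer of ReLUs: the interpolant of $f$ on knots $x_0 < \dots < x_k$ is written as $f(x_0) + \sum_i c_i\,\mathrm{ReLU}(x - x_i)$ with slope-difference coefficients $c_i$; the helper names `relu_coefficients` and `relu_network` are hypothetical.

```python
import numpy as np

# Hedged sketch (not the letter's exact construction): the piecewise linear
# interpolant of a univariate f on knots x_0 < ... < x_k is expressed as a
# one-hidden-layer ReLU expansion
#     g(x) = f(x_0) + sum_i c_i * ReLU(x - x_i),
# where c_i is the jump in slope at knot x_i.  Refining the knots makes g
# approximate f with arbitrary precision on [x_0, x_k].

def relu(z):
    return np.maximum(z, 0.0)

def relu_coefficients(f, knots):
    """Bias and slope-difference coefficients of the ReLU expansion."""
    y = f(knots)
    slopes = np.diff(y) / np.diff(knots)      # slope of each linear piece
    coeffs = np.diff(slopes, prepend=0.0)     # slope jumps, one per knot x_0..x_{k-1}
    return y[0], coeffs

def relu_network(x, knots, bias, coeffs):
    """Evaluate g(x) = bias + sum_i coeffs[i] * ReLU(x - knots[i])."""
    x = np.atleast_1d(np.asarray(x, dtype=float))
    return bias + relu(x[:, None] - knots[None, :-1]) @ coeffs

if __name__ == "__main__":
    f = np.sin
    knots = np.linspace(0.0, np.pi, 17)       # 16 linear pieces
    bias, coeffs = relu_coefficients(f, knots)
    xs = np.linspace(0.0, np.pi, 1000)
    err = np.max(np.abs(relu_network(xs, knots, bias, coeffs) - f(xs)))
    print("max |f - ReLU approximant| =", err)
```

Halving the knot spacing reduces the sup-norm error of the interpolant, which is the sense in which such a one-hidden-layer construction approximates a continuous univariate function to arbitrary precision.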

