arbitrary continuous function
Recently Published Documents


TOTAL DOCUMENTS: 13 (FIVE YEARS: 6)
H-INDEX: 4 (FIVE YEARS: 1)

2021 ◽  
Vol 13 (3) ◽  
pp. 75-121
Author(s):  
Андрей Владимирович Чернов ◽  
Andrey Chernov

The subject of the paper is finite-dimensional concave games, i.e., noncooperative $n$-person games whose objective functionals are concave with respect to the players' 'own' variables. For such games we investigate the design of iterative algorithms for finding a Nash equilibrium whose convergence is guaranteed without requirements on the objective functionals such as smoothness or convexity in the 'other players'' variables, or similar hypotheses (weak convexity, quasiconvexity, and so on). In fact, we prove an assertion analogous to the convergence theorem for an $M$-Fejér iterative process in the case when the operator acts on a finite-dimensional compact set and nearness to the target set is measured by an arbitrary continuous function. Then, on the basis of this assertion, we generalize and develop the approach suggested earlier by the author for finding a Nash equilibrium in concave games. The resulting method can be regarded as a cross between the relaxation algorithm and the Hooke-Jeeves pattern-search method (but taking into account the specific character of the residual function being minimized). Moreover, we present and discuss results of numerical experiments.
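Purely as an illustration of the kind of scheme described above, the sketch below runs a Hooke-Jeeves-style pattern search on a Nash residual of a toy two-player concave game. The game, the residual function, and every parameter name are hypothetical choices for the demonstration, not taken from the paper.

```python
# Illustrative sketch only: a derivative-free, Hooke-Jeeves-style pattern search
# applied to a Nash "residual" of a toy two-player concave game. The game, the
# residual, and all parameters below are hypothetical and NOT from the paper.
import numpy as np

# Toy concave payoffs: f1 is concave in x1, f2 is concave in x2.
def f1(x1, x2): return -(x1 - 0.5 * x2) ** 2
def f2(x1, x2): return -(x2 - 0.5 * x1 - 1.0) ** 2

# Best responses are available in closed form for this toy game, so the Nash
# residual (always >= 0, and zero exactly at the equilibrium) is cheap to evaluate.
def residual(x):
    x1, x2 = x
    br1, br2 = 0.5 * x2, 0.5 * x1 + 1.0
    return (f1(br1, x2) - f1(x1, x2)) + (f2(x1, br2) - f2(x1, x2))

def pattern_search(phi, x0, step=0.5, shrink=0.5, tol=1e-8, max_iter=10_000):
    """Minimize phi by exploring +/- step moves along each coordinate."""
    x, fx = np.asarray(x0, dtype=float), phi(x0)
    for _ in range(max_iter):
        improved = False
        for i in range(len(x)):
            for delta in (step, -step):
                trial = x.copy(); trial[i] += delta
                ft = phi(trial)
                if ft < fx:
                    x, fx, improved = trial, ft, True
        if not improved:
            step *= shrink          # no improving move: refine the pattern
            if step < tol:
                break
    return x, fx

x_star, res = pattern_search(residual, x0=[0.0, 0.0])
print(x_star, res)   # approaches the equilibrium (2/3, 4/3) with residual -> 0
```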


2021 ◽  
pp. 1-32
Author(s):  
Zuowei Shen ◽  
Haizhao Yang ◽  
Shijun Zhang

A new network with super-approximation power is introduced. This network is built with either the Floor ($\lfloor x \rfloor$) or the ReLU ($\max\{0, x\}$) activation function in each neuron; hence, we call such networks Floor-ReLU networks. For any hyperparameters $N$ and $L$, we show that Floor-ReLU networks with width of order $N$ (but at least $d$) and depth of order $dL$ can uniformly approximate a Hölder function $f$ on $[0,1]^d$ with an approximation error of order $\lambda d^{\alpha/2} N^{-\alpha\sqrt{L}}$, where $\alpha$ and $\lambda$ are the Hölder order and constant, respectively. More generally, for an arbitrary continuous function $f$ on $[0,1]^d$ with a modulus of continuity $\omega_f(\cdot)$, the constructive approximation rate is of order $\omega_f(\sqrt{d}\,N^{-\sqrt{L}}) + \omega_f(\sqrt{d})\,N^{-\sqrt{L}}$. As a consequence, this new class of networks overcomes the curse of dimensionality in approximation power when the variation of $\omega_f(r)$ as $r \to 0$ is moderate (e.g., $\omega_f(r) \lesssim r^{\alpha}$ for Hölder continuous functions), since the major term to be considered in our approximation rate is essentially $\sqrt{d}$ times a function of $N$ and $L$ independent of $d$ within the modulus of continuity.
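The sketch below only illustrates the network class itself (each layer activated by either Floor or ReLU); it is not the explicit construction behind the approximation rate, and the widths, depth, and random weights are placeholder assumptions.

```python
# Minimal sketch of the *network class* the abstract describes (not the paper's
# explicit construction): a fully connected network in which every neuron uses
# either the Floor or the ReLU activation. Sizes and weights are placeholders.
import numpy as np

def floor_relu_forward(x, layers):
    """Evaluate a Floor-ReLU network.

    layers: list of (W, b, kind) with kind in {"floor", "relu"};
    every neuron of a layer uses that layer's activation.
    """
    a = np.asarray(x, dtype=float)
    for W, b, kind in layers:
        z = W @ a + b
        a = np.floor(z) if kind == "floor" else np.maximum(z, 0.0)
    return a

rng = np.random.default_rng(0)
d, width = 2, 8                      # input dimension and hidden width (arbitrary)
layers = [
    (rng.normal(size=(width, d)),     rng.normal(size=width), "relu"),
    (rng.normal(size=(width, width)), rng.normal(size=width), "floor"),
    (rng.normal(size=(1, width)),     rng.normal(size=1),     "relu"),
]
print(floor_relu_forward([0.3, 0.7], layers))
```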


Author(s):  
E.V. Kuliev ◽  
N.V. Grigorieva ◽  
M.A. Dovgalev

This article is about forecasting with neural networks. Neural networks are used to solve problems that require analytical processing similar to that carried out by the human brain. Being inherently nonlinear, neural networks make it possible to approximate an arbitrary continuous function to any degree of accuracy, regardless of the presence or absence of any periodicity or cyclicity. Today, neural networks are among the most powerful forecasting tools. The article discusses the general principles of training and operating a neural network, its life cycle, and the solution of forecasting problems via function approximation.
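As a minimal sketch of "forecasting by function approximation" (not the authors' code), the example below trains a one-hidden-layer network on lagged values of a toy series and produces a one-step-ahead forecast; the series, window length, and network size are illustrative assumptions.

```python
# Minimal sketch (not the authors' code): a one-hidden-layer network learns the
# map from a window of past values to the next value of a toy series, then
# issues a one-step-ahead forecast. All sizes and the series are illustrative.
import numpy as np

rng = np.random.default_rng(1)
series = np.sin(np.linspace(0, 12 * np.pi, 600))          # toy periodic signal
lag = 8
X = np.stack([series[i:i + lag] for i in range(len(series) - lag)])
y = series[lag:]

hidden = 16
W1, b1 = rng.normal(0, 0.5, (lag, hidden)), np.zeros(hidden)
W2, b2 = rng.normal(0, 0.5, (hidden, 1)), np.zeros(1)

lr = 0.05
for epoch in range(2000):                                  # plain gradient descent
    H = np.tanh(X @ W1 + b1)                               # hidden layer
    pred = (H @ W2 + b2).ravel()
    err = pred - y
    # backpropagation of the mean-squared error
    dW2 = H.T @ err[:, None] / len(y)
    db2 = err.mean(keepdims=True)
    dH = err[:, None] @ W2.T * (1 - H ** 2)
    dW1 = X.T @ dH / len(y)
    db1 = dH.mean(axis=0)
    W2 -= lr * dW2; b2 -= lr * db2; W1 -= lr * dW1; b1 -= lr * db1

window = series[-lag:]                                     # one-step-ahead forecast
forecast = (np.tanh(window @ W1 + b1) @ W2 + b2).item()
print(forecast)
```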


2019 ◽  
Vol 218 (3) ◽  
pp. 2150-2164 ◽  
Author(s):  
Santiago R Soler ◽  
Agustina Pesce ◽  
Mario E Gimenez ◽  
Leonardo Uieda

We present a new methodology to compute the gravitational fields generated by tesseroids (spherical prisms) whose density varies with depth according to an arbitrary continuous function. It approximates the gravitational fields through Gauss–Legendre quadrature along with two discretization algorithms that automatically control its accuracy by adaptively dividing the tesseroid into smaller ones. The first is a preexisting 2-D adaptive discretization algorithm that reduces the errors due to the distance between the tesseroid and the computation point. The second is a new density-based discretization algorithm that decreases the errors introduced by the variation of the density function with depth. The number of divisions made by each algorithm is indirectly controlled by two parameters: the distance-size ratio and the delta ratio. We have obtained analytical solutions for a spherical shell with radially variable density and compared them to the results of the numerical model for linear, exponential, and sinusoidal density functions. The heavily oscillating density functions are intended only to test the algorithm to its limits and not to emulate a real-world case. These comparisons allowed us to obtain optimal values for the distance-size and delta ratios that yield an accuracy of 0.1 per cent relative to the analytical solutions. The resulting optimal values of the distance-size ratio for the gravitational potential and its gradient are 1 and 2.5, respectively. The density-based discretization algorithm produces no discretizations in the linear density case, but a delta ratio of 0.1 is needed for the exponential and most sinusoidal density functions. These values can be extrapolated to cover most common use cases, which are simpler than oscillating density profiles. However, the distance-size and delta ratios can be configured by the user to increase the accuracy of the results at the expense of computational speed. Finally, we apply this new methodology to model the Neuquén Basin, a foreland basin in Argentina with a maximum depth of over 5000 m, using an exponential density function.
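The snippet below is a conceptual sketch of the density-based splitting idea only: a depth interval is subdivided until the density variation inside each piece is small relative to the total variation, judged against a delta ratio. The specific splitting criterion, sampling, and numbers are assumptions for illustration and are not the paper's implementation (which operates on full tesseroids).

```python
# Conceptual sketch only: radial subdivision of a depth interval driven by how
# much the density varies inside it, in the spirit of the density-based
# discretization described above. The splitting rule used here (variation
# within the interval relative to the total variation, compared against a
# "delta ratio") is an assumption for illustration, not the paper's criterion.
import numpy as np

def split_radially(density, r_bottom, r_top, delta_ratio=0.1, samples=101):
    """Return radial boundaries such that the density variation inside each
    sub-interval stays below delta_ratio times the total variation."""
    r_all = np.linspace(r_bottom, r_top, samples)
    total_variation = density(r_all).max() - density(r_all).min()
    if total_variation == 0:
        return [r_bottom, r_top]

    def variation(a, b):
        r = np.linspace(a, b, samples)
        return density(r).max() - density(r).min()

    pieces, stack = [], [(r_bottom, r_top)]
    while stack:
        a, b = stack.pop()
        if variation(a, b) > delta_ratio * total_variation:
            mid = 0.5 * (a + b)
            stack.extend([(mid, b), (a, mid)])     # split and re-examine halves
        else:
            pieces.append((a, b))
    return sorted({a for a, _ in pieces} | {r_top})

# Toy exponential density decreasing with depth below a reference radius (km, kg/m^3).
rho = lambda r: 2670.0 * np.exp(-(6371.0 - r) / 10.0)
print(split_radially(rho, r_bottom=6271.0, r_top=6371.0, delta_ratio=0.1))
```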


In this article we obtain sufficient conditions for the existence of solutions of a nonlinear Noetherian boundary value problem for a system of differential-algebraic equations; such systems are widely used in mechanics, economics, electrical engineering, and control theory. We study the case of a nondegenerate system of differential-algebraic equations, namely a differential-algebraic system that is solvable with respect to the derivative. In this case, the nonlinear system of differential-algebraic equations reduces to a system of ordinary differential equations containing an arbitrary continuous function. The nonlinear differential-algebraic boundary value problem studied here generalizes numerous statements of nonlinear Noetherian boundary value problems considered in the monographs of A.M. Samoilenko, E.A. Grebenikov, Yu.A. Ryabov, A.A. Boichuk, and S.M. Chuiko, and the results obtained can be carried over to matrix boundary value problems for differential-algebraic systems. In contrast to the works of S. Campbell, V.F. Boyarintsev, V.F. Chistyakov, A.M. Samoilenko, and A.A. Boichuk, the results obtained here for differential-algebraic boundary value problems do not rely on the central canonical form or on perfect pairs and triples of matrices. To construct solutions of the boundary value problem under consideration, we propose an iterative scheme based on the method of simple iterations. The proposed solvability conditions and the scheme for finding solutions of the nonlinear Noetherian differential-algebraic boundary value problem are illustrated with an example. To assess the accuracy of the approximations to the solution of the nonlinear differential-algebraic boundary value problem, we compute the residuals of the obtained approximations in the original equation. We also note that the obtained approximations satisfy the boundary condition exactly.
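As a toy illustration of the two ingredients mentioned above, reduction of a derivative-solvable differential-algebraic system to an ordinary differential equation and the method of simple iterations, the sketch below applies Picard iterations to the reduced equation on a grid. The concrete system, grid, and boundary data are illustrative assumptions; the paper's scheme for the Noetherian boundary value problem is more involved.

```python
# Toy illustration only (not the paper's scheme): a semi-explicit
# differential-algebraic system that is solvable with respect to the derivative
# is reduced to an ordinary differential equation, which is then treated by the
# method of simple (Picard) iterations. System, grid, and data are assumptions.
import numpy as np

# DAE:  x'(t) = -y(t),   0 = y(t) - 2*x(t)   =>   reduced ODE:  x'(t) = -2*x(t)
f_reduced = lambda t, x: -2.0 * x

t = np.linspace(0.0, 1.0, 201)
x0 = 1.0                                   # value fixed by the boundary condition
x = np.full_like(t, x0)                    # initial guess: constant function

for k in range(60):                        # simple (Picard) iterations
    rhs = f_reduced(t, x)
    # x_{k+1}(t) = x(0) + integral_0^t f(s, x_k(s)) ds   (trapezoidal rule)
    integral = np.concatenate(
        ([0.0], np.cumsum(0.5 * (rhs[1:] + rhs[:-1]) * np.diff(t))))
    x_new = x0 + integral
    if np.max(np.abs(x_new - x)) < 1e-12:
        break
    x = x_new

print(np.max(np.abs(x - np.exp(-2.0 * t))))   # error against the exact solution
```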


2016 ◽  
Vol 28 (7) ◽  
pp. 1289-1304 ◽  
Author(s):  
Namig J. Guliyev ◽  
Vugar E. Ismailov

The possibility of approximating a continuous function on a compact subset of the real line by a feedforward neural network with a single hidden layer and a sigmoidal activation function has been studied in many papers. Such networks can approximate an arbitrary continuous function provided that an unlimited number of neurons in the hidden layer is permitted. In this note, we consider constructive approximation on any finite interval of $\mathbb{R}$ by neural networks with only one neuron in the hidden layer. We construct algorithmically a smooth, sigmoidal, almost monotone activation function $\sigma$ providing approximation of an arbitrary continuous function to within any degree of accuracy. This algorithm is implemented in a computer program that computes the value of $\sigma$ at any reasonable point of the real axis.
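For orientation, the snippet below just evaluates the one-hidden-neuron network form g(x) = c1*sigma(w*x - theta) + c0. The logistic sigma used here is only a placeholder to make the code run; the note's contribution is an algorithmically constructed sigma that makes this one-neuron form approximate any continuous function on a finite interval.

```python
# Sketch of the network *form* considered in the note: a single hidden neuron,
# so the whole approximant is  g(x) = c1 * sigma(w*x - theta) + c0.  The
# logistic sigma below is an ordinary placeholder, not the specially
# constructed activation function from the paper.
import numpy as np

def one_neuron_net(x, sigma, w, theta, c1, c0):
    """Feedforward network with exactly one hidden neuron."""
    return c1 * sigma(w * x - theta) + c0

logistic = lambda t: 1.0 / (1.0 + np.exp(-t))
x = np.linspace(0.0, 1.0, 5)
print(one_neuron_net(x, logistic, w=3.0, theta=1.5, c1=2.0, c0=-1.0))
```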


2013 ◽  
Vol 13 (4) ◽  
Author(s):  
Kaizhi Wang ◽  
Yong Li

This paper contributes several results on weak KAM theory for time-periodic Tonelli Lagrangian systems. Wang and Yan [Commun. Math. Phys. 309 (2012), 663-691] introduced a new kind of Lax-Oleinik type operator associated with any time-periodic Tonelli Lagrangian. First, using the new operator we give an equivalent definition of the backward weak KAM solution. Then, taking advantage of this definition, we prove a result on the asymptotic behavior of the new operators with an arbitrary continuous function as initial condition. Finally, for a specific class of time-periodic Tonelli Lagrangians, we discuss the rate of convergence of the new operators.
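For reference, the classical backward Lax-Oleinik operator attached to a time-dependent Tonelli Lagrangian L on a manifold M is usually written as below; the "new kind" of operator introduced by Wang and Yan modifies this construction, so the formula is only the standard definition, not theirs.

```latex
% Classical backward Lax-Oleinik operator for a time-dependent Tonelli
% Lagrangian L; the infimum runs over absolutely continuous curves on M
% that end at the point x at time t.
T_t u(x) \;=\; \inf\Big\{\, u(\gamma(0)) + \int_0^{t} L\bigl(\gamma(s), \dot{\gamma}(s), s\bigr)\, ds
\;:\; \gamma \in AC([0,t]; M),\ \gamma(t) = x \,\Big\}
```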


1972 ◽  
Vol 13 (1) ◽  
pp. 29-38 ◽  
Author(s):  
H. C. Finlayson

This paper deals with the following problem: Can an arbitrary continuous function on [0, 1], which vanishes at the origin, be represented in some sense as a series of constant multiples of indefinite integrals of a complete orthonormal set of functions on [0, 1]? Four contexts in which this problem arises naturally will be given in the introduction, and the remainder of the paper will be devoted to giving a partial answer to the specific problem formulated in one of these contexts.
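A toy numerical illustration (not the paper's analysis) of the representation in question: approximate a function vanishing at the origin by a finite sum of constant multiples of indefinite integrals of an orthonormal set. The cosine basis and the least-squares fit below are illustrative choices.

```python
# Toy numerical illustration (not the paper's analysis): approximate f on
# [0, 1], with f(0) = 0, by a finite sum of constant multiples of indefinite
# integrals of an orthonormal set. Here the orthonormal set is the cosine
# basis {1, sqrt(2) cos(pi n t)} on [0, 1] and the coefficients are fitted by
# least squares; both are illustrative choices.
import numpy as np

x = np.linspace(0.0, 1.0, 400)
f = x * (1.0 - x) ** 2                      # target function, f(0) = 0

n_terms = 12
# Indefinite integrals of the orthonormal functions, evaluated on the grid:
# integral of 1 is x; integral of sqrt(2) cos(pi n t) is sqrt(2) sin(pi n x)/(pi n).
columns = [x] + [np.sqrt(2.0) * np.sin(np.pi * n * x) / (np.pi * n)
                 for n in range(1, n_terms)]
A = np.column_stack(columns)

coeffs, *_ = np.linalg.lstsq(A, f, rcond=None)
print("max error:", np.max(np.abs(A @ coeffs - f)))
```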

