approximation theorem
Recently Published Documents


TOTAL DOCUMENTS

499
(FIVE YEARS 63)

H-INDEX

24
(FIVE YEARS 5)

Author(s):  
Abhishek Kumar

In the present article, we define a new kind of the modified Bernstein-Kantorovich operators introduced by Özarslan (https://doi.org/10.1080/01630563.2015.1079219): we introduce a function ς(x) into those operators, where ς is infinitely differentiable on [0, 1], ς(0) = 0, ς(1) = 1, and ς′(x) > 0 for all x ∈ [0, 1]. We establish an approximation theorem by means of the Bohman-Korovkin type theorem and scrutinize the rate of convergence with the aid of the modulus of continuity and Lipschitz-type functions for our operators; the rate of convergence for functions with derivatives of bounded variation is also studied. We further prove an approximation theorem, again with the help of the Bohman-Korovkin type theorem, in the setting of A-statistical convergence. Lastly, by means of a numerical example, we illustrate the convergence of these operators to certain functions through graphs produced with MATHEMATICA, and show that a careful choice of the function ς(x) leads to better approximation results than the modified Bernstein-Kantorovich operators of Özarslan (https://doi.org/10.1080/01630563.2015.1079219).
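The classical Bernstein-Kantorovich operator that these modified operators build on, K_n(f; x) = (n+1) Σ_k p_{n,k}(x) ∫_{k/(n+1)}^{(k+1)/(n+1)} f(t) dt with p_{n,k}(x) = C(n,k) x^k (1−x)^{n−k}, can be sketched numerically. This is a minimal illustration of the unmodified operator only (the ς-modified variant of the abstract is not reproduced here); the test function f(t) = t² and evaluation point are arbitrary choices.

```python
from math import comb

def bernstein_kantorovich(f, n, x, m=200):
    # K_n(f; x) = (n+1) * sum_k p_{n,k}(x) * integral over [k/(n+1), (k+1)/(n+1)] of f
    total = 0.0
    for k in range(n + 1):
        p = comb(n, k) * x**k * (1 - x)**(n - k)   # Bernstein basis p_{n,k}(x)
        a, b = k / (n + 1), (k + 1) / (n + 1)
        h = (b - a) / m
        # midpoint rule for the integral of f over [a, b]
        integral = sum(f(a + (j + 0.5) * h) for j in range(m)) * h
        total += p * integral
    return (n + 1) * total

# Error shrinks as n grows, as the Korovkin-type convergence predicts.
f = lambda t: t * t
err10 = abs(bernstein_kantorovich(f, 10, 0.3) - f(0.3))
err80 = abs(bernstein_kantorovich(f, 80, 0.3) - f(0.3))
```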


2021 ◽  
Vol 58 ◽  
pp. 7-21
Author(s):  
Christian Budde

We use a version of the Trotter-Kato approximation theorem for strongly continuous semigroups in order to study flows on growing networks. For that reason we use the abstract notion of direct limits in the sense of category theory.
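As an elementary illustration of semigroup approximation (in the scalar case only, far removed from the network-flow setting of the paper), the backward-Euler resolvent approximation (I − (t/n)A)^{−n} converges to the semigroup e^{tA}; the generator a = −1 below is an arbitrary choice.

```python
import math

def resolvent_approx(a, t, n):
    # Backward-Euler (resolvent) approximation of the semigroup e^{tA}
    # for the scalar generator A = a: (1 - (t/n) a)^{-n}.
    return (1 - t * a / n) ** (-n)

# The approximation error at t = 1 decreases as the number of steps n grows.
err_coarse = abs(resolvent_approx(-1.0, 1.0, 10) - math.exp(-1.0))
err_fine = abs(resolvent_approx(-1.0, 1.0, 1000) - math.exp(-1.0))
```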


2021 ◽  
Author(s):  
Rafael A. F. Carniello ◽  
Wington L. Vital ◽  
Marcos Eduardo Valle

The universal approximation theorem ensures that any continuous real-valued function defined on a compact subset can be approximated with arbitrary precision by a single hidden layer neural network. In this paper, we show that the universal approximation theorem also holds for tessarine-valued neural networks. Precisely, any continuous tessarine-valued function can be approximated with arbitrary precision by a single hidden layer tessarine-valued neural network with split activation functions in the hidden layer. A simple numerical example, confirming the theoretical result and revealing the superior performance of a tessarine-valued neural network over a real-valued model for interpolating a vector-valued function, is presented in the paper.
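A "split" activation, as mentioned in the abstract, applies a real activation componentwise to the four tessarine components. The sketch below uses Cockle's convention for the tessarine basis (i² = −1, j² = +1, k = ij = ji); sign conventions vary in the literature, and the tanh activation and layer shapes are illustrative choices, not those of the paper.

```python
import math

def tmul(t, u):
    # Tessarine product in the basis (1, i, j, k), Cockle convention:
    # i^2 = -1, j^2 = +1, k = ij = ji (hence k^2 = -1); multiplication commutes.
    a, b, c, d = t
    e, f, g, h = u
    return (a*e - b*f + c*g - d*h,
            a*f + b*e + c*h + d*g,
            a*g - b*h + c*e - d*f,
            a*h + b*g + c*f + d*e)

def split_tanh(t):
    # Split activation: the real activation applied to each component.
    return tuple(math.tanh(x) for x in t)

def hidden_layer(x, weights, biases):
    # One tessarine-valued hidden layer: y_m = split_tanh(w_m * x + b_m).
    return [split_tanh(tuple(p + q for p, q in zip(tmul(w, x), b)))
            for w, b in zip(weights, biases)]
```

For example, `hidden_layer((1.0, 0, 0, 0), [(0.5, 0, 0, 0)], [(0.1, 0, 0, 0)])` yields a single neuron output `(tanh(0.6), 0, 0, 0)`.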


2021 ◽  
Vol 25 (2) ◽  
pp. 189-200
Author(s):  
Sevda Yildiz

In the present paper, an interesting type of convergence, named ideal relative uniform convergence for double sequences of functions, is introduced for the first time. A Korovkin-type approximation theorem is then proved via this new type of convergence, and an example shows that the new type of convergence is stronger than those considered before. Finally, the rate of I2-relative uniform convergence is computed.
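Ideal convergence generalizes statistical convergence (take the ideal of density-zero index sets). As background only, and for single rather than double sequences, a standard example: the indicator of the perfect squares does not converge, yet converges statistically to 0 because the squares have natural density zero.

```python
import math

def bad_index_density(x, L, eps, N):
    # Proportion of indices n <= N with |x_n - L| >= eps; statistical
    # convergence to L means this density tends to 0 for every eps > 0.
    bad = sum(1 for n in range(1, N + 1) if abs(x(n) - L) >= eps)
    return bad / N

# x_n = 1 at perfect squares, 0 elsewhere: divergent in the ordinary sense,
# but statistically convergent to 0 (about sqrt(N)/N bad indices up to N).
x = lambda n: 1.0 if math.isqrt(n) ** 2 == n else 0.0
density = bad_index_density(x, 0.0, 0.5, 100_000)
```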


2021 ◽  
Vol 71 (5) ◽  
pp. 1179-1188
Author(s):  
Chandra Prakash ◽  
Durvesh Kumar Verma ◽  
Naokant Deo

Abstract The main objective of this paper is to construct a new sequence of operators involving Apostol-Genocchi polynomials based on certain parameters. We investigate the rate of convergence of these operators using the second-order modulus of continuity and a Voronovskaja-type approximation theorem. Moreover, we obtain a weighted approximation result for the given operators. Finally, we derive the Kantorovich variant of the given operators and discuss its approximation results.
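The second-order modulus of continuity used in such rate estimates is ω₂(f, δ) = sup{|f(x+h) − 2f(x) + f(x−h)| : 0 ≤ h ≤ δ, x ± h ∈ [a, b]}. A brute-force grid estimate is sketched below; the interval [0, 1] and grid resolution are illustrative choices, not the paper's method.

```python
def second_modulus(f, delta, a=0.0, b=1.0, grid=400):
    # Grid estimate of omega_2(f, delta): maximize the symmetric second
    # difference |f(x + h) - 2 f(x) + f(x - h)| over 0 <= h <= delta
    # and x with x - h, x + h inside [a, b].
    best = 0.0
    for i in range(grid + 1):
        h = delta * i / grid
        x0, x1 = a + h, b - h          # admissible range for x
        for j in range(grid + 1):
            x = x0 + (x1 - x0) * j / grid
            best = max(best, abs(f(x + h) - 2 * f(x) + f(x - h)))
    return best
```

For f(x) = x² the symmetric second difference equals 2h² exactly, so the estimate returns 2δ², matching the closed form.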

