Quantitative Estimates for Positive Linear Operators in terms of the Usual Second Modulus

2015, Vol 2015, pp. 1-11

Author(s):
José A. Adell
A. Lekuona

We give accurate estimates of the constants C_n(A(I), x) appearing in direct inequalities of the form |L_n f(x) − f(x)| ≤ C_n(A(I), x) ω₂(f; σ(x)/√n), f ∈ A(I), x ∈ I, n = 1, 2, …, where L_n is a positive linear operator reproducing linear functions and acting on real functions f defined on the interval I, A(I) is a certain subset of such functions, ω₂(f; ·) is the usual second modulus of f, and σ(x) is an appropriate weight function. We show that the size of the constants C_n(A(I), x) mainly depends on the degree of smoothness of the functions in the set A(I) and on the distance from the point x to the boundary of I. We give a closed-form expression for the best constant when A(I) is a certain set of continuous piecewise linear functions. As illustrative examples, the Szász–Mirakyan operators and the Bernstein polynomials are discussed.
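As a numerical illustration of the quantities in this inequality (our own sketch, not from the paper): the Python fragment below evaluates the Bernstein polynomials B_n f on [0, 1] and a brute-force approximation of ω₂(f; h), so one can compare |B_n f(x) − f(x)| against ω₂(f; σ(x)/√n) with the Bernstein weight σ(x) = √(x(1−x)). The function names, grid resolution, and test function are arbitrary choices.

```python
import math

def bernstein(f, n, x):
    # B_n f(x) = sum_{k=0}^{n} f(k/n) * C(n,k) * x^k * (1-x)^(n-k).
    # B_n reproduces linear functions, as required of L_n in the abstract.
    return sum(f(k / n) * math.comb(n, k) * x**k * (1 - x)**(n - k)
               for k in range(n + 1))

def second_modulus(f, h, grid=200):
    # Brute-force approximation of the usual second modulus
    # ω₂(f; h) = sup |f(x+t) - 2 f(x) + f(x-t)| over 0 < t <= h
    # and x with x ± t in [0, 1]; the grid resolution is arbitrary.
    sup = 0.0
    for i in range(1, grid + 1):
        t = h * i / grid
        for j in range(grid + 1):
            x = j / grid
            if x - t >= 0 and x + t <= 1:
                sup = max(sup, abs(f(x + t) - 2 * f(x) + f(x - t)))
    return sup
```

For example, with f(x) = |x − 0.5|, n = 50, and x = 0.5 (so σ(x) = 0.5), the observed error |B_n f(x) − f(x)| stays below ω₂(f; σ(x)/√n), consistent with a constant of moderate size at this interior point.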

Author(s):  
Arturo Sarmiento-Reyes
Luis Hernandez-Martinez
Miguel Angel Gutierrez de Anda
Francisco Javier Castro Gonzalez

We describe a sense in which mesh duality is equivalent to Legendre duality. That is, a general pair of meshes, which satisfy a definition of duality for meshes, are shown to be the projection of a pair of piecewise linear functions that are dual to each other in the sense of a Legendre dual transformation. In applications the latter functions can be a tangent plane approximation to a smoother function, and a chordal plane approximation to its Legendre dual. Convex examples include one from meteorology, and also the relation between the Delaunay mesh and the Voronoi tessellation. The latter are shown to be the projections of tangent plane and chordal approximations to the same paraboloid.
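The Delaunay–Voronoi example admits a short concrete check (our own sketch, not from the paper): lift each site p to the point (p, |p|²) on the paraboloid z = x₁² + x₂². The tangent planes at two lifted sites intersect in a line whose projection back to the base plane is exactly the perpendicular bisector of the two sites, i.e., a Voronoi cell wall. All function names below are ours.

```python
def tangent_plane(p):
    # Tangent plane to z = x1^2 + x2^2 at the lifted point (p, |p|^2):
    # z = 2*p1*x1 + 2*p2*x2 - |p|^2, returned as coefficients (a1, a2, c).
    return (2 * p[0], 2 * p[1], -(p[0] ** 2 + p[1] ** 2))

def projected_intersection(p, q):
    # Equate the two tangent planes and eliminate z: the projection is the
    # line n·x = d in the base plane, i.e. 2(p - q)·x = |p|^2 - |q|^2.
    ap = tangent_plane(p)
    aq = tangent_plane(q)
    n = (ap[0] - aq[0], ap[1] - aq[1])
    d = aq[2] - ap[2]
    return n, d

def on_bisector(x, p, q, tol=1e-12):
    # x lies on the perpendicular bisector iff it is equidistant from p and q.
    dp = (x[0] - p[0]) ** 2 + (x[1] - p[1]) ** 2
    dq = (x[0] - q[0]) ** 2 + (x[1] - q[1]) ** 2
    return abs(dp - dq) < tol
```

For p = (0, 0) and q = (2, 0) the projected intersection is −4·x₁ = −4, i.e. the vertical line x₁ = 1, which is indeed the bisector of the two sites.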


Algorithms, 2020, Vol 13 (7), pp. 166

Author(s):
Andreas Griewank
Andrea Walther

For piecewise linear functions f : ℝⁿ → ℝ we show how their abs-linear representation can be extended to yield simultaneously their decomposition into a convex part f̌ and a concave part f̂, including a pair of generalized gradients ǧ, ĝ ∈ ℝⁿ. The latter satisfy strict chain rules and can be computed in the reverse mode of algorithmic differentiation, at a small multiple of the cost of evaluating f itself. It is shown how f̌ and f̂ can be expressed as a single maximum and a single minimum of affine functions, respectively. The two subgradients ǧ and −ĝ are then used to drive DCA algorithms, where the (convex) inner problem can be solved in finitely many steps, e.g., by a Simplex variant or the true steepest descent method. Using a reflection technique to update the gradients of the concave part, one can ensure finite convergence to a local minimizer of f, provided the Linear Independence Kink Qualification holds. For piecewise smooth objectives the approach can be used as an inner method for successive piecewise linearization.
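As a toy 1-D illustration of the DCA driver described above (our own sketch, not the authors' abs-linear machinery): the piecewise linear f(x) = |x| + |x − 2| − |x − 1| is written as a convex part g minus a convex part h. Each DCA step linearizes the concave part −h at the current iterate via a subgradient of h and minimizes the resulting convex piecewise linear model, here by brute force on a grid as a stand-in for the finite Simplex solve. Function names, the grid, and the test function are our choices.

```python
def g(x):      # convex part, a max-of-affine form: |x| + |x - 2|
    return abs(x) + abs(x - 2)

def h(x):      # the concave part of f is -h, with h convex: |x - 1|
    return abs(x - 1)

def f(x):      # the piecewise linear objective f = g - h
    return g(x) - h(x)

def subgrad_h(x):
    # A subgradient of h at x (at the kink x = 1 we pick -1).
    return 1.0 if x > 1 else -1.0

GRID = [i / 100 for i in range(-300, 501)]   # brute-force search grid

def dca_step(x):
    # Linearize the concave part at x and minimize the convex model
    # g(t) - s*t over the grid.
    s = subgrad_h(x)
    return min(GRID, key=lambda t: g(t) - s * t)

def dca(x, max_iter=50):
    # Iterate DCA steps until a fixed point (a candidate local minimizer).
    for _ in range(max_iter):
        x_new = dca_step(x)
        if x_new == x:
            return x
        x = x_new
    return x
```

Starting from x₀ = 0.5 the iteration stops at the local minimizer x = 0; starting from x₀ = 1.5 it stops at x = 2; f takes the value 1 at both kinks, matching the finite convergence to a local minimizer described in the abstract.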

