On the pseudo-monotonicity of generalized gradients of nonconvex functions

1992 ◽  
Vol 47 (1-4) ◽  
pp. 151-172 ◽  
Author(s):  
Zdzisław Naniewicz
1985 ◽  
Vol 37 (6) ◽  
pp. 1074-1084 ◽  
Author(s):  
Jay S. Treiman

In the study of optimization problems it is necessary to consider functions that are not differentiable. This has led to the consideration of generalized gradients and a corresponding calculus for certain classes of functions. Rockafellar [16] and others have developed a very strong and elegant theory of subgradients for convex functions. This convex theory gives point-wise criteria for the existence of extrema in optimization problems. There are, however, many optimization problems that involve functions which are neither differentiable nor convex. Such functions arise in many settings, including optimal value functions [15]. In order to deal with such problems, Clarke [3] defined a type of subgradient for nonconvex functions. This definition was initially for Lipschitz functions on Rⁿ. Clarke extended this definition to include lower semicontinuous (l.s.c.) functions on Banach spaces through the use of a directional derivative, the distance function from a closed set, and tangent and normal cones to closed sets.
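For the Lipschitz case mentioned above, the standard definitions can be stated compactly; the following is a sketch in conventional notation (not taken from the paper itself):

```latex
% Clarke's generalized directional derivative of a locally Lipschitz f at x:
f^{\circ}(x; v) \;=\; \limsup_{\substack{y \to x \\ t \downarrow 0}}
  \frac{f(y + t v) - f(y)}{t},
% and the generalized gradient (Clarke subdifferential) it induces:
\partial f(x) \;=\; \bigl\{\, \xi \in X^{*} :
  \langle \xi, v \rangle \le f^{\circ}(x; v) \ \text{for all } v \in X \,\bigr\}.
```

When f is convex, this set reduces to the convex-analytic subdifferential, which is why Clarke's construction is viewed as an extension of the convex theory.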


1980 ◽  
Vol 32 (2) ◽  
pp. 257-280 ◽  
Author(s):  
R. T. Rockafellar

Studies of optimization problems and certain kinds of differential equations have led in recent years to the development of a generalized theory of differentiation quite distinct in spirit and range of application from the one based on L. Schwartz's "distributions." This theory associates with an extended-real-valued function ƒ on a linear topological space E and a point x ∈ E certain elements of the dual space E* called subgradients or generalized gradients of ƒ at x. These form a set ∂ƒ(x) that is always convex and weak*-closed (possibly empty). The multifunction ∂ƒ: x → ∂ƒ(x) is the subdifferential of ƒ. Rules that relate ∂ƒ to generalized directional derivatives of ƒ, or allow ∂ƒ to be expressed or estimated in terms of the subdifferentials of other functions (when ƒ = ƒ₁ + ƒ₂, ƒ = g ∘ A, etc.), comprise the subdifferential calculus.
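For convex ƒ, the subgradient set and the two calculus rules named above take a well-known form; the following sketch uses standard notation and only indicates the qualification conditions, which are not the paper's exact hypotheses:

```latex
% Subgradient set of a convex f at x:
\partial f(x) = \bigl\{\, x^{*} \in E^{*} :
  f(y) \ge f(x) + \langle x^{*},\, y - x \rangle \ \text{for all } y \in E \,\bigr\}.
% Sum rule: the inclusion always holds, with equality under a qualification
% condition (e.g. f_1 continuous at some point of dom f_2, Moreau--Rockafellar):
\partial f_{1}(x) + \partial f_{2}(x) \subseteq \partial (f_{1} + f_{2})(x).
% Chain rule for f = g \circ A with A linear, again with equality
% under a suitable qualification condition:
A^{*}\,\partial g(Ax) \subseteq \partial (g \circ A)(x).
```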


2021 ◽  
Vol 5 (3) ◽  
pp. 80
Author(s):  
Hari Mohan Srivastava ◽  
Artion Kashuri ◽  
Pshtiwan Othman Mohammed ◽  
Dumitru Baleanu ◽  
Y. S. Hamed

In this paper, the authors define a new generic class of functions involving a certain modified Fox–Wright function. A useful identity using fractional integrals and this modified Fox–Wright function with two parameters is also found. Applying this as an auxiliary result, we establish some Hermite–Hadamard-type integral inequalities by using the above-mentioned class of functions. Some special cases are derived with relevant details. Moreover, in order to show the efficiency of our main results, an application for error estimation is obtained as well.
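For orientation, the classical inequality that Hermite–Hadamard-type results of this kind generalize is the following standard fact for a convex function f on [a, b] (this is the textbook form, not the paper's modified Fox–Wright version):

```latex
f\!\left(\frac{a+b}{2}\right)
  \;\le\; \frac{1}{b-a} \int_{a}^{b} f(t)\, \mathrm{d}t
  \;\le\; \frac{f(a) + f(b)}{2}.
```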


Author(s):  
Sjur Didrik Flåm

By the first welfare theorem, competitive market equilibria belong to the core and hence are Pareto optimal. Letting money be a commodity, this paper turns these two inclusions around. More precisely, by generalizing the second welfare theorem we show that the said solutions may coincide as a common fixed point for one and the same system. Mathematical arguments invoke conjugation, convolution, and generalized gradients. Convexity is needed merely via subdifferentiability of aggregate "cost", and at one point only. Economic arguments hinge on idealized market mechanisms. Construed as algorithms, each stops, and a steady state prevails, if and only if price-taking markets clear and value added is nil.
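The convex-analytic tools named here have compact standard definitions; a sketch in conventional notation (not the paper's own formulation):

```latex
% Fenchel conjugate of f:
f^{*}(p) = \sup_{x}\,\bigl\{ \langle p, x \rangle - f(x) \bigr\},
% Infimal convolution of f_1 and f_2:
(f_{1} \,\square\, f_{2})(x)
  = \inf_{x_{1} + x_{2} = x}\,\bigl\{ f_{1}(x_{1}) + f_{2}(x_{2}) \bigr\},
% and conjugation turns convolution into addition:
(f_{1} \,\square\, f_{2})^{*} = f_{1}^{*} + f_{2}^{*}.
```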


Author(s):  
Jose Carrillo ◽  
Shi Jin ◽  
Lei Li ◽  
Yuhua Zhu

We improve a recently introduced consensus-based optimization method, proposed in [R. Pinnau, C. Totzeck, O. Tse and S. Martin, Math. Models Methods Appl. Sci., 27(01):183–204, 2017], which is a gradient-free optimization method for general nonconvex functions. We first replace the isotropic geometric Brownian motion by the component-wise one, thus removing the dimensionality dependence of the drift rate and making the method more competitive for high-dimensional optimization problems. Secondly, we utilize random mini-batch ideas to reduce the computational cost of calculating the weighted average toward which the individual particles tend to relax. For its mean-field limit, a nonlinear Fokker–Planck equation, we prove, in both time-continuous and semi-discrete settings, that the convergence of the method, which is exponential in time, is guaranteed with parameter constraints independent of the dimensionality. We also conduct numerical tests on high-dimensional problems to check the success rate of the method.
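The two modifications described above (component-wise noise and mini-batch consensus points) can be sketched in a few lines. This is a minimal illustration, not the authors' exact scheme: the parameter names (`lam`, `sigma`, `beta`), the Euler–Maruyama discretization, and all default values are illustrative choices.

```python
import numpy as np

def consensus_based_optimization(f, dim, n_particles=50, n_steps=500,
                                 lam=1.0, sigma=0.7, beta=30.0,
                                 batch_size=20, dt=0.02, seed=0):
    """Gradient-free minimization of f over R^dim by anisotropic CBO (sketch)."""
    rng = np.random.default_rng(seed)
    X = rng.uniform(-3.0, 3.0, size=(n_particles, dim))
    for _ in range(n_steps):
        # Mini-batch: estimate the weighted average from a random subset
        # instead of all particles, reducing per-step cost.
        idx = rng.choice(n_particles, size=batch_size, replace=False)
        batch = X[idx]
        vals = np.array([f(x) for x in batch])
        # Gibbs-type weights favoring low function values (shifted for stability).
        w = np.exp(-beta * (vals - vals.min()))
        x_bar = (w[:, None] * batch).sum(axis=0) / w.sum()
        # Drift toward the consensus point; component-wise diffusion scales
        # the noise by |X - x_bar| per coordinate, avoiding the sqrt(dim)
        # growth of isotropic noise.
        diff = X - x_bar
        noise = rng.standard_normal(X.shape)
        X = X - lam * dt * diff + sigma * np.sqrt(dt) * diff * noise
    return x_bar

# Example: a shifted quadratic with known minimizer (1, ..., 1).
x_star = consensus_based_optimization(lambda x: np.sum((x - 1.0) ** 2), dim=5)
```

On a convex test function like this, the particles collapse onto a consensus point near the minimizer; the nonconvex case is where the weighted-average mechanism matters.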

