Generalized Gradients, Lipschitz behavior and Directional Derivatives

1985 ◽  
Vol 37 (6) ◽  
pp. 1074-1084 ◽  
Author(s):  
Jay S. Treiman

In the study of optimization problems it is necessary to consider functions that are not differentiable. This has led to the consideration of generalized gradients and a corresponding calculus for certain classes of functions. Rockafellar [16] and others have developed a very strong and elegant theory of subgradients for convex functions. This convex theory gives pointwise criteria for the existence of extrema in optimization problems. There are, however, many optimization problems that involve functions which are neither differentiable nor convex. Such functions arise in many settings, including optimal value functions [15]. In order to deal with such problems, Clarke [3] defined a type of subgradient for nonconvex functions. This definition was initially for Lipschitz functions on Rn. Clarke extended this definition to include lower semicontinuous (l.s.c.) functions on Banach spaces through the use of a directional derivative, the distance function from a closed set, and tangent and normal cones to closed sets.
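For orientation, Clarke's construction for a locally Lipschitz function f on a Banach space X rests on the generalized directional derivative (standard definitions, stated here only for reference):

$$ f^{\circ}(x; v) = \limsup_{y \to x,\ t \downarrow 0} \frac{f(y + t v) - f(y)}{t}, \qquad \partial f(x) = \{\, \xi \in X^{*} : \langle \xi, v \rangle \le f^{\circ}(x; v) \ \text{for all } v \in X \,\}. $$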

1989 ◽  
Vol 39 (2) ◽  
pp. 233-238 ◽  
Author(s):  
Simon Fitzpatrick

We investigate the circumstances under which the existence of a one-sided directional derivative equal to 1 or −1 for the distance function to a closed set in a Banach space implies the existence of nearest points. In reflexive spaces we show that at a dense set of points outside a closed set the distance function has a directional derivative equal to 1.
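For reference, the objects involved are the standard ones (notation ours): for a closed set S in a Banach space and a direction u,

$$ d_S(x) = \inf_{s \in S} \|x - s\|, \qquad d_S'(x; u) = \lim_{t \downarrow 0} \frac{d_S(x + t u) - d_S(x)}{t}. $$

Since $d_S$ is 1-Lipschitz, $|d_S'(x; u)| \le \|u\|$, so for unit directions the values 1 and −1 are the extreme ones.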


1980 ◽  
Vol 32 (2) ◽  
pp. 257-280 ◽  
Author(s):  
R. T. Rockafellar

Studies of optimization problems and certain kinds of differential equations have led in recent years to the development of a generalized theory of differentiation quite distinct in spirit and range of application from the one based on L. Schwartz's “distributions.” This theory associates with an extended-real-valued function ƒ on a linear topological space E and a point x ∈ E certain elements of the dual space E* called subgradients or generalized gradients of ƒ at x. These form a set ∂ƒ(x) that is always convex and weak*-closed (possibly empty). The multifunction ∂ƒ: x → ∂ƒ(x) is the subdifferential of ƒ. Rules that relate ∂ƒ to generalized directional derivatives of ƒ, or allow ∂ƒ to be expressed or estimated in terms of the subdifferentials of other functions (when ƒ = ƒ1 + ƒ2, ƒ = g ∘ A, etc.), comprise the subdifferential calculus.
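As a flavor of the calculus in question (standard statements; the sum rule requires suitable qualification conditions, which are suppressed here): the subgradient set is characterized through the generalized upper directional derivative, and sums obey an inclusion rule,

$$ \partial f(x) = \{\, x^{*} \in E^{*} : \langle x^{*}, v \rangle \le f^{\uparrow}(x; v) \ \text{for all } v \in E \,\}, \qquad \partial (f_1 + f_2)(x) \subset \partial f_1(x) + \partial f_2(x). $$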


2011 ◽  
Vol 30 (1) ◽  
pp. 39 ◽  
Author(s):  
Bruno Galerne

The covariogram of a measurable set A ⊂ Rd is the function gA that associates to each y ∈ Rd the Lebesgue measure of A ∩ (y + A). This paper proves two formulas. The first equates the directional derivatives at the origin of gA to the directional variations of A. The second equates the average directional derivative at the origin of gA to the perimeter of A. These formulas, previously known with restrictions, are proved for any measurable set. As a by-product, it is proved that the covariogram of a set A is Lipschitz if and only if A has finite perimeter, the Lipschitz constant being half the maximal directional variation. The two formulas have counterparts for the mean covariogram of random sets. They also make it possible to compute the expected perimeter per unit volume of any stationary random closed set. As an illustration, the expected perimeter per unit volume of stationary Boolean models with arbitrary grain distribution is computed.
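A minimal numerical sketch of the quantities involved (our illustration, not from the paper; the grid resolution and the example set are arbitrary choices): the covariogram of a discretized planar set, together with a difference quotient at the origin whose limit the paper relates to the directional variation.

```python
import numpy as np

h = 0.01                                       # pixel size (arbitrary)
xs, ys = np.meshgrid(np.arange(-1.5, 1.5, h), np.arange(-1.5, 1.5, h))
A = xs**2 + ys**2 <= 1.0                       # indicator of the unit disk

def covariogram(A, shift_px, h):
    """g_A(y) = |A ∩ (y + A)| for a pixelized set, with the shift y given in pixels."""
    dy, dx = shift_px
    shifted = np.roll(np.roll(A, dy, axis=0), dx, axis=1)
    return np.count_nonzero(A & shifted) * h**2

g0 = covariogram(A, (0, 0), h)                 # g_A(0) = |A|, here approx. pi
for k in (1, 2, 4, 8):                         # shifts of k pixels along e_1
    gk = covariogram(A, (0, k), h)
    # difference quotient (g_A(0) - g_A(t e_1)) / t for t = k*h; as t -> 0 it
    # tends to minus the directional derivative of g_A at the origin
    print(k * h, (g0 - gk) / (k * h))
```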


Author(s):  
Kamil A. Khan ◽  
Yingwei Yuan

For any scalar-valued bivariate function that is locally Lipschitz continuous and directionally differentiable, it is shown that a subgradient may always be constructed from the function's directional derivatives in the four compass directions, arranged in a so-called "compass difference". When the original function is nonconvex, the obtained subgradient is an element of Clarke's generalized gradient, but the result appears to be novel even for convex functions. The function is not required to be represented in any particular form, and no further assumptions are required, though the result is strengthened when the function is additionally L-smooth in the sense of Nesterov. For certain optimal-value functions and certain parametric solutions of differential equation systems, these new results appear to provide the only known way to compute a subgradient. These results also imply that centered finite differences will converge to a subgradient for bivariate nonsmooth functions. As a dual result, we find that any compact convex set in two dimensions contains the midpoint of its interval hull. Examples are included for illustration, and it is demonstrated that these results do not extend directly to functions of more than two variables or sets in higher dimensions.
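A hedged sketch of the construction described above (our illustration, not the authors' code; the test function, the evaluation point, the step sizes and the name compass_difference_fd are arbitrary choices of ours): centered finite differences along the two coordinate axes, which the abstract states converge to a subgradient of a bivariate nonsmooth function.

```python
import numpy as np

def f(x):
    # A nonsmooth, nondifferentiable bivariate test function (arbitrary choice).
    return abs(x[0]) + max(x[0], x[1])

def compass_difference_fd(f, x, h):
    """Centered finite-difference estimate of the compass difference of f at x."""
    e1 = np.array([1.0, 0.0])
    e2 = np.array([0.0, 1.0])
    return np.array([
        (f(x + h * e1) - f(x - h * e1)) / (2.0 * h),
        (f(x + h * e2) - f(x - h * e2)) / (2.0 * h),
    ])

x = np.array([0.0, 0.0])                       # a point of nondifferentiability
for h in (1e-1, 1e-3, 1e-6):
    # per the abstract, these estimates converge to an element of Clarke's
    # generalized gradient of f at x as h -> 0
    print(h, compass_difference_fd(f, x, h))
```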


Author(s):  
D. T. V. An ◽  
C. Gutiérrez

Abstract This paper focuses on formulas for the ε-subdifferential of the optimal value function of scalar and vector convex optimization problems. These formulas can be applied even when the solution set of the problem is empty. In the scalar case, both unconstrained problems and problems with an inclusion constraint are considered. For the latter, limiting results are derived in such a way that no qualification conditions are required. The main mathematical tool is a limiting calculus rule for the ε-subdifferential of the sum of convex and lower semicontinuous functions defined on a (not necessarily reflexive) Banach space. In the vector case, unconstrained problems are studied and exact formulas are derived by linear scalarizations. These results are based on a concept of infimal set, the notion of cone-proper set and an ε-subdifferential for convex vector functions due to Taa.
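For reference, the ε-subdifferential of a convex function $f\colon X \to \mathbb{R}\cup\{+\infty\}$ at a point $\bar x$ with $f(\bar x)$ finite is the standard object

$$ \partial_\varepsilon f(\bar x) = \{\, x^{*} \in X^{*} : f(x) \ge f(\bar x) + \langle x^{*}, x - \bar x \rangle - \varepsilon \ \text{for all } x \in X \,\}, \qquad \varepsilon \ge 0, $$

which reduces to the usual convex subdifferential when ε = 0.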


2018 ◽  
Vol 26 (6) ◽  
pp. 789-797
Author(s):  
Mikhail Y. Kokurin

Abstract We investigate the nonlinear minimization problem on a convex closed set in a Hilbert space. It is shown that the uniform conditional well-posedness of a class of problems with weakly lower semicontinuous functionals is necessary and sufficient for the existence of regularization procedures with accuracy estimates uniform on this class. We also establish a necessary and sufficient condition for the existence of regularizing operators which do not use information on the error level in the input data. Similar results were previously known for regularization procedures for solving ill-posed inverse problems.


Author(s):  
Alain B. Zemkoho

Abstract We consider the optimal value function of a parametric optimization problem. A large number of publications have been dedicated to the study of continuity and differentiability properties of this function. However, work on differentiability in the current literature has mostly been limited to first-order analysis, with the focus on estimates of its directional derivatives and subdifferentials, given that the function is typically nonsmooth. With the progress made in the last two to three decades in major subfields of optimization such as robust, minmax, semi-infinite and bilevel optimization, and their connection to the optimal value function, there is a need for a second-order analysis of the generalized differentiability properties of this function. This could enable the development of robust solution algorithms, such as the Newton method. The main goal of this paper is to provide estimates of the generalized Hessian of the optimal value function. Our results are based on two handy tools from parametric optimization, namely the optimal solution and Lagrange multiplier mappings, for which completely detailed estimates of their generalized derivatives are either well known or can easily be obtained.
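For orientation, a typical first-order estimate of the kind alluded to above (stated loosely, with smooth data, suitable compactness and constraint qualifications assumed; notation ours, not the paper's) expresses subgradients of the optimal value function through the optimal solution mapping $S$ and the Lagrange multiplier mapping $\Lambda$:

$$ \varphi(x) = \min_{y} \{\, f(x, y) : g(x, y) \le 0 \,\}, \qquad \partial \varphi(x) \subset \operatorname{co} \bigcup_{y \in S(x)} \{\, \nabla_x f(x, y) + \nabla_x g(x, y)^{\mathsf T} \lambda : \lambda \in \Lambda(x, y) \,\}. $$

The paper's contribution is the analogous second-order (generalized Hessian) analysis.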


2004 ◽  
Vol 56 (4) ◽  
pp. 825-842 ◽  
Author(s):  
Jean-Paul Penot

Abstract Differentiability properties of optimal value functions associated with perturbed optimization problems require strong assumptions. We consider such a set of assumptions which does not use a compactness hypothesis but involves a kind of coherence property. Moreover, a strict differentiability property is obtained by using techniques of Ekeland and Lebourg and a result of Preiss. Such a strengthening is required in order to obtain genericity results.


Author(s):  
Andreas Heinrich Hamel ◽  
Daniela Visetti

The complete-lattice approach to optimization problems with a vector- or even set-valued objective has already produced a variety of new concepts and results and has been successfully applied in finance, statistics and game theory. For example, the duality issue for multi-criteria and vector optimization problems could be solved using the complete-lattice approach; compare [11]. So far, it has been applied to set-valued dynamic risk measures (in the stochastic case), as discussed by Feinstein, Rudloff and others (see [11], for example), but it has not been applied to deterministic calculus of variations and optimal control problems. In this paper, the following problem of set-valued optimization is considered: minimize the functional $$ \overline J_t[y]=\int_0^t \overline L(s,y(s),\dot y(s))\ ds + U_0(y(0)) $$ over all admissible arcs $y$, where $\overline L$ is the multifunction associated to a vector-valued Lagrangian $L$, the integral is in the Aumann sense and $U_0$ is the initial cost. A new concept of \emph{value function}, for which a Bellman optimality principle holds, is introduced. The classical Hopf-Lax formula also holds for the generalized value function. Finally, a derivative with respect to the time $t$ and a directional derivative with respect to $x$ of the value function are defined, based on ideas close to the concepts in [12]. The value function is proved to be a solution of a suitable Hamilton-Jacobi equation.
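For orientation, the classical scalar counterpart of the Hopf-Lax result mentioned above reads as follows (for a Lagrangian depending only on velocity, $L = L(v)$, convex and superlinear, with initial cost $U_0$; assumptions and notation are ours, not the paper's):

$$ u(t, x) = \inf_{y} \Big\{\, U_0(y) + t\, L\Big(\tfrac{x - y}{t}\Big) \,\Big\}, \qquad t > 0, $$

and this value function solves the Hamilton-Jacobi equation $u_t + H(\nabla_x u) = 0$, $u(0,\cdot) = U_0$, where $H = L^{*}$ is the convex conjugate of $L$.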

