nonsmooth functions
Recently Published Documents

TOTAL DOCUMENTS: 84 (FIVE YEARS: 9)
H-INDEX: 22 (FIVE YEARS: 2)

Author(s): Mahesh Chandra Mukkamala, Jalal Fadili, Peter Ochs

Abstract Lipschitz continuity of the gradient mapping of a continuously differentiable function plays a crucial role in the design of many optimization algorithms. However, many functions arising in practice, such as low-rank matrix factorization or deep neural network problems, do not have a Lipschitz continuous gradient. This led to the development of a generalized notion, the L-smad property, which is based on generalized proximity measures called Bregman distances. The L-smad property, however, cannot handle nonsmooth functions: even a simple nonsmooth function like $|x^4 - 1|$ is out of scope, as are many practical composite problems. We fix this issue by proposing the MAP property, which generalizes the L-smad property and is valid for a large class of structured nonconvex nonsmooth composite problems. Based on the MAP property, we propose a globally convergent algorithm called Model BPG that unifies several existing algorithms. The convergence analysis relies on a new Lyapunov function. We also numerically illustrate the superior performance of Model BPG on standard phase retrieval problems and Poisson linear inverse problems, compared to a state-of-the-art optimization method for generic nonconvex nonsmooth problems.
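The abstract does not spell out the Model BPG update, but the Bregman machinery it builds on is standard: for a kernel $h$, the Bregman distance is $D_h(x,y) = h(x) - h(y) - \langle \nabla h(y), x - y \rangle$, and a Bregman gradient step solves $\nabla h(x^{k+1}) = \nabla h(x^k) - \tau \nabla f(x^k)$. Below is a minimal Python sketch of one such step, assuming the negative-entropy kernel on the probability simplex (chosen only because its step has a closed form); this illustrates a generic Bregman step, not the authors' Model BPG.

```python
import numpy as np

def bregman_gradient_step(x, grad, step):
    """One Bregman (mirror) gradient step with the negative-entropy
    kernel h(x) = sum_i x_i log x_i, restricted to the simplex.
    Solves grad h(x_new) = grad h(x) - step * grad, which here gives
    x_new_i proportional to x_i * exp(-step * grad_i)."""
    z = x * np.exp(-step * grad)
    return z / z.sum()

# Toy usage: minimize the linear function f(x) = <c, x> over the simplex;
# the iterates concentrate on the coordinate with the smallest c_i.
c = np.array([3.0, 1.0, 2.0])
x = np.ones(3) / 3
for _ in range(200):
    x = bregman_gradient_step(x, c, step=0.5)
print(x.round(3))
```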


Author(s): R.A. Khachatryan

In recent years there has been steadily growing interest in extremal problems whose data do not satisfy the standard smoothness assumptions, driven both by theoretical needs and by important practical applications in economics, technology, physics, and other sciences. Nonsmooth objects arise naturally in several areas of systems analysis, nonlinear mechanics, and control processes. In the theory of extremal problems, the main interest is the behavior of functions in the vicinity of points where a local extremum is attained. The local behavior of nonsmooth functions is described by subgradients, which are analogs of the derivative of differentiable functions. Using the concepts of subdifferential and subgradient, F. Clarke proved the Lagrange multiplier rule for mathematical programming problems with equality and inequality constraints defined by locally Lipschitz functions. However, there are subclasses of locally Lipschitz functions, illustrated by simple examples, for which Clarke's necessary conditions for an extremum are rather crude and fail to discard obviously non-optimal points. One such subclass is the class of quasi-differentiable functions. In this article, using the Ekeland variational principle, we obtain the Lagrange multiplier rule in terms of quasi-differentials, and we show by examples that this condition is stronger than Clarke's necessary condition.
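For context, the quasi-differential of Demyanov and Rubinov represents the directional derivative through a pair of convex compact sets; the sketch below records this standard textbook definition and a classical example of the kind the article alludes to (background material, not a result of the article itself).

```latex
% Demyanov--Rubinov quasidifferential: standard background, not a
% result of the article. f is quasidifferentiable at x if
\[
  f'(x; d) = \max_{v \in \underline{\partial} f(x)} \langle v, d \rangle
           + \min_{w \in \overline{\partial} f(x)} \langle w, d \rangle,
\]
% for a pair of convex compact sets
% [\underline{\partial} f(x), \overline{\partial} f(x)].
% Example: f(x) = -|x| at x = 0, where f'(0; d) = -|d|, so one may take
% \underline{\partial} f(0) = \{0\} and \overline{\partial} f(0) = [-1, 1].
% Clarke's condition 0 \in \partial_C f(0) = [-1, 1] holds, so it does
% not discard x = 0 even though 0 is a local maximizer of f. The
% quasidifferential necessary condition for a minimum,
%   -\overline{\partial} f(0) \subseteq \underline{\partial} f(0),
% fails here ([-1, 1] \not\subseteq \{0\}), correctly rejecting x = 0.
```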


2020, Vol 50 (1), pp. 233-244
Author(s): Navid Vafamand, Mohammad Hassan Asemani, Alireza Khayatiyan, Mohammad Hassan Khooban, Tomislav Dragicevic

Author(s): Gonglin Yuan, Tingting Li, Wujie Hu

Abstract To solve large-scale unconstrained optimization problems, a modified PRP conjugate gradient algorithm is proposed; it combines the steepest descent method with the conjugate gradient method and exploits the strengths of both. For smooth functions, the algorithm uses information from the gradient and the previous direction to determine the next search direction. For nonsmooth functions, a Moreau–Yosida regularization is introduced, which simplifies the treatment of complex problems. The proposed algorithm has the following characteristics: (i) a sufficient descent property as well as a trust-region trait; (ii) global convergence; (iii) numerical results for large-scale smooth/nonsmooth functions show that the proposed algorithm outperforms similar optimization methods; (iv) experiments on image restoration problems confirm that the algorithm is effective.
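The specific PRP modification is not given in the abstract, but the classical ingredients are standard: the PRP parameter $\beta_k = g_k^{\top}(g_k - g_{k-1})/\Vert g_{k-1}\Vert^2$ with direction $d_k = -g_k + \beta_k d_{k-1}$, and, for nonsmooth $f$, the Moreau–Yosida regularization $F_{\lambda}(x) = \min_z \{ f(z) + \frac{1}{2\lambda}\Vert z - x \Vert^2 \}$, which is continuously differentiable. A minimal Python sketch of the classical (unmodified) PRP direction on a smooth quadratic:

```python
import numpy as np

def prp_direction(g, g_prev, d_prev):
    """Classical Polak-Ribiere-Polyak search direction (the textbook
    version, not the authors' specific modification):
        beta_k = g_k^T (g_k - g_{k-1}) / ||g_{k-1}||^2
        d_k    = -g_k + beta_k * d_{k-1}
    """
    beta = g @ (g - g_prev) / (g_prev @ g_prev)
    return -g + beta * d_prev

# Toy usage on the smooth quadratic f(x) = 0.5 x^T A x - b^T x.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
x = np.zeros(2)
g = A @ x - b   # gradient of f
d = -g          # first direction is steepest descent
while np.linalg.norm(g) > 1e-10:
    alpha = -(g @ d) / (d @ A @ d)  # exact line search for a quadratic
    x = x + alpha * d
    g, g_prev = A @ x - b, g
    d = prp_direction(g, g_prev, d)
print(x)  # matches np.linalg.solve(A, b)
```

With exact line search on a quadratic, PRP reduces to linear conjugate gradients and terminates in at most $n$ iterations; general implementations replace the exact step with a Wolfe-type line search.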


2019, Vol 40 (2), pp. 1154-1187
Author(s): Frank E Curtis, Daniel P Robinson, Baoyu Zhou

Abstract An algorithm framework is proposed for minimizing nonsmooth functions. The framework is variable metric in that, in each iteration, a step is computed using a symmetric positive-definite matrix whose value is updated as in a quasi-Newton scheme. However, unlike previously proposed variable-metric algorithms for minimizing nonsmooth functions, the framework exploits self-correcting properties made possible through Broyden–Fletcher–Goldfarb–Shanno-type updating. In so doing, the framework does not overly restrict the manner in which the step computation matrices are updated, yet the scheme is controlled well enough that global convergence guarantees can be established. The results of numerical experiments for a few algorithms are presented to demonstrate the self-correcting behaviours that are guaranteed by the framework.
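The self-correcting properties the authors exploit stem from BFGS-type updating. As a hedged sketch, the standard inverse-Hessian BFGS formula is shown below; the paper's framework additionally controls when and how such updates enter the step computation, which this sketch does not attempt to reproduce.

```python
import numpy as np

def bfgs_update(H, s, y):
    """Standard BFGS update of an inverse-Hessian approximation H
    (the textbook formula underlying self-correcting behaviour):
        H+ = (I - rho s y^T) H (I - rho y s^T) + rho s s^T,
        rho = 1 / (y^T s).
    H stays symmetric positive definite whenever y^T s > 0."""
    rho = 1.0 / (y @ s)
    V = np.eye(len(s)) - rho * np.outer(s, y)
    return V @ H @ V.T + rho * np.outer(s, s)

# Quick check: curvature pairs with y^T s > 0 preserve positive definiteness.
rng = np.random.default_rng(0)
H = np.eye(3)
for _ in range(10):
    s = rng.standard_normal(3)
    y = s + 0.1 * rng.standard_normal(3)
    if y @ s > 0:            # skip pairs lacking positive curvature
        H = bfgs_update(H, s, y)
print(np.linalg.eigvalsh(H))  # all eigenvalues positive
```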

