Smoothing approximations to nonsmooth optimization problems

Author(s):  
X.Q. Yang

Abstract: We study certain types of composite nonsmooth minimization problems by introducing a general smooth approximation method. Under various conditions we derive bounds on the error between the values of the original objective function at an approximate optimal solution and at the optimal solution. Finally, we obtain second-order necessary optimality conditions for the smooth approximation problems using a recently introduced generalized second-order directional derivative.
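The abstract does not specify the smoothing method, but a classic instance of the idea is the softplus smoothing of the nonsmooth function max(x, 0), which admits the uniform error bound |s_mu(x) - max(x, 0)| <= mu*log(2). The sketch below is illustrative only; the function and parameter names are not from the paper.

```python
import math

def softplus_smooth(x, mu):
    """Smooth approximation of max(x, 0): s_mu(x) = mu * log(1 + exp(x / mu))."""
    z = x / mu
    if z > 30:
        # Shift for numerical stability when x/mu is large: s_mu(x) = x + mu*log(1 + exp(-x/mu)).
        return x + mu * math.log1p(math.exp(-z))
    return mu * math.log1p(math.exp(z))

def plus(x):
    """The original nonsmooth function max(x, 0)."""
    return max(x, 0.0)

# The uniform error bound mu*log(2) is attained at x = 0 and shrinks as mu -> 0.
mu = 0.1
worst = max(abs(softplus_smooth(x, mu) - plus(x))
            for x in [i / 100.0 for i in range(-300, 301)])
print(worst <= mu * math.log(2) + 1e-12)  # True
```

Minimizing the smooth surrogate for a decreasing sequence of mu values then yields approximate minimizers of the original nonsmooth problem with a controllable error, which is the kind of estimate the abstract refers to.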

Author(s):  
Gabriel Ruiz-Garzón ◽  
Jaime Ruiz-Zapatero ◽  
Rafaela Osuna-Gómez ◽  
Antonio Rufián-Lizana

This work presents a study of necessary and sufficient optimality conditions for scalar optimization problems on Hadamard manifolds. In the context of this geometry, we obtain and present new function types characterized by the property that all of their second-order stationary points are global minima. To do so, we extend the concept of convexity in Euclidean space to a more general notion of invexity on Hadamard manifolds. This is done by employing notions of the second-order directional derivative, second-order pseudoinvex functions, and the second-order Karush-Kuhn-Tucker-pseudoinvexity problem. Thus, we prove that every second-order stationary point is a global minimum if and only if the problem is second-order pseudoinvex or second-order KKT-pseudoinvex, depending on whether the problem is an unconstrained or a constrained scalar optimization problem, respectively. This result has not been presented in the literature before. Finally, examples of these new characterizations are provided in the context of "Higgs boson like" potentials, among others.
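The "every second-order stationary point is a global minimum" property can be seen concretely in the Euclidean (flat Hadamard manifold) case with a Higgs-boson-like double-well potential. The sketch below uses an assumed potential V(phi) = -mu2*phi^2 + lam*phi^4 and is only an illustration of the property, not the paper's manifold-level argument: the saddle at phi = 0 fails the second-order condition (negative curvature), so the surviving second-order stationary points are exactly the global minima.

```python
import math

MU2, LAM = 1.0, 0.25  # illustrative parameters for the double well

def V(phi):
    """Higgs-like double-well potential V(phi) = -mu2*phi^2 + lam*phi^4."""
    return -MU2 * phi**2 + LAM * phi**4

def grad_V(phi):
    return -2 * MU2 * phi + 4 * LAM * phi**3

def hess_V(phi):
    return -2 * MU2 + 12 * LAM * phi**2

# Critical points: phi = 0 and phi = +/- sqrt(mu2 / (2*lam)).
crit = [0.0, math.sqrt(MU2 / (2 * LAM)), -math.sqrt(MU2 / (2 * LAM))]

# Second-order stationarity: gradient zero AND Hessian >= 0.
# phi = 0 has hess_V = -2 < 0, so it is excluded.
second_order_stationary = [p for p in crit if hess_V(p) >= 0]

# Every surviving point attains the global minimum value.
global_min_val = min(V(p) for p in crit)
print(all(abs(V(p) - global_min_val) < 1e-12 for p in second_order_stationary))  # True
```

On a general Hadamard manifold the same conclusion requires the second-order pseudoinvexity hypotheses established in the paper; the Euclidean double well is merely the simplest member of that class of examples.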


2005 ◽  
Vol 15 (2) ◽  
pp. 301-306 ◽  
Author(s):  
Nada Djuranovic-Milicic

In this paper we consider an algorithm for LC^1 unconstrained optimization problems that uses the second-order Dini upper directional derivative. The purpose of the paper is to establish general algorithm hypotheses under which convergence to optimal points occurs. A convergence proof is given, as well as an estimate of the rate of convergence.


2018 ◽  
Vol 52 (2) ◽  
pp. 567-575 ◽  
Author(s):  
Do Sang Kim ◽  
Nguyen Van Tuyen

The aim of this note is to present some second-order Karush–Kuhn–Tucker necessary optimality conditions for vector optimization problems, which correct the erroneous result in [10, Thm. 3.2].
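The note concerns vector optimization, but the structure of a second-order KKT necessary condition is easiest to see in the scalar analogue: at a candidate point, the Lagrangian gradient must vanish, and the Lagrangian Hessian must be positive semidefinite on the critical cone. The sketch below checks both conditions for an assumed toy problem (min -x1^2 + x2^2 subject to x1 = 0) chosen purely for illustration.

```python
# Toy problem: min f(x) = -x1^2 + x2^2  subject to  h(x) = x1 = 0.
# Lagrangian: L(x, lam) = -x1^2 + x2^2 + lam * x1.

def grad_L(x, lam):
    """Gradient of the Lagrangian at x with multiplier lam."""
    return (-2 * x[0] + lam, 2 * x[1])

def hess_L(x, lam):
    """Hessian of the Lagrangian (constant for this quadratic example)."""
    return ((-2.0, 0.0), (0.0, 2.0))

x_star, lam = (0.0, 0.0), 0.0

# First-order KKT condition: grad_L(x*, lam) = 0.
first_order_ok = grad_L(x_star, lam) == (0.0, 0.0)

# Second-order necessary condition: d^T hess_L d >= 0 for all critical
# directions d, i.e. those with grad_h . d = d1 = 0, so d = (0, d2).
H = hess_L(x_star, lam)
second_order_ok = all(H[1][1] * d2 * d2 >= 0 for d2 in (-1.0, -0.5, 0.5, 1.0))

print(first_order_ok and second_order_ok)  # True
```

Note that the full Hessian here is indefinite; restriction to the critical cone is what makes the second-order condition hold, and handling that restriction correctly for vector-valued objectives is precisely where such results are delicate.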

