Finding Second-Order Stationary Points in Constrained Minimization: A Feasible Direction Approach

2020, Vol. 186(2), pp. 480-503
Author(s): Nadav Hallak, Marc Teboulle
2018, Vol. 34(6), pp. 1322-1341
Author(s): Alfredo Canelas, Miguel Carrasco, Julio López

Author(s): Gabriel Ruiz-Garzón, Jaime Ruiz-Zapatero, Rafaela Osuna-Gómez, Antonio Rufián-Lizana

This work presents a study of necessary and sufficient optimality conditions for scalar optimization problems on Hadamard manifolds. In this geometric setting, we obtain and present new function classes characterized by the property that all of their second-order stationary points are global minima. To do so, we extend the concept of convexity in Euclidean space to the more general notion of invexity on Hadamard manifolds, employing the notions of second-order directional derivative, second-order pseudoinvex functions, and the second-order Karush-Kuhn-Tucker (KKT)-pseudoinvex problem. We then prove that every second-order stationary point is a global minimum if and only if the problem is second-order pseudoinvex (in the unconstrained case) or second-order KKT-pseudoinvex (in the constrained case). This result has not previously appeared in the literature. Finally, examples of these new characterizations are provided, among others in the context of "Higgs boson like" potentials.
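As a concrete Euclidean illustration of the property characterized above (a sketch, not taken from the paper; the potential, its derivatives, and the tolerance are illustrative choices), consider the double-well "Higgs boson like" potential f(x) = (x² − 1)²: its only second-order stationary points are x = ±1, and both are global minima, while the stationary point x = 0 fails the second-order condition:

```python
def f(x):
    # "Higgs boson like" double-well potential (illustrative choice)
    return (x**2 - 1.0)**2

def grad(x):
    # f'(x) = 4x(x^2 - 1)
    return 4.0 * x * (x**2 - 1.0)

def hess(x):
    # f''(x) = 12x^2 - 4
    return 12.0 * x**2 - 4.0

# The stationary points of f are x = -1, 0, 1.
for x in (-1.0, 0.0, 1.0):
    stationary = abs(grad(x)) < 1e-12             # first-order condition
    second_order = stationary and hess(x) >= 0.0  # second-order condition
    print(f"x = {x:+.0f}: second-order stationary = {second_order}, f(x) = {f(x)}")
```

Only x = ±1 pass the second-order test, and both attain the global minimum value 0; the pseudoinvexity characterizations in the paper generalize exactly this situation to Hadamard manifolds.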


2013, Vol. 29(5), pp. 900-918
Author(s): Mark A. Abramson, Lennart Frimannslund, Trond Steihaug

2008, Vol. 35(1-3), pp. 215-234
Author(s): Vlado Lubarda

An analysis of the Gibbs conditions of stable thermodynamic equilibrium, based on the constrained minimization of the four fundamental thermodynamic potentials, is presented, with particular attention given to previously unexplored connections between the second-order variations of the thermodynamic potentials. These connections are used to establish the convexity properties of all potentials relative to each other, which systematically deliver the thermodynamic relationships between the specific heats, and the isentropic and isothermal bulk moduli and compressibilities. A comparison with the classical derivation is then given.
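For reference, the classical relationships of the kind such a convexity analysis delivers include the following standard identities (notation assumed here: $\alpha$ the volumetric thermal expansivity, $\kappa_T$, $\kappa_S$ the isothermal and isentropic compressibilities, $K_T = 1/\kappa_T$, $K_S = 1/\kappa_S$ the corresponding bulk moduli):

\[
c_p - c_v = \frac{T V \alpha^2}{\kappa_T}, \qquad
\kappa_T - \kappa_S = \frac{T V \alpha^2}{c_p}, \qquad
\frac{K_S}{K_T} = \frac{\kappa_T}{\kappa_S} = \frac{c_p}{c_v}.
\]

Convexity of the potentials guarantees $c_p \ge c_v > 0$ and $\kappa_T \ge \kappa_S > 0$, consistent with the first two relations.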


2020
Author(s): Gerardo Raggi, Ignacio Fernández Galván, Christian L. Ritterhoff, Morgane Vacher, Roland Lindh

Machine learning techniques, specifically gradient-enhanced Kriging (GEK), have been implemented for molecular geometry optimization. GEK-based optimization has many advantages over conventional molecular optimization methods based on a step-restricted second-order truncated expansion. In particular, the surrogate model given by GEK can have multiple stationary points, converges smoothly to the exact model as the number of sample points increases, and contains an explicit expression for the expected error of the model function at an arbitrary point. Machine learning, however, is usually associated with an abundance of data, contrary to the situation desired for efficient geometry optimizations. In this paper we demonstrate how the GEK procedure can be used so that, even with few data points, the surrogate surface robustly guides the optimization to a minimum of the potential energy surface. In this respect the GEK procedure mimics the behavior of a conventional second-order scheme while retaining the flexibility of the superior machine learning approach. Moreover, the expected error is used in the optimization to facilitate restricted-variance optimization (RVO). A procedure that relates the eigenvalues of the approximate guess Hessian to the individual characteristic lengths used in the GEK model reduces the number of empirical parameters to be optimized to two: the value of the trend function and the maximum allowed variance. These parameters are determined using the extended Baker (e-Baker) test suite and part of the Baker transition-state (Baker-TS) test suite as a training set. The resulting optimization procedure is tested on the e-Baker, the full Baker-TS, and the S22 test suites, at the density-functional-theory and second-order Møller-Plesset levels of approximation. The results show that the new method generally performs as well as or better than a state-of-the-art conventional method, even in cases where no significant improvement was expected.
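As a rough illustration of the variance-aware surrogate idea (a sketch under simplifying assumptions, not the authors' implementation: it uses ordinary Kriging without the gradient enhancement, a unit-variance squared-exponential kernel, and an arbitrary characteristic length `ell`), the following shows how a Kriging posterior supplies both a prediction and the expected error that a restricted-variance step could bound:

```python
import numpy as np

def rbf(a, b, ell=1.0):
    # Squared-exponential kernel with characteristic length ell (unit prior variance)
    return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / ell**2)

def kriging_posterior(x_train, y_train, x_query, ell=1.0, noise=1e-10):
    # Ordinary GP/Kriging posterior mean and variance at the query points
    K = rbf(x_train, x_train, ell) + noise * np.eye(len(x_train))
    k_star = rbf(x_train, x_query, ell)
    alpha = np.linalg.solve(K, y_train)
    mean = k_star.T @ alpha
    v = np.linalg.solve(K, k_star)
    var = 1.0 - np.sum(k_star * v, axis=0)  # prior variance is 1 at every point
    return mean, var

# Sample a double-well surface at a few points and query elsewhere
x = np.array([-1.5, -0.5, 0.5, 1.5])
y = (x**2 - 1.0)**2
xq = np.array([0.0, 2.5])
mu, var = kriging_posterior(x, y, xq)
# The predicted variance grows away from the sampled points, so a
# restricted-variance step would keep the optimizer near well-sampled regions.
```

GEK proper additionally conditions the model on observed gradients, which is what makes the surrogate useful with the very few sample points available in a geometry optimization.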



