Comparative assessment of smooth and non-smooth optimization solvers in HANSO software

Author(s):  
Ali Hakan Tor

The aim of this study is to compare the performance of the smooth and nonsmooth optimization solvers from the HANSO (Hybrid Algorithm for Nonsmooth Optimization) software. The smooth optimization solver is an implementation of the Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, and the nonsmooth optimization solver is the Hybrid Algorithm for Nonsmooth Optimization; more precisely, the nonsmooth algorithm is a combination of BFGS and the Gradient Sampling Algorithm (GSA). We use a well-known collection of academic test problems for nonsmooth optimization containing both convex and nonconvex problems. The motivation for this research is the importance of a comparative assessment of smooth optimization methods for solving nonsmooth optimization problems. This assessment demonstrates how successful the BFGS method is at solving nonsmooth optimization problems in comparison with the nonsmooth optimization solver from HANSO. Performance profiles based on the number of iterations, the number of function evaluations, and the number of subgradient evaluations are used to compare the solvers.
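Performance profiles in the sense of Dolan and Moré report, for each solver, the fraction of problems it solves within a factor tau of the best solver's cost. A minimal sketch of that computation, using hypothetical iteration counts (the data and function name are illustrative, not taken from the study):

```python
import numpy as np

def performance_profile(costs, taus):
    """Dolan-More performance profiles. `costs` is an
    (n_problems, n_solvers) array of a cost metric such as the
    number of iterations or function evaluations; np.inf marks a
    failure. Returns, for each threshold tau, the fraction of
    problems each solver solves within tau times the best cost."""
    costs = np.asarray(costs, dtype=float)
    best = costs.min(axis=1, keepdims=True)   # best cost per problem
    ratios = costs / best                     # performance ratios r_{p,s}
    return np.array([(ratios <= tau).mean(axis=0) for tau in taus])

# Hypothetical data: 3 problems, 2 solvers; solver 1 fails on problem 3
costs = [[10, 12], [20, 60], [np.inf, 30]]
profile = performance_profile(costs, taus=[1.0, 3.0, 10.0])
# profile[0] gives the fractions at tau = 1, i.e. how often each
# solver is the best one.
```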

2020, Vol 2020, pp. 1-9
Author(s):  
Tianshan Yang ◽  
Pengyuan Li ◽  
Xiaoliang Wang

The BFGS method is one of the most effective quasi-Newton algorithms for minimization problems. In this paper, an improved BFGS method with a modified weak Wolfe–Powell line search technique is used to solve convex minimization problems, and its convergence analysis is established. Seventy-four academic test problems and the Muskingum model are used in the numerical experiments. The numerical results show that our algorithm is comparable to the usual BFGS algorithm in terms of the number of iterations and the time consumed, which indicates that our algorithm is effective and reliable.
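The BFGS iteration maintains an inverse-Hessian approximation built from gradient differences. Below is a minimal textbook sketch with a plain Armijo backtracking line search; the paper's modified weak Wolfe–Powell line search is more elaborate, so this is a generic illustration rather than the authors' implementation:

```python
import numpy as np

def bfgs(f, grad, x0, tol=1e-8, max_iter=200):
    """Minimal textbook BFGS with Armijo backtracking.

    H approximates the inverse Hessian; the update is skipped when
    the curvature condition s@y > 0 fails, which keeps H positive
    definite.
    """
    x = np.asarray(x0, dtype=float)
    n = x.size
    H = np.eye(n)
    g = grad(x)
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        d = -H @ g                                      # quasi-Newton direction
        t, slope = 1.0, g @ d
        while f(x + t * d) > f(x) + 1e-4 * t * slope:   # Armijo condition
            t *= 0.5
        s = t * d
        x_new = x + s
        g_new = grad(x_new)
        y = g_new - g
        if s @ y > 1e-12:                               # curvature condition
            rho = 1.0 / (s @ y)
            V = np.eye(n) - rho * np.outer(s, y)
            H = V @ H @ V.T + rho * np.outer(s, s)      # BFGS inverse update
        x, g = x_new, g_new
    return x

# Convex quadratic with minimizer (1, 2)
x_star = bfgs(lambda x: (x[0] - 1) ** 2 + 2 * (x[1] - 2) ** 2,
              lambda x: np.array([2 * (x[0] - 1), 4 * (x[1] - 2)]),
              [0.0, 0.0])
```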


Author(s):  
Adel A. Younis ◽  
George H. Cheng ◽  
G. Gary Wang ◽  
Zuomin Dong

Metamodel-based design optimization (MBDO) algorithms have attracted considerable interest in recent years due to their special capability in dealing with complex optimization problems with computationally expensive objective and constraint functions and local optima. Conventional unimodal-based optimization algorithms and stochastic global optimization algorithms either miss the global optimum frequently or require unacceptable computation time. In this work, a generic testbed/platform for evaluating various MBDO algorithms is introduced. The purpose of the platform is to facilitate quantitative comparison of different MBDO algorithms using standard test problems, test procedures, and test outputs, as well as to improve the efficiency of testing and improving new algorithms. The platform consists of a comprehensive test function database that contains about 100 benchmark functions and engineering problems. The testbed accepts any optimization algorithm to be tested and requires only minor modifications to meet the testbed requirements. The testbed is useful for comparing the performance of competing algorithms through execution of the same problems. It allows researchers and practitioners to test and choose the most suitable optimization tool for their specific needs. It also helps to increase confidence in and the reliability of newly developed MBDO tools. Many new MBDO algorithms, including Mode Pursuing Sampling (MPS), Pareto Set Pursuing (PSP), and Space Exploration and Unimodal Region Elimination (SEUMRE), were tested in this work to demonstrate its functionality and benefits.


2014, Vol 530-531, pp. 367-371
Author(s):  
Ting Feng Li ◽  
Yu Ting Zhang ◽  
Sheng Hui Yan

In this paper, a modified limited-memory BFGS method for solving large-scale unconstrained optimization problems is proposed. A remarkable feature of the proposed method is that it possesses a global convergence property even without a convexity assumption on the objective function. Implementations of the algorithm on the CUTE test problems are reported, which suggest that a slight improvement has been achieved.
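A limited-memory BFGS method stores only the last few curvature pairs (s_i, y_i) and reconstructs the search direction with the standard two-loop recursion instead of forming the inverse-Hessian approximation. The sketch below shows the generic textbook recursion, not the paper's modification:

```python
import numpy as np

def lbfgs_direction(g, s_hist, y_hist):
    """Standard L-BFGS two-loop recursion: computes d = -H_k g from
    the stored curvature pairs (s_i, y_i) without ever forming the
    inverse-Hessian approximation H_k."""
    q = np.asarray(g, dtype=float).copy()
    stack = []
    for s, y in zip(reversed(s_hist), reversed(y_hist)):   # first loop
        rho = 1.0 / (y @ s)
        alpha = rho * (s @ q)
        stack.append((alpha, rho, s, y))
        q -= alpha * y
    if s_hist:                                  # initial scaling H0 = gamma * I
        s, y = s_hist[-1], y_hist[-1]
        q *= (s @ y) / (y @ y)
    r = q
    for alpha, rho, s, y in reversed(stack):               # second loop
        beta = rho * (y @ r)
        r += (alpha - beta) * s
    return -r

# Quadratic f(x) = 2*x1^2 + 0.5*x2^2, i.e. Hessian A = diag(4, 1).
# With the exact pair s = e1, y = A @ s, the recursion recovers the
# Newton direction -A^{-1} g for g = (4, 0), namely (-1, 0).
d_lbfgs = lbfgs_direction(np.array([4.0, 0.0]),
                          [np.array([1.0, 0.0])],
                          [np.array([4.0, 0.0])])
```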


Algorithms, 2020, Vol 13 (4), pp. 85
Author(s):  
Liliya A. Demidova ◽  
Artyom V. Gorchakov

Inspired by biological systems, swarm intelligence algorithms are widely used to solve multimodal optimization problems. In this study, we consider the hybridization problem of an algorithm based on the collective behavior of fish schools. The algorithm is computationally inexpensive compared to other population-based algorithms. The accuracy of fish school search increases with the predefined iteration count, but this also increases the computation time required to find a suboptimal solution. We propose two hybrid approaches, intending to improve the accuracy of the evolutionary-inspired algorithm by using classical optimization methods, such as gradient descent and Newton's optimization method. The study shows the effectiveness of the proposed hybrid algorithms, and the strong advantage of the hybrid algorithm based on fish school search and gradient descent. We provide a solution for the linearly inseparable exclusive disjunction problem using the developed algorithm and a perceptron with one hidden layer. To demonstrate the effectiveness of the algorithms, we visualize high-dimensional loss surfaces near global extreme points. In addition, we apply the distributed version of the most effective hybrid algorithm to the hyperparameter optimization problem of a neural network.
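The hybridization idea, a cheap population-based global phase followed by local refinement of the best candidate with gradient descent, can be sketched as follows. Plain uniform random sampling stands in for fish school search here, so this illustrates only the two-phase structure, not the authors' algorithm:

```python
import numpy as np

def hybrid_search(f, grad, lo, hi, pop=30, gens=20,
                  gd_steps=200, lr=0.01, seed=0):
    """Two-phase sketch: a population-based global phase (random
    sampling standing in for fish school search) followed by
    gradient-descent refinement of the best point found."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(lo, dtype=float), np.asarray(hi, dtype=float)
    best = None
    for _ in range(gens):                        # global exploration phase
        swarm = rng.uniform(lo, hi, size=(pop, lo.size))
        cand = min(swarm, key=f)
        if best is None or f(cand) < f(best):
            best = cand
    x = best.copy()
    for _ in range(gd_steps):                    # local refinement phase
        x -= lr * grad(x)
    return x

# Sphere function: global minimum at the origin
x_best = hybrid_search(lambda x: (x ** 2).sum(),
                       lambda x: 2 * x,
                       [-5.0, -5.0], [5.0, 5.0])
```

The refinement phase drives the best sampled point toward the nearest stationary point; on a multimodal surface the quality of the final solution still depends on how well the global phase explores.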


2013, Vol 22 (04), pp. 1350023
Author(s):  
SYEDA DARAKHSHAN JABEEN

In this paper we develop a new hybrid algorithm incorporating the penalty function technique for solving nonlinear constrained optimization problems. The principle is based on converting the constrained optimization problem into an unconstrained one by the penalty function technique. We propose a new penalty technique, called the Big-M penalty, which differs from existing ones. Accordingly, a hybrid algorithm has been developed based on a Split and Discard Strategy (SDS) and an advanced real-coded genetic algorithm (ARCGA), with tournament selection, multiparent whole arithmetical crossover, double mutation (boundary and whole nonuniform mutation), and elitism. In the SDS technique, the entire search space is divided into two equal subregions, and the one containing the feasible solution with the better fitness value is selected. This process is repeated until the accepted subregion reduces to a very small region with negligible edges. Finally, to test the performance of the proposed method along with three different penalty function techniques, it is applied to several well-known benchmark test problems available in the literature.
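The split-and-discard loop can be sketched in one dimension: halve the interval, evaluate each half, keep the better half, and repeat until the kept region is negligible. In the paper the halves are assessed by a real-coded GA on a penalized objective; plain random sampling stands in for the GA in this illustrative sketch:

```python
import random

def split_and_discard(f, lo, hi, samples=50, tol=1e-6, seed=0):
    """1-D sketch of the Split and Discard Strategy: halve the
    search interval, sample each half, keep the half whose best
    sampled value is lower, and repeat until the kept interval is
    negligible. Random sampling stands in for the paper's GA."""
    rng = random.Random(seed)
    while hi - lo > tol:
        mid = (lo + hi) / 2
        best_left = min(f(lo + rng.random() * (mid - lo))
                        for _ in range(samples))
        best_right = min(f(mid + rng.random() * (hi - mid))
                         for _ in range(samples))
        if best_left <= best_right:
            hi = mid               # discard the right subregion
        else:
            lo = mid               # discard the left subregion
    return (lo + hi) / 2

# Unimodal test function with minimizer 0.7 on [0, 2]
x_sds = split_and_discard(lambda x: (x - 0.7) ** 2, 0.0, 2.0)
```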


2012, Vol 18 (1), pp. 54-66
Author(s):  
Remigijus Paulavičius ◽  
Julius Žilinskas

Global optimization methods based on Lipschitz bounds have been analyzed and applied widely to solve various optimization problems. In this paper a bound for Lipschitz functions is proposed, which is computed using function values at the vertices of a simplex and the radius of the circumscribed sphere. The efficiency of a branch-and-bound algorithm with the proposed bound and combinations of bounds is evaluated experimentally by solving a number of multidimensional test problems for global optimization. The influence of the different bounds on the performance of the branch-and-bound algorithm is investigated.
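A simplified bound in the same spirit can be written down directly: the circumscribed ball contains the simplex, so every point of the simplex is within 2R of every vertex, and the Lipschitz property gives min f >= max_i f(v_i) - 2LR. This is an illustrative variant using the same ingredients (vertex values and circumradius), not the exact bound proposed in the paper:

```python
import math
import numpy as np

def circumscribed_sphere(vertices):
    """Center c and radius R of the sphere through all n+1 vertices
    of an n-simplex: expanding ||c - v_i||^2 = ||c - v_0||^2 for
    i = 1..n gives a linear system in c."""
    v = np.asarray(vertices, dtype=float)
    A = 2 * (v[1:] - v[0])
    b = (v[1:] ** 2).sum(axis=1) - (v[0] ** 2).sum()
    c = np.linalg.solve(A, b)
    return c, float(np.linalg.norm(c - v[0]))

def lipschitz_lower_bound(f, vertices, L):
    """Simplified vertex/radius bound (illustrative, not the paper's
    exact formula): the simplex lies inside the circumscribed ball,
    so ||x - v_i|| <= 2R for any simplex point x and vertex v_i,
    hence min f >= max_i f(v_i) - 2*L*R."""
    _, R = circumscribed_sphere(vertices)
    return max(f(v) for v in np.asarray(vertices, dtype=float)) - 2 * L * R

# Unit right triangle; f(x) = x1 + x2 has Lipschitz constant sqrt(2)
lb = lipschitz_lower_bound(lambda x: x[0] + x[1],
                           [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]],
                           math.sqrt(2))
```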


2020, Vol 2020 (1)
Author(s):  
Shashi Kant Mishra ◽  
Geetanjali Panda ◽  
Suvra Kanti Chakraborty ◽  
Mohammad Esmael Samei ◽  
Bhagwat Ram

Variants of the Newton method are very popular for solving unconstrained optimization problems, and the study of global convergence of the BFGS method has also made good progress. The q-gradient reduces to its classical version when q approaches 1. In this paper, we propose a quantum Broyden–Fletcher–Goldfarb–Shanno algorithm in which the Hessian approximation is constructed using the q-gradient and a descent direction is found at each iteration. The algorithm is implemented by applying the independent parameter q in the Armijo–Wolfe conditions to compute the step length, which guarantees that the objective function value decreases. Global convergence is established without a convexity assumption on the objective function. Further, the proposed method is verified on numerical test problems and the results are presented through performance profiles.
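The q-gradient collects the Jackson q-partial derivatives, where each coordinate is dilated by q instead of shifted by a small step. A minimal sketch of the componentwise definition (illustrative only; the paper builds its Hessian approximation on top of this):

```python
import numpy as np

def q_gradient(f, x, q=0.9):
    """Componentwise q-gradient: entry i is the q-partial derivative
    (f(..., q*x_i, ...) - f(x)) / ((q - 1) * x_i), which reduces to
    the classical partial derivative as q -> 1. Zero coordinates
    would need special handling and are not covered by this sketch."""
    x = np.asarray(x, dtype=float)
    fx = f(x)
    g = np.empty_like(x)
    for i in range(x.size):
        xq = x.copy()
        xq[i] *= q                     # dilate the i-th coordinate
        g[i] = (f(xq) - fx) / ((q - 1) * x[i])
    return g

# For f(x) = sum(x_i^2) the exact q-partial derivative is (1 + q) * x_i
gq = q_gradient(lambda x: (x ** 2).sum(), [1.0, 2.0], q=0.9)
```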


2020, Vol ahead-of-print (ahead-of-print)
Author(s):  
Ali Kaveh ◽  
Hossein Akbari ◽  
Seyed Milad Hosseini

Purpose – This paper aims to present a new physically inspired meta-heuristic algorithm, called Plasma Generation Optimization (PGO). To evaluate the performance and capability of the proposed method in comparison to other optimization methods, two sets of test problems consisting of 13 constrained benchmark functions and 6 benchmark trusses are investigated numerically. The results indicate that the performance of the proposed method is competitive with other considered state-of-the-art optimization methods.
Design/methodology/approach – A new physically based metaheuristic algorithm called the plasma generation optimization (PGO) algorithm is developed for solving constrained optimization problems. PGO is a population-based optimizer inspired by the process of plasma generation. In the proposed algorithm, each agent is considered as an electron. The movement of electrons and the changes in their energy levels are based on simulating the excitation, de-excitation and ionization processes occurring during plasma generation. In the proposed PGO, the global optimum is obtained when the plasma is generated with the highest degree of ionization.
Findings – A new physically based metaheuristic algorithm called the PGO algorithm is developed, inspired by the process of plasma generation.
Originality/value – The results indicate that the performance of the proposed method is competitive with other state-of-the-art methods.


2021, Vol 3 (134), pp. 31-39
Author(s):  
Anatolii Kosolap

Currently, test problems are used to assess the effectiveness of new global optimization methods. In this article, we analyze global optimization test problems with respect to how well they test the numerical efficiency of methods for their solution. At present, about 200 test problems for unconstrained optimization and more than 1000 for constrained optimization have been developed, and these test problems can be found on the Internet. However, most of them are not informative for testing the effectiveness of global optimization methods. Constrained test problems, as a rule, have trivial solutions, which allows the parameters of an algorithm to be tuned until these known solutions are obtained. In constrained test problems, the accuracy with which the constraints are satisfied is also important: often, small errors in the constraints lead to a significant change in the value of the objective function. The aim of this work is the construction of a new package of test problems for verifying the numerical efficiency of global optimization methods and a comparison of the exact quadratic regularization method with existing methods. The author suggests restricting attention to unconstrained test problems with unknown solutions. A package of unconstrained test problems is proposed, which includes known test problems with unknown solutions and modifications of some test problems proposed by the author. We also propose to include in this package the polynomial functions of J. Nie with unknown solutions. This package will simplify the verification of the numerical effectiveness of methods: the more effective methods will be those that provide the better solutions. The paper compares existing global optimization methods with the exact quadratic regularization method proposed by the author, which has shown the best results on most of the test problems. This paper presents some of the results of the author's numerical experiments; in particular, the best solutions were obtained for test problems with unknown solutions. The method allows solving multimodal problems of large dimension, and only a local search program is required for its implementation.

