Superlinear Convergence of a Modified Newton's Method for Convex Optimization Problems With Constraints

2021 ◽  
Vol 13 (2) ◽  
pp. 90
Author(s):  
Bouchta RHANIZAR

We consider the constrained optimization problem defined by: $$f(x^*) = \min_{x \in X} f(x) \eqno (1)$$ where the function $f : \pmb{\mathbb{R}}^{n} \longrightarrow \pmb{\mathbb{R}}$ is convex on a closed bounded convex set $X$. To solve problem (1), most methods transform it into an unconstrained problem, either by introducing Lagrange multipliers or by using a projection method. The purpose of this paper is to give a new method for solving some constrained optimization problems, based on the definition of a descent direction and a step size that keep the iterates in the convex domain $X$. A convergence theorem is proven. The paper ends with some numerical examples.
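The abstract does not detail the method, only the idea of a descent direction plus a step that remains in $X$. As a rough illustration of that feasible-descent idea, here is a minimal sketch under assumptions of my own (not from the paper): $X$ is the box $[0,1]^n$ and $f(x) = \sum_i (x_i - 2)^2$, whose unconstrained minimum lies outside $X$, so the constraint is active at the solution.

```python
# Sketch of feasible descent: take a descent direction, then shrink the
# step just enough to stay inside the box X = [0, 1]^n. Illustrative only.

def grad_f(x):
    # f(x) = sum((x_i - 2)^2); gradient is 2*(x_i - 2)
    return [2.0 * (xi - 2.0) for xi in x]

def max_feasible_step(x, d, lo=0.0, hi=1.0):
    """Largest t in [0, 1] such that x + t*d stays inside [lo, hi]^n."""
    t = 1.0
    for xi, di in zip(x, d):
        if di > 0.0:
            t = min(t, (hi - xi) / di)
        elif di < 0.0:
            t = min(t, (lo - xi) / di)
    return t

def feasible_descent(x0, step=0.1, iters=100):
    x = list(x0)
    for _ in range(iters):
        d = [-step * gi for gi in grad_f(x)]   # descent direction
        t = max_feasible_step(x, d)            # step length keeping x in X
        x = [xi + t * di for xi, di in zip(x, d)]
    return x

x_star = feasible_descent([0.0, 0.0])
# the iterates approach the boundary point (1, 1) of the box
```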

2013 ◽  
Vol 479-480 ◽  
pp. 861-864
Author(s):  
Yi Chih Hsieh ◽  
Peng Sheng You

In this paper, an artificial-evolutionary-based two-phase approach is proposed for solving nonlinear constrained optimization problems. In the first phase, an immune-based algorithm solves the nonlinear constrained optimization problem approximately. In the second phase, we present a procedure to improve the solutions obtained in the first phase. Numerical results for two benchmark problems are reported and compared. As shown, the solutions found by the proposed approach are superior to the best solutions reported for typical approaches in the literature.


2016 ◽  
Vol 19 (1) ◽  
pp. 143-167 ◽  
Author(s):  
Andrew T. Barker ◽  
Tyrone Rees ◽  
Martin Stoll

Abstract In this paper we consider PDE-constrained optimization problems which incorporate an $H^1$ regularization control term. We focus on a time-dependent PDE, and consider both distributed and boundary control. The problems we consider include bound constraints on the state, and we use a Moreau-Yosida penalty function to handle this. We propose Krylov solvers and Schur complement preconditioning strategies for the different problems and illustrate their performance with numerical examples.


Author(s):  
Xinghuo Yu ◽  
Weixing Zheng ◽  
Baolin Wu ◽  
Xin Yao ◽  
...  

In this paper, a novel penalty function approach is proposed for constrained optimization problems with linear and nonlinear constraints. It is shown that by using a mapping function to "wrap" the constraints, a constrained optimization problem can be converted into an unconstrained one. It is also proved mathematically that the best solution of the converted unconstrained problem approaches the best solution of the constrained problem as the tuning parameter of the wrapping function approaches zero. A tailored genetic algorithm incorporating an adaptive tuning method is then used to search for the global optimal solutions of the converted unconstrained optimization problems. Four test examples are used to show the effectiveness of the approach.
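The paper's specific wrapping function is not reproduced in the abstract; to illustrate the limiting behaviour it proves, here is a sketch using a standard quadratic penalty as a stand-in, on a one-dimensional problem of my own choosing: minimize $(x-3)^2$ subject to $x \le 1$, whose constrained optimum is $x^* = 1$.

```python
# Penalty idea: fold the constraint into the objective and shrink the
# tuning parameter r toward zero. Stand-in penalty, not the paper's.

def penalized_min(r):
    """Minimizer of F(x) = (x-3)^2 + (1/r)*max(0, x-1)^2, in closed form.
    For x > 1, stationarity 2(x-3) + (2/r)(x-1) = 0 gives
    x = (3r + 1) / (r + 1)."""
    return (3.0 * r + 1.0) / (r + 1.0)

for r in (1.0, 0.1, 0.01, 0.001):
    x_r = penalized_min(r)
    # x_r decreases toward the constrained optimum x* = 1 as r -> 0
```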


2014 ◽  
Vol 2014 ◽  
pp. 1-6
Author(s):  
Zhijun Luo ◽  
Lirong Wang

A new parallel variable distribution algorithm, based on an interior-point sequential system of linear equations (SSLE) algorithm, is proposed for solving inequality constrained optimization problems in which the constraints are block-separable. Each iteration of the algorithm only needs to solve three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, global convergence is achieved.
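The per-iteration cost claim rests on the three linear systems sharing one coefficient matrix: the matrix can be factored once and each right-hand side solved cheaply against that factor. A minimal LU sketch of this reuse (Doolittle form, no pivoting, illustrative only and not the paper's solver):

```python
# Factor the shared coefficient matrix once, then solve all three
# right-hand sides by forward/backward substitution.

def lu_factor(A):
    """Doolittle LU factorization stored in one matrix (L has unit diagonal)."""
    n = len(A)
    M = [row[:] for row in A]
    for k in range(n):
        for i in range(k + 1, n):
            M[i][k] /= M[k][k]
            for j in range(k + 1, n):
                M[i][j] -= M[i][k] * M[k][j]
    return M

def lu_solve(M, b):
    """Solve A x = b given the combined LU factor M of A."""
    n = len(b)
    y = [0.0] * n
    for i in range(n):                      # forward pass: L y = b
        y[i] = b[i] - sum(M[i][j] * y[j] for j in range(i))
    x = [0.0] * n
    for i in range(n - 1, -1, -1):          # backward pass: U x = y
        x[i] = (y[i] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

A = [[4.0, 1.0], [1.0, 3.0]]
M = lu_factor(A)                            # factor once ...
sols = [lu_solve(M, b) for b in ([1.0, 2.0], [0.0, 1.0], [5.0, 6.0])]
# ... then solve all three systems against the same factor
```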


2014 ◽  
Vol 962-965 ◽  
pp. 2903-2908
Author(s):  
Yun Lian Liu ◽  
Wen Li ◽  
Tie Bin Wu ◽  
Yun Cheng ◽  
Tao Yun Zhou ◽  
...  

An improved multi-objective genetic algorithm is proposed to solve constrained optimization problems. The constrained optimization problem is converted into a multi-objective optimization problem. In the evolution process, the algorithm uses a multi-objective technique in which the population is divided into dominated and non-dominated subpopulations. An arithmetic crossover operator is applied to individuals randomly selected from the dominated and non-dominated subpopulations, respectively. This crossover operator gradually leads individuals toward the extreme point and improves local search ability. A diversity mutation operator is introduced for the non-dominated subpopulation. The performance of the proposed algorithm is tested on 3 benchmark functions and 1 engineering optimization problem and compared with other meta-heuristics; the simulation results show that the proposed algorithm has strong global search ability. Keywords: multi-objective optimization; genetic algorithm; constrained optimization problem; engineering application
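The conversion step above, and the split into dominated and non-dominated subpopulations, can be sketched in a few lines. The problem below is my own illustrative choice (minimize $x^2$ subject to $x \ge 1$), not one of the paper's benchmarks: the two objectives become the original objective and the total constraint violation.

```python
# Recast a constrained problem as two objectives: (f, violation).
# Illustrative problem: minimize f(x) = x^2 subject to x >= 1.

def objectives(x):
    f = x * x
    violation = max(0.0, 1.0 - x)   # amount by which x >= 1 is violated
    return (f, violation)

def dominates(a, b):
    """a dominates b: no worse in both objectives and strictly better in one."""
    return all(ai <= bi for ai, bi in zip(a, b)) and a != b

def split_population(pop):
    """Divide pop into non-dominated and dominated subpopulations."""
    objs = {x: objectives(x) for x in pop}
    nondom = [x for x in pop
              if not any(dominates(objs[y], objs[x]) for y in pop)]
    dom = [x for x in pop if x not in nondom]
    return nondom, dom

nondom, dom = split_population([0.0, 0.5, 1.0, 2.0])
# 2.0 is dominated by 1.0 (worse f, equal violation); the rest trade off
# objective value against constraint violation and are non-dominated
```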


2020 ◽  
Vol 12 (5) ◽  
pp. 27
Author(s):  
Bouchta RHANIZAR

We consider the constrained optimization problem defined by: $$f(x^*) = \min_{x \in X} f(x) \eqno (1)$$ where the function $f : \pmb{\mathbb{R}}^{n} \longrightarrow \pmb{\mathbb{R}}$ is convex on a closed convex set $X$. In this work, we give a new method to solve problem (1) without reducing it to an unconstrained problem. We study the convergence of this new method and give numerical examples.


Author(s):  
Rudy Chocat ◽  
Loïc Brevault ◽  
Mathieu Balesdent ◽  
Sébastien Defoort

The design of complex systems often induces a constrained optimization problem under uncertainty. An adaptation of the CMA-ES(λ, μ) optimization algorithm is proposed to efficiently handle the constraints in the presence of noise. The update mechanisms of the parametrized distribution used to generate candidate solutions are modified: the constraint-handling method reduces the semi-principal axes of the search ellipsoid in the directions that violate the constraints. The proposed approach is compared with existing approaches on three analytic optimization problems to highlight its efficiency and robustness, and is then used to design a two-stage solid-propulsion launch vehicle.


2014 ◽  
Vol 536-537 ◽  
pp. 476-480 ◽  
Author(s):  
Wen Long

Most existing constrained optimization evolutionary algorithms (COEAs) for solving constrained optimization problems (COPs) combine a single EA with a single constraint-handling technique (CHT), which can limit their search ability. Motivated by this observation, we propose an ensemble method that combines different styles of EAs and CHTs drawn from an EA knowledge base and a CHT knowledge base, respectively. The proposed method uses two EAs and two CHTs and randomly combines them to generate offspring during each generation. Simulations and comparisons on four benchmark COPs and an engineering optimization problem demonstrate the effectiveness of the proposed approach.
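The random per-generation pairing of an EA with a CHT can be sketched as follows. The concrete operators here are illustrative stand-ins of my own (the abstract does not name the two EAs or two CHTs): a mutation-style step and a blend-style step for the EAs, and a static penalty and a feasibility-first rule for the CHTs.

```python
import random

# Ensemble sketch: each generation, draw one EA operator and one
# constraint-handling technique (CHT) at random, then select survivors
# by the chosen CHT's fitness ranking. Operators are stand-ins.

def mutation_step(x):              # EA 1: mutation-style jitter
    return x + random.uniform(-0.5, 0.5)

def blend_step(x):                 # EA 2: blend toward a random point
    return 0.5 * (x + random.uniform(-2.0, 2.0))

def penalty_cht(f, violation):     # CHT 1: static penalty
    return f + 1000.0 * violation

def feasibility_cht(f, violation): # CHT 2: feasibility-first rule
    return f if violation == 0.0 else 1e9 + violation

EAS = [mutation_step, blend_step]
CHTS = [penalty_cht, feasibility_cht]

def generation(pop, f, violation):
    ea = random.choice(EAS)        # random EA/CHT pairing this generation
    cht = random.choice(CHTS)
    offspring = [ea(x) for x in pop]
    merged = pop + offspring
    merged.sort(key=lambda x: cht(f(x), violation(x)))
    return merged[:len(pop)]       # keep the best under the chosen CHT
```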


2018 ◽  
Vol 1 (1) ◽  
pp. 037-043
Author(s):  
Theresia Mehwani Manik ◽  
Parapat Gultom ◽  
Esther Nababan

Optimization is an activity to obtain the best result in a given situation; its ultimate goal is to minimize effort or maximize a desired benefit. The Lagrange multiplier method is a method used to handle constrained optimization problems. This study analyzes the characteristics of the Lagrange multiplier method with the aim of solving constrained optimization problems. The method is applied to a sample constrained optimization problem, minimizing a quadratic objective function, for which the minimum value obtained is -0.0403. Many optimization problems cannot be solved because of the constraints that restrict the objective function. One characteristic of the Lagrange multiplier method is that it transforms a constrained optimization problem into an unconstrained one, so that the optimization problem can be solved.
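The study's example and its -0.0403 minimum are not reproduced in the abstract; as a stand-alone illustration of the method, here is a different worked example of my own: minimize $f(x, y) = x^2 + y^2$ subject to $g(x, y) = x + y - 1 = 0$. Stationarity of the Lagrangian $L = f - \lambda g$ gives a linear KKT system, solved below by plain Gaussian elimination.

```python
# Lagrange multiplier method on: minimize x^2 + y^2  s.t.  x + y = 1.
# Stationarity of L(x, y, l) = x^2 + y^2 - l*(x + y - 1) gives:
#   2x - l = 0,   2y - l = 0,   x + y = 1

def gauss_solve(A, b):
    """Solve A x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            k = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= k * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c] for c in range(i + 1, n))) / M[i][i]
    return x

A = [[2.0, 0.0, -1.0],      # 2x     - l = 0
     [0.0, 2.0, -1.0],      #     2y - l = 0
     [1.0, 1.0,  0.0]]      # x + y      = 1
b = [0.0, 0.0, 1.0]
x, y, lam = gauss_solve(A, b)
# solution: x = y = 0.5, lambda = 1.0, so the constrained minimum is 0.5
```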


2019 ◽  
Vol 7 (6) ◽  
pp. 532-549
Author(s):  
Chun-an Liu ◽  
Huamin Jia

Abstract Nonlinear constrained optimization problems (NCOPs) arise in a diverse range of fields such as portfolio management, economic management, aerospace engineering, and intelligent systems. In this paper, a new multiobjective imperialist competitive algorithm for solving NCOPs is proposed. First, we review some existing algorithms for solving NCOPs and transform the nonlinear constrained optimization problem into a biobjective optimization problem. Second, to improve the diversity of the evolving country swarm and help it approach or land in the feasible region of the search space, three different methods for moving colonies toward their relevant imperialist are given. Third, a new operator for exchanging the positions of an imperialist and a colony, similar to a recombination operator in a genetic algorithm, is introduced to enrich the exploration and exploitation abilities of the proposed algorithm. Fourth, a local search method is presented to accelerate convergence. Finally, the new approach is tested on thirteen well-known NP-hard nonlinear constrained optimization functions, and the experimental evidence suggests that the proposed method is robust, efficient, and generic for solving nonlinear constrained optimization problems. Compared with some other state-of-the-art algorithms, the proposed algorithm has remarkable advantages in terms of the best, mean, and worst objective function values and the standard deviations.

