Non-convex Optimization via Strongly Convex Majorization-minimization

2019 ◽  
Vol 63 (4) ◽  
pp. 726-737
Author(s):  
Azita Mayeli

In this paper, we introduce a class of nonsmooth, nonconvex optimization problems, and we propose a local iterative majorization-minimization (MM) algorithm to find an optimal solution. The cost functions in our optimization problems extend the convex functions with MC separable penalty previously introduced by Ivan Selesnick. These functions are not convex; therefore, convex optimization methods cannot be applied to prove the existence of an optimal minimum point for these functions. For our purpose, we use convex analysis tools to first construct a class of convex majorizers that approximate the value of the nonconvex cost function locally, and then use the MM algorithm to prove the existence of a local minimum. Convergence of the algorithm is guaranteed when the iterates $x^{(k)}$ are obtained in a ball of small radius centred at $x^{(k-1)}$. We prove that the algorithm converges to a stationary point (local minimum) of the cost function when the surrogates are strongly convex.
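
A minimal sketch of the MM idea described above, applied to a toy denoising problem with an MC-type separable penalty: the concave penalty is majorized by its tangent line at the current iterate, so each surrogate is convex and is minimized exactly by soft-thresholding. The specific cost, majorizer, and parameters are illustrative assumptions, not the paper's construction.

```python
# Toy MM iteration for min F(x) = 0.5*||x - y||^2 + lam * sum mc(|x_i|),
# where mc is the minimax-concave (MC) penalty. Tangent-line majorization
# of the concave penalty makes each surrogate convex.
import numpy as np

def mc_deriv(t, b):
    """Derivative of mc(t) = t - t^2/(2b) on [0, b] (and b/2 for t >= b)."""
    return np.maximum(1.0 - t / b, 0.0)

def soft(z, tau):
    """Soft-thresholding, the exact minimizer of 0.5*(x-z)^2 + tau*|x|."""
    return np.sign(z) * np.maximum(np.abs(z) - tau, 0.0)

def mm_mc(y, lam=1.0, b=2.0, tol=1e-10, max_iter=200):
    x = y.copy()
    for _ in range(max_iter):
        w = mc_deriv(np.abs(x), b)   # slope of the linear majorizer of mc
        x_new = soft(y, lam * w)     # minimize the convex surrogate exactly
        if np.max(np.abs(x_new - x)) < tol:
            break
        x = x_new
    return x

print(mm_mc(np.array([3.0, 0.4, -1.2])))  # large entries kept, small shrunk
```

Because the surrogate touches the cost at the current iterate and majorizes it everywhere, each step decreases the original nonconvex cost, which is the descent property the paper's convergence analysis builds on.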

2015 ◽  
Vol 2015 ◽  
pp. 1-7
Author(s):  
Sakineh Tahmasebzadeh ◽  
Hamidreza Navidi ◽  
Alaeddin Malek

This paper proposes three numerical algorithms based on Karmarkar's interior-point technique for solving nonlinear convex programming problems subject to linear constraints. The first algorithm combines Karmarkar's idea with a linearization of the objective function. The second and third algorithms are modifications of the first, using the Schrijver and Malek-Naseri approaches, respectively. These three novel schemes are tested against the algorithm of Kebiche-Keraghel-Yassine (KKY). It is shown that the three novel algorithms are more efficient and converge to the correct optimal solution, while the KKY algorithm fails in some cases. Numerical results illustrate the performance of the proposed algorithms.
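
As a rough illustration of the "linearize the objective, then take an interior-point step" pattern, the sketch below applies an affine-scaling step (a simplified relative of Karmarkar's projective method, not the paper's Schrijver or Malek-Naseri variants) to the local linearization of a convex objective over {Ax = b, x > 0}; the step fraction and fixed iteration count are simplifying assumptions.

```python
# One affine-scaling interior-point step for the linearized problem
# min grad.x  s.t.  Ax = b, x > 0, started from a strictly interior x.
import numpy as np

def affine_scaling_step(grad, A, x, frac=0.95):
    d2 = x ** 2                                            # D^2 = diag(x)^2
    w = np.linalg.solve((A * d2) @ A.T, (A * d2) @ grad)   # dual estimate
    dx = -d2 * (grad - A.T @ w)                            # A @ dx == 0
    neg = dx < 0
    alpha = frac * np.min(-x[neg] / dx[neg]) if neg.any() else 1.0
    return x + min(alpha, 1.0) * dx                        # stay interior

def linearized_ip(f_grad, A, x0, iters=50):
    """Re-linearize the nonlinear objective at every interior iterate."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = affine_scaling_step(f_grad(x), A, x)
    return x
```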


Geophysics ◽  
2008 ◽  
Vol 73 (5) ◽  
pp. R71-R82 ◽  
Author(s):  
Somanath Misra ◽  
Mauricio D. Sacchi

Linearized-inversion methods often suffer from dependence on the initial model: when the initial model is far from the global minimum, optimization is likely to converge to a local minimum. Optimization problems involving nonlinear relationships between data and model are likely to have more than one local minimum. Such problems are solved effectively by global-optimization methods, which are exhaustive search techniques and hence computationally expensive; as model dimensionality increases, the search space grows and convergence becomes very slow. We propose a new approach to the global-optimization scheme that incorporates a priori knowledge into the algorithm by preconditioning the model space with edge-preserving smoothing operators. Such nonlinear operators acting on the model space favorably precondition, or bias, the model space toward blocky solutions. This approach not only speeds convergence but also retrieves blocky solutions. We apply the algorithm to estimate layer parameters from amplitude-variation-with-offset data. The results indicate that global optimization with model-space-preconditioning operators converges faster and yields a more accurate blocky-model solution consistent with a priori information.
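
The preconditioning idea can be sketched generically: inside a global search (plain simulated annealing here, standing in for whichever global method is used), every candidate model is passed through an edge-preserving smoother before its misfit is evaluated, biasing the search toward blocky models. The median filter, Gaussian perturbation, and linear cooling schedule are illustrative choices, not the authors' operators.

```python
# Simulated annealing with edge-preserving-smoothing model preconditioning.
import numpy as np

def eps_smooth(m, width=3):
    """Toy edge-preserving smoother: a 1-D running median filter."""
    pad = width // 2
    mp = np.pad(m, pad, mode="edge")
    return np.array([np.median(mp[i:i + width]) for i in range(m.size)])

def preconditioned_sa(misfit, m0, n_iter=2000, step=0.1, T0=1.0, seed=0):
    rng = np.random.default_rng(seed)
    m = eps_smooth(np.asarray(m0, dtype=float))
    e = misfit(m)
    for k in range(n_iter):
        T = T0 * (1.0 - k / n_iter) + 1e-9          # linear cooling
        cand = eps_smooth(m + step * rng.standard_normal(m.size))
        ec = misfit(cand)
        if ec < e or rng.random() < np.exp((e - ec) / T):
            m, e = cand, ec                         # Metropolis acceptance
    return m
```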


2020 ◽  
Vol 26 (1) ◽  
pp. 5
Author(s):  
Kalyanmoy Deb ◽  
Proteek Chandan Roy ◽  
Rayan Hussein

Most practical optimization problems comprise multiple conflicting objectives and constraints that involve time-consuming simulations. Constructing metamodels of objectives and constraints from a few high-fidelity solutions and then optimizing the metamodels to find in-fill solutions in an iterative manner remains a common metamodeling-based optimization strategy. The authors have previously proposed a taxonomy of 10 metamodeling frameworks for multiobjective optimization problems, each of which constructs metamodels of objectives and constraints independently or in an aggregated manner. Of the 10 frameworks, five follow a generative approach, finding a single Pareto-optimal solution at a time, while the other five find multiple Pareto-optimal solutions simultaneously. Two frameworks (M3-2 and M4-2), which involve multimodal optimization methods, are detailed here for the first time. In this paper, we also propose an adaptive switching-based metamodeling (ASM) approach that switches among all 10 frameworks in successive epochs using a statistical comparison of their metamodeling accuracy. On 18 problems with three to five objectives, the ASM approach performs better than any individual framework alone. Finally, the ASM approach is compared with three other recently proposed multiobjective metamodeling methods, and its superior performance is observed. With growing interest in metamodeling approaches for multiobjective optimization, this paper evaluates existing strategies and proposes a viable adaptive strategy, highlighting the importance of using an ensemble of metamodeling frameworks for more reliable multiobjective optimization under a limited budget of solution evaluations.
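
A high-level skeleton of the switching loop: each epoch, every framework rebuilds its metamodels from the archive of high-fidelity evaluations, the frameworks are compared on metamodel accuracy (plain cross-validated error here, in place of the paper's statistical comparison), and the winner proposes the next in-fill solutions. The fit / cv_error / propose_infill interface is a hypothetical abstraction of the ten frameworks.

```python
# Adaptive switching skeleton; framework internals are abstracted away.
def asm_optimize(frameworks, evaluate, init_archive, epochs=10):
    archive = list(init_archive)                    # (x, responses) pairs
    for _ in range(epochs):
        for fw in frameworks:
            fw.fit(archive)                         # rebuild metamodels
        # pick this epoch's framework; a later epoch may switch to another
        best = min(frameworks, key=lambda fw: fw.cv_error(archive))
        for x in best.propose_infill():             # optimize winner's model
            archive.append((x, evaluate(x)))        # high-fidelity evaluation
    return archive
```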


2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
Nian-Ze Hu ◽  
Han-Lin Li ◽  
Jung-Fa Tsai

Packing optimization problems seek the best way of placing a given set of rectangular boxes within a rectangular box of minimum volume. Current packing optimization methods either have difficulty obtaining an optimal solution or require too many extra 0-1 variables in the solution process. This study develops a novel method to convert the nonlinear objective function in a packing program into an increasing function of a single variable with two fixed parameters. The original packing program then becomes a linear program that promises to attain a global optimum. This linear program is decomposed into several subproblems by specifying various parameter values, and these subproblems can be solved simultaneously by a distributed computation algorithm. A reference solution obtained by a genetic algorithm serves as an upper bound on the optimal solution and is used to reduce the search region.
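
The decomposition-plus-bounding step might look like the following: the parameter-indexed subproblems are solved in parallel, and only solutions improving on the genetic-algorithm reference value are kept, so any subproblem whose optimum exceeds that upper bound is discarded. Here solve_subproblem and the parameter grid are placeholders for the paper's linear programs, and the (value, solution) return convention is an assumption.

```python
# Schematic distributed solve of the parameter-indexed subproblems, pruned
# by a genetic-algorithm reference value. solve_subproblem is assumed to
# return (objective value, solution), or (None, None) if infeasible.
from concurrent.futures import ProcessPoolExecutor

def solve_all(param_grid, solve_subproblem, ga_reference_value):
    best_val, best_sol = ga_reference_value, None   # GA value = upper bound
    with ProcessPoolExecutor() as pool:             # subproblems in parallel
        for val, sol in pool.map(solve_subproblem, param_grid):
            if val is not None and val < best_val:  # discard dominated ones
                best_val, best_sol = val, sol
    return best_val, best_sol
```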

