Learning for Constrained Optimization: Identifying Optimal Active Constraint Sets

Author(s):  
Sidhant Misra ◽  
Line Roald ◽  
Yeesian Ng

In many engineered systems, optimization is used for decision making at time scales ranging from real-time operation to long-term planning. This process often involves solving similar optimization problems over and over again with slightly modified input parameters, often under tight latency requirements. We consider the problem of using the information available through this repeated solution process to learn important characteristics of the optimal solution as a function of the input parameters. Our proposed method is based on learning relevant sets of active constraints, from which the optimal solution can be obtained efficiently. Using active sets as features preserves information about the physics of the system, enables interpretable results, accounts for relevant safety constraints, and is easy to represent and encode. However, the total number of active sets is also very large, as it grows exponentially with system size. The key contribution of this paper is a streaming algorithm that learns the relevant active sets from training samples consisting of the input parameters and the corresponding optimal solution, without any restrictions on the problem type, problem structure, or probability distribution of the input parameters. The algorithm comes with theoretical performance guarantees and is shown to converge fast for problem instances with a small number of relevant active sets. It can thus be used to simultaneously learn the relevant active sets and establish the practicability of the learning method. Through case studies in optimal power flow, supply chain planning, and shortest path routing, we demonstrate that often only a few active sets are relevant in practice, suggesting that active sets provide an appropriate level of abstraction for a learning algorithm to target.
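
A minimal sketch of the active-set idea follows, assuming a generic problem with inequality constraints g_i(x) <= b_i; the constraint representation, tolerance, and window-based stopping rule are illustrative placeholders rather than the authors' streaming algorithm or its probabilistic guarantees.

    from collections import Counter

    def active_set(x_opt, constraints, bounds, tol=1e-6):
        """Indices of inequality constraints g_i(x) <= b_i that are tight at the optimum."""
        return frozenset(i for i, (g, b) in enumerate(zip(constraints, bounds))
                         if abs(g(x_opt) - b) <= tol)

    def learn_active_sets(sample_stream, constraints, bounds, window=500):
        """Collect distinct active sets from a stream of (parameter, optimal solution) pairs.

        Stops once no new active set has appeared for `window` consecutive samples,
        a simple stand-in for a statistically grounded stopping criterion.
        """
        seen = Counter()
        since_new = 0
        for _theta, x_opt in sample_stream:
            s = active_set(x_opt, constraints, bounds)
            since_new = since_new + 1 if s in seen else 0
            seen[s] += 1
            if since_new >= window:
                break
        return seen  # observed active sets and their empirical frequencies

Once the relevant active sets are collected, a candidate solution for a new parameter value can be recovered by solving the equality-constrained problem induced by each stored set and keeping the best feasible result.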

2012 ◽  
Vol 2012 ◽  
pp. 1-13 ◽  
Author(s):  
Nian-Ze Hu ◽  
Han-Lin Li ◽  
Jung-Fa Tsai

Packing optimization problems aim to seek the best way of placing a given set of rectangular boxes within a minimum-volume rectangular box. Current packing optimization methods either find it difficult to obtain an optimal solution or require too many extra 0-1 variables in the solution process. This study develops a novel method to convert the nonlinear objective function in a packing program into an increasing function of a single variable with two fixed parameters. The original packing program then becomes a linear program that promises to reach a global optimum. Such a linear program is decomposed into several subproblems by specifying various parameter values, which can be solved simultaneously by a distributed computation algorithm. A reference solution obtained by a genetic algorithm serves as an upper bound on the optimal solution and is used to reduce the search region.
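
The decomposition-and-bounding pattern described above can be sketched as follows; the subproblem body, parameter grid, and genetic-algorithm bound are toy placeholders, not the paper's actual linear program.

    from multiprocessing import Pool

    def solve_subproblem(fixed_params):
        """Toy stand-in for one linear subproblem: `fixed_params` is a pair of fixed
        parameter values defining the subproblem; a real implementation would call
        an LP solver here and return the resulting container volume and placement."""
        p1, p2 = fixed_params
        return p1 * p2, fixed_params  # placeholder volume for this parameter choice

    def parallel_search(parameter_grid, ga_reference_volume, workers=4):
        """Solve the subproblems in parallel; the genetic-algorithm reference volume
        acts as an upper bound that discards any candidate with a larger volume."""
        best_volume, best_choice = ga_reference_volume, None
        with Pool(workers) as pool:
            for volume, choice in pool.imap_unordered(solve_subproblem, parameter_grid):
                if volume < best_volume:
                    best_volume, best_choice = volume, choice
        return best_volume, best_choice

    if __name__ == "__main__":
        grid = [(p1, p2) for p1 in (1.0, 1.5, 2.0) for p2 in (1.0, 1.5, 2.0)]
        print(parallel_search(grid, ga_reference_volume=3.5))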


2019 ◽  
Vol 29 (07) ◽  
pp. 2050112
Author(s):  
Renuka Kamdar ◽  
Priyanka Paliwal ◽  
Yogendra Kumar

The goal of providing faster and optimal solutions to complex, high-dimensional problems is pushing the technical envelope related to new algorithms. While many approaches use centralized strategies, the concept of multi-agent systems (MAS) creates a new option for distributed analysis of optimization problems. A novel learning algorithm for solving global numerical optimization problems is proposed. The proposed learning algorithm integrates a multi-agent system with the hybrid butterfly–particle swarm optimization (BFPSO) algorithm and is therefore named multi-agent-based BFPSO (MABFPSO). In order to obtain the optimal solution quickly, each agent competes and cooperates with its neighbors and can also learn by using its own knowledge. Making use of these agent–agent interactions and of the sensitivity and probability mechanisms of BFPSO, MABFPSO optimizes the value of the objective function. The designed MABFPSO algorithm is tested on specific benchmark functions. Simulations of the proposed algorithm have been performed for the optimization of functions of 2, 20 and 30 dimensions. The comparative simulation results with conventional PSO approaches demonstrate that the proposed algorithm is a potential candidate for the optimization of both low- and high-dimensional functions. The optimization strategy is general and can be used to solve other power system optimization problems as well.
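
As a rough illustration of the multi-agent layer, the sketch below places agents on a lattice and lets each one learn from its best neighbor inside an otherwise standard PSO velocity update; the butterfly sensitivity and probability mechanism is omitted and all parameter values are assumed, so this is not the authors' MABFPSO.

    import numpy as np

    rng = np.random.default_rng(0)

    def sphere(x):                      # benchmark objective (minimize)
        return float(np.sum(x ** 2))

    def neighbors(i, j, L):             # von Neumann neighborhood on an L x L lattice
        return [((i - 1) % L, j), ((i + 1) % L, j), (i, (j - 1) % L), (i, (j + 1) % L)]

    def multi_agent_pso(f=sphere, dim=20, L=10, iters=200, w=0.7, c1=1.5, c2=1.5):
        pos = rng.uniform(-5, 5, size=(L, L, dim))
        vel = np.zeros_like(pos)
        best_pos = pos.copy()
        best_val = np.apply_along_axis(f, 2, pos)
        for _ in range(iters):
            for i in range(L):
                for j in range(L):
                    # competition/cooperation: learn from the best lattice neighbor
                    nb = min(neighbors(i, j, L), key=lambda n: best_val[n])
                    local_best = best_pos[nb] if best_val[nb] < best_val[i, j] else best_pos[i, j]
                    r1, r2 = rng.random(dim), rng.random(dim)
                    vel[i, j] = (w * vel[i, j]
                                 + c1 * r1 * (best_pos[i, j] - pos[i, j])
                                 + c2 * r2 * (local_best - pos[i, j]))
                    pos[i, j] += vel[i, j]
                    val = f(pos[i, j])
                    if val < best_val[i, j]:
                        best_val[i, j], best_pos[i, j] = val, pos[i, j].copy()
        k = np.unravel_index(np.argmin(best_val), best_val.shape)
        return best_val[k], best_pos[k]

    print(multi_agent_pso()[0])  # best sphere value found (decreases toward 0 as iterations increase)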


2021 ◽  
Author(s):  
Sayed Abdullah Sadat ◽  
Mostafa Sahraei-Ardakani

Successive linear programming (SLP) is a practical approach for solving large-scale nonlinear optimization problems. Alternating current optimal power flow (ACOPF) is no exception, particularly given the large size of real-world networks. However, in order to achieve tractability, it is essential to tune the SLP algorithm presented in the literature. This paper presents a modified SLP algorithm to solve the ACOPF problem, specified by the U.S. Department of Energy's (DOE) Grid Optimization (GO) Competition Challenge 1, within strict time limits. The algorithm first finds a near-optimal solution for the relaxed problem (Stage 1) and then finds a feasible solution in the proximity of the near-optimal solution (Stages 2 and 3). Numerical experiments on test cases ranging from 500-bus to 30,000-bus systems show that the proposed algorithm is tractable and solves more than 80% of the test cases faster than the well-known interior point method, while significantly reducing the number of iterations required to solve ACOPF. The number of iterations is an important factor in assessing tractability, as fewer iterations can drastically reduce the overall computational time.
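
The staged competition algorithm itself is not reproduced here, but the underlying SLP pattern (linearize, solve an LP inside a trust region, accept or shrink) can be sketched generically; the finite-difference gradients, trust-region rules, and the toy problem at the end are assumptions for illustration, not the authors' ACOPF formulation.

    import numpy as np
    from scipy.optimize import linprog

    def num_grad(f, x, eps=1e-6):
        """Forward-difference gradient of a scalar function."""
        g = np.zeros_like(x)
        fx = f(x)
        for i in range(x.size):
            xp = x.copy()
            xp[i] += eps
            g[i] = (f(xp) - fx) / eps
        return g

    def slp(f, cons, x0, radius=1.0, shrink=0.5, iters=50, tol=1e-6):
        """Generic successive linear programming loop with a box trust region.

        f: objective, cons: list of inequality constraints c(x) <= 0.
        Each outer iteration linearizes f and cons at x and solves an LP in the step d.
        """
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            c = num_grad(f, x)
            A = np.array([num_grad(ci, x) for ci in cons])
            b = -np.array([ci(x) for ci in cons])      # A d <= b keeps the linearized constraints feasible
            bounds = [(-radius, radius)] * x.size      # trust region on the step
            res = linprog(c, A_ub=A, b_ub=b, bounds=bounds, method="highs")
            if not res.success:
                radius *= shrink
                continue
            d = res.x
            if f(x + d) < f(x):                        # accept only improving steps
                x = x + d
            else:
                radius *= shrink                       # otherwise shrink the trust region
            if np.linalg.norm(d) < tol:
                break
        return x

    f = lambda x: (x[0] - 2) ** 2 + (x[1] - 1) ** 2
    g = lambda x: x[0] + x[1] - 2.0                    # x0 + x1 <= 2
    print(slp(f, [g], x0=[0.0, 0.0]))                  # approaches roughly (1.5, 0.5)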


2013 ◽  
Vol 2013 ◽  
pp. 1-10
Author(s):  
Hamid Reza Erfanian ◽  
M. H. Noori Skandari ◽  
A. V. Kamyad

We present a new approach, based on generalized derivatives, for solving nonsmooth optimization problems and systems of nonsmooth equations. For this purpose, we introduce the first-order generalized Taylor expansion of nonsmooth functions and use it to replace them with smooth functions. In other words, a nonsmooth function is approximated by a piecewise linear function based on the generalized derivative. In the next step, we solve a smooth linear optimization problem whose optimal solution is an approximate solution of the main problem. We then apply the results to solving systems of nonsmooth equations. Finally, some numerical examples are presented to illustrate the efficiency of our approach.
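
A toy illustration of the idea for a one-dimensional convex example: supporting lines built from an approximate generalized derivative form a piecewise linear model whose minimum is found by a linear program. The test function, grid, and derivative estimate are assumptions and do not reproduce the authors' general construction.

    import numpy as np
    from scipy.optimize import linprog

    def f(x):                         # nonsmooth convex test function
        return abs(x - 1.0) + 2.0 * abs(x + 2.0)

    def gen_derivative(f, x, eps=1e-6):
        """Crude generalized derivative: average of the two one-sided difference quotients."""
        return ((f(x + eps) - f(x)) / eps + (f(x) - f(x - eps)) / eps) / 2.0

    # First-order generalized Taylor pieces f(x_j) + g_j * (x - x_j) at a grid of points
    grid = np.linspace(-4.0, 4.0, 41)
    vals = np.array([f(x) for x in grid])
    slopes = np.array([gen_derivative(f, x) for x in grid])

    # Minimize the piecewise linear model: variables (x, t), minimize t
    # subject to t >= vals_j + slopes_j * (x - grid_j) for every grid point j.
    c = np.array([0.0, 1.0])
    A_ub = np.column_stack([slopes, -np.ones_like(slopes)])
    b_ub = slopes * grid - vals
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(-4.0, 4.0), (None, None)], method="highs")
    print(res.x)   # roughly x = -2.0 with value 3.0, the minimizer of f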


2012 ◽  
Vol 215-216 ◽  
pp. 592-596
Author(s):  
Li Gao ◽  
Rong Rong Wang

In order to deal with complex product design optimization problems involving both discrete and continuous variables, a mixed-variable collaborative design optimization algorithm is put forward based on collaborative optimization, which is an efficient way to solve mixed-variable design optimization problems. Following the principle of “divide and conquer”, the algorithm decouples the problem into several relatively simple subsystems. Then, by using a collaborative mechanism, the optimal solution is obtained. Finally, the results of a case study show the feasibility and effectiveness of the new algorithm.
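
A toy sketch of the collaborative pattern, with a continuous subsystem, a discrete (catalogue) subsystem, and a system level that enforces compatibility through a penalty; the subsystems, targets, and penalty weight are invented for illustration and are not the paper's formulation.

    from scipy.optimize import minimize

    # System-level targets z = (z0, z1); subsystem 1 owns a continuous variable,
    # subsystem 2 a discrete catalogue variable.

    def sub_continuous(z):
        """Continuous subsystem: stay close to target z0 subject to x >= 1."""
        res = minimize(lambda x: (x[0] - z[0]) ** 2, x0=[max(z[0], 1.0)],
                       bounds=[(1.0, None)])
        return res.fun                       # discrepancy J1*(z)

    def sub_discrete(z, choices=(0, 1, 2, 3)):
        """Discrete subsystem: pick the catalogue value closest to target z1."""
        return min((c - z[1]) ** 2 for c in choices)   # discrepancy J2*(z)

    def system_level(rho=10.0):
        """System level: minimize the design objective plus a compatibility penalty."""
        objective = lambda z: (z[0] - 0.2) ** 2 + (z[1] - 2.6) ** 2
        merit = lambda z: objective(z) + rho * (sub_continuous(z) + sub_discrete(z))
        res = minimize(merit, x0=[0.2, 2.6], method="Nelder-Mead")
        return res.x

    print(system_level())   # roughly [0.93, 2.96]: a compromise between design targets and subsystem feasibility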


1995 ◽  
Vol 117 (1) ◽  
pp. 155-157 ◽  
Author(s):  
F. C. Anderson ◽  
J. M. Ziegler ◽  
M. G. Pandy ◽  
R. T. Whalen

We have examined the feasibility of using massively-parallel and vector-processing supercomputers to solve large-scale optimization problems for human movement. Specifically, we compared the computational expense of determining the optimal controls for the single support phase of gait using a conventional serial machine (SGI Iris 4D25), a MIMD parallel machine (Intel iPSC/860), and a parallel-vector-processing machine (Cray Y-MP 8/864). With the human body modeled as a 14 degree-of-freedom linkage actuated by 46 musculotendinous units, computation of the optimal controls for gait could take up to 3 months of CPU time on the Iris. Both the Cray and the Intel are able to reduce this time to practical levels. The optimal solution for gait can be found with about 77 hours of CPU on the Cray and with about 88 hours of CPU on the Intel. Although the overall speeds of the Cray and the Intel were found to be similar, the unique capabilities of each machine are better suited to different portions of the computational algorithm used. The Intel was best suited to computing the derivatives of the performance criterion and the constraints whereas the Cray was best suited to parameter optimization of the controls. These results suggest that the ideal computer architecture for solving very large-scale optimal control problems is a hybrid system in which a vector-processing machine is integrated into the communication network of a MIMD parallel machine.
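
The part of the algorithm that parallelized best, evaluating derivatives of the performance criterion by repeated simulation, can be mimicked on a modern multicore machine roughly as follows; the `performance` function and step size are placeholders, not the original musculoskeletal simulation.

    from multiprocessing import Pool
    import numpy as np

    def performance(controls):
        """Stand-in for an expensive forward simulation of the movement model."""
        return float(np.sum(np.sin(controls) ** 2))

    def _perturbed(args):
        x, i, eps = args
        xp = x.copy()
        xp[i] += eps
        return performance(xp)

    def parallel_gradient(x, eps=1e-6, workers=8):
        """Finite-difference gradient with each perturbed simulation on its own worker,
        mirroring how derivative evaluations can be farmed out across processors."""
        base = performance(x)
        with Pool(workers) as pool:
            perturbed = pool.map(_perturbed, [(x, i, eps) for i in range(x.size)])
        return (np.array(perturbed) - base) / eps

    if __name__ == "__main__":
        print(parallel_gradient(np.linspace(0.0, 1.0, 46)))   # one entry per actuator control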


2021 ◽  
Vol 1 (2) ◽  
pp. 1-23
Author(s):  
Arkadiy Dushatskiy ◽  
Tanja Alderliesten ◽  
Peter A. N. Bosman

Surrogate-assisted evolutionary algorithms have the potential to be of high value for real-world optimization problems when fitness evaluations are expensive, limiting the number of evaluations that can be performed. In this article, we consider the domain of pseudo-Boolean functions in a black-box setting. Moreover, instead of using a surrogate model as an approximation of a fitness function, we propose to precisely learn the coefficients of the Walsh decomposition of a fitness function and use the Walsh decomposition as a surrogate. If the coefficients are learned correctly, then the Walsh decomposition values perfectly match the fitness function, and, thus, the optimal solution to the problem can be found by optimizing the surrogate without any additional evaluations of the original fitness function. It is known that the Walsh coefficients can be efficiently learned for pseudo-Boolean functions with k-bounded epistasis and known problem structure. We propose to learn dependencies between variables first and, therefore, substantially reduce the number of Walsh coefficients to be calculated. After the accurate Walsh decomposition is obtained, the surrogate model is optimized using GOMEA, which is considered to be a state-of-the-art binary optimization algorithm. We compare the proposed approach with standard GOMEA and two other Walsh decomposition-based algorithms. The benchmark functions in the experiments are well-known trap functions, NK-landscapes, MaxCut, and MAX3SAT problems. The experimental results demonstrate that the proposed approach is scalable at the supposed complexity of O(ℓ log ℓ) function evaluations when the number of subfunctions is O(ℓ) and all subfunctions are k-bounded, outperforming all considered algorithms.
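
A minimal sketch of the Walsh-surrogate idea, assuming the interacting variable groups are already known; it fits the coefficients by least squares on random samples and does not implement the paper's dependency-learning step or the GOMEA optimizer.

    import itertools
    import numpy as np

    rng = np.random.default_rng(1)

    def walsh_feature(x, subset):
        """Walsh basis function psi_S(x) = prod_{i in S} (-1)^{x_i} for x in {0, 1}^n."""
        return float(np.prod([1 - 2 * x[i] for i in subset])) if subset else 1.0

    def candidate_subsets(variable_groups):
        """All non-empty subsets of each k-bounded group, plus the constant term."""
        subs = {()}
        for group in variable_groups:
            for r in range(1, len(group) + 1):
                subs.update(itertools.combinations(group, r))
        return sorted(subs, key=lambda s: (len(s), s))

    def fit_walsh_surrogate(f, n, variable_groups, n_samples=200):
        """Estimate the Walsh coefficients by least squares on random evaluations."""
        subsets = candidate_subsets(variable_groups)
        X = rng.integers(0, 2, size=(n_samples, n))
        Phi = np.array([[walsh_feature(x, s) for s in subsets] for x in X])
        y = np.array([f(x) for x in X])
        coeffs, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        surrogate = lambda x: sum(w * walsh_feature(x, s) for w, s in zip(coeffs, subsets))
        return surrogate, dict(zip(subsets, coeffs))

    def deceptive_trap(x, k=3):
        """Sum of order-k deceptive trap subfunctions over consecutive blocks."""
        total = 0
        for i in range(0, len(x), k):
            u = int(np.sum(x[i:i + k]))
            total += k if u == k else k - 1 - u
        return total

    groups = [tuple(range(i, i + 3)) for i in range(0, 9, 3)]
    surrogate, coefficients = fit_walsh_surrogate(deceptive_trap, 9, groups)
    x = rng.integers(0, 2, size=9)
    print(surrogate(x), deceptive_trap(x))   # the two values coincide when the fit is exact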


Author(s):  
Weilin Nie ◽  
Cheng Wang

Abstract Online learning is a classical algorithm for optimization problems. Due to its low computational cost, it has been widely used in many aspects of machine learning and statistical learning. Its convergence performance depends heavily on the step size. In this paper, a two-stage step size is proposed for the unregularized online learning algorithm based on reproducing kernels. Theoretically, we prove that such an algorithm can achieve a nearly minimax convergence rate, up to some logarithmic term, without any capacity condition.
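
A sketch of unregularized online learning with a reproducing kernel and a two-stage step size; the Gaussian kernel, the switch point, and the decay exponent are assumed values for illustration, not the schedule analyzed in the paper.

    import numpy as np

    def gaussian_kernel(x, y, sigma=0.5):
        return np.exp(-np.sum((x - y) ** 2) / (2 * sigma ** 2))

    class OnlineKernelLearner:
        """Unregularized online (kernel) gradient descent for least squares.

        The update is f_{t+1} = f_t - eta_t * (f_t(x_t) - y_t) * K(x_t, .),
        stored through its expansion coefficients on the observed points.
        """
        def __init__(self, kernel=gaussian_kernel, eta0=0.5, switch=100, decay=0.75):
            self.kernel, self.eta0 = kernel, eta0
            self.switch, self.decay = switch, decay   # assumed two-stage schedule
            self.points, self.coeffs = [], []
            self.t = 0

        def predict(self, x):
            return sum(a * self.kernel(p, x) for a, p in zip(self.coeffs, self.points))

        def step_size(self):
            # Stage 1: constant step size; Stage 2: polynomially decaying (illustrative).
            if self.t <= self.switch:
                return self.eta0
            return self.eta0 * (self.t - self.switch) ** (-self.decay)

        def update(self, x, y):
            self.t += 1
            err = self.predict(x) - y
            self.points.append(x)
            self.coeffs.append(-self.step_size() * err)

    # Usage on a toy regression stream
    rng = np.random.default_rng(0)
    learner = OnlineKernelLearner()
    for _ in range(300):
        x = rng.uniform(-1, 1, size=1)
        learner.update(x, np.sin(3 * x[0]) + 0.1 * rng.normal())
    print(learner.predict(np.array([0.3])), np.sin(0.9))   # compare the online fit with the noise-free target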


2022 ◽  
Vol 0 (0) ◽  
Author(s):  
Fouzia Amir ◽  
Ali Farajzadeh ◽  
Jehad Alzabut

Abstract Multiobjective optimization is optimization with several conflicting objective functions. However, it is generally difficult to find an optimal solution that satisfies all objectives from a mathematical point of view. The main objective of this article is to present an improved proximal method involving a quasi-distance for constrained multiobjective optimization problems under the locally Lipschitz condition on the cost function. The motivation for studying the proximal method with quasi-distances is their widespread application in computer theory. To establish the convergence result, the Fritz John necessary optimality condition for weak Pareto solutions is used. Suitable conditions are provided to guarantee that the cluster points of the generated sequences are Pareto–Clarke critical points.
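
A schematic version of a proximal step with a quasi-distance (an asymmetric, direction-dependent movement cost); the weights, the scalarization by the worst objective, and the toy bi-objective problem are illustrative assumptions, not the paper's exact scheme or its Fritz John analysis.

    import numpy as np
    from scipy.optimize import minimize

    def quasi_distance(x, y, forward=2.0, backward=1.0):
        """Asymmetric 'cost of moving' from y to x: q(x, y) != q(y, x) in general."""
        d = x - y
        return np.sum(forward * np.maximum(d, 0.0) + backward * np.maximum(-d, 0.0))

    def proximal_multiobjective(objectives, x0, lam=1.0, iters=30, tol=1e-6):
        """Proximal-point sketch for min (f_1, ..., f_m): each step minimizes the worst
        objective change plus a quasi-distance regularization to the current point."""
        x = np.asarray(x0, dtype=float)
        for _ in range(iters):
            fx = np.array([f(x) for f in objectives])

            def merit(z):
                changes = [f(z) - fz for f, fz in zip(objectives, fx)]
                return max(changes) + lam * quasi_distance(z, x) ** 2

            x_new = minimize(merit, x, method="Nelder-Mead").x
            if np.linalg.norm(x_new - x) < tol:
                break
            x = x_new
        return x

    # Toy bi-objective example
    f1 = lambda z: (z[0] - 1.0) ** 2 + z[1] ** 2
    f2 = lambda z: (z[0] + 1.0) ** 2 + z[1] ** 2
    print(proximal_multiobjective([f1, f2], x0=[3.0, 2.0]))   # settles near (1, 0), a Pareto-optimal point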

