An Optimization Problem with a Composite Objective Function

Author(s):  
Alexander J. Zaslavski
Author(s):  
Pengfei (Taylor) Li ◽  
Peirong (Slade) Wang ◽  
Farzana Chowdhury ◽  
Li Zhang

Traditional formulations of transportation optimization problems mostly build complicating attributes into the constraints while keeping the objective functions succinct. A popular solution is Lagrangian decomposition, which relaxes the complicating constraints and then solves iteratively. Although this approach is effective for many problems, it leads to intractability in others. To address this issue, this paper presents an alternative formulation for transportation optimization problems in which the complicating attributes of the target problems are partially or entirely built into the objective function instead of into the constraints. Many mathematically complicating constraints in transportation problems, such as various road or vehicle capacity constraints or “IF–THEN” type constraints, can be efficiently modeled in dynamic network loading (DNL) models based on the demand–supply equilibrium. After “pre-building” the complicating constraints into the objective function, the objective function can be approximated well with customized high-fidelity DNL models. Three types of computing benefits can be achieved with the alternative formulation: (a) the original problem remains unchanged; (b) the computational complexity of the new formulation may be significantly reduced because the hard constraints disappear; (c) the efficiency loss on the objective-function side can be mitigated via multiple high-performance computing techniques. Under this new framework, high-fidelity and problem-specific DNL models are critical for maintaining the attributes of the original problems. Therefore, the authors’ recent efforts in enhancing the DNL’s fidelity and computing efficiency are also described in the second part of this paper. Finally, a demonstration case study is conducted to validate the new approach.
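
As a rough sketch of the formulation shift (all names here, including run_dnl, are hypothetical stand-ins rather than the paper's models), the complicating constraints disappear from the mathematical program and are instead enforced inside the DNL simulation that evaluates the objective:

    import numpy as np
    from scipy.optimize import minimize

    def run_dnl(x):
        # Hypothetical DNL model: capacity and "IF-THEN" rules are enforced
        # inside the simulation, so every x maps to a valid network loading.
        # A toy analytic surrogate stands in for the real simulator here.
        flows = np.minimum(x, 100.0)   # link capacities applied internally
        return float(np.sum((x - flows) ** 2) + 0.01 * np.sum(flows))

    # No explicit hard constraints remain: minimize the black-box objective.
    res = minimize(run_dnl, x0=np.full(5, 80.0), method="Nelder-Mead")
    print(res.x, res.fun)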


2020 ◽  
Author(s):  
Alberto Bemporad ◽  
Dario Piga

Abstract This paper proposes a method for solving optimization problems in which the decision-maker cannot evaluate the objective function, but rather can only express a preference such as “this is better than that” between two candidate decision vectors. The algorithm described in this paper aims at reaching the global optimizer by iteratively proposing to the decision-maker a new comparison to make, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial-basis-function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision-maker on existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate based on two possible criteria: minimize a combination of the surrogate and an inverse distance weighting function to balance exploitation of the surrogate against exploration of the decision space, or maximize a function related to the probability that the new candidate will be preferred. Compared with active preference learning based on Bayesian optimization, we show that our approach is competitive in that, within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. Applications of the proposed algorithm to solving a set of benchmark global optimization problems, to multi-objective optimization, and to optimal tuning of a cost-sensitive neural network classifier for object recognition from images are described in the paper. MATLAB and Python implementations of the algorithms described in the paper are available at http://cse.lab.imtlucca.it/~bemporad/glis.
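
A minimal sketch of the linear-programming fit, under assumed notation (toy samples, an inverse-quadratic basis, and a fixed margin sigma; the authors' reference implementations live at the URL above): each preference pair contributes one inequality on the surrogate values, softened by a slack variable that the LP minimizes.

    import numpy as np
    from scipy.optimize import linprog

    def rbf(r, eps=1.0):
        return 1.0 / (1.0 + (eps * r) ** 2)     # inverse quadratic basis

    X = np.array([[0.0], [0.5], [1.0], [1.5]])  # sampled decision vectors
    prefs = [(1, 0), (1, 2), (2, 3)]            # (better, worse) index pairs
    sigma = 1e-2                                # preference margin

    n, m = len(X), len(prefs)
    Phi = rbf(np.linalg.norm(X[:, None] - X[None, :], axis=2))

    # Variables z = [w (n RBF weights), eps (m slacks)]; minimize sum of slacks.
    c = np.concatenate([np.zeros(n), np.ones(m)])
    A = np.zeros((m, n + m))
    b = np.full(m, -sigma)
    for k, (i, j) in enumerate(prefs):
        A[k, :n] = Phi[i] - Phi[j]        # fhat(x_i) - fhat(x_j) <= -sigma + eps_k
        A[k, n + k] = -1.0
    res = linprog(c, A_ub=A, b_ub=b,
                  bounds=[(None, None)] * n + [(0, None)] * m)
    w = res.x[:n]
    print("surrogate values:", Phi @ w)   # ordering respects the preferences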


2021 ◽  
Vol 8 (1) ◽  
Author(s):  
Yaoxin Li ◽  
Jing Liu ◽  
Guozheng Lin ◽  
Yueyuan Hou ◽  
Muyun Mou ◽  
...  

Abstract In computer science there exist a large number of optimization problems defined on graphs, that is, problems of finding the best node-state configuration or network structure such that a designed objective function is optimized under some constraints. However, these problems are notorious for their hardness, because most of them are NP-hard or NP-complete. Although traditional general methods such as simulated annealing (SA) and genetic algorithms (GA) have been applied to these hard problems, their accuracy and time consumption are not satisfying in practice. In this work, we propose a simple, fast, and general algorithm framework based on advanced automatic differentiation techniques empowered by deep learning frameworks. By introducing the Gumbel-softmax technique, we can optimize the objective function directly by gradient descent regardless of the discrete nature of the variables. We also introduce an evolution strategy in a parallel version of our algorithm. We test our algorithm on four representative optimization problems on graphs, including modularity optimization from network science, the Sherrington–Kirkpatrick (SK) model from statistical physics, the maximum independent set (MIS) and minimum vertex cover (MVC) problems from combinatorial optimization, and the influence maximization problem from computational social science. High-quality solutions can be obtained with much less computation time compared with the traditional approaches.
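
A minimal sketch of the Gumbel-softmax relaxation on a toy maximum independent set instance (the graph, penalty weight, and hyperparameters are assumed for illustration): binary node states become differentiable samples, so the penalized objective can be pushed through standard gradient descent.

    import torch
    import torch.nn.functional as F

    edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]  # toy graph, 4 nodes
    n, beta = 4, 3.0                                  # beta penalizes edge conflicts
    logits = torch.zeros(n, 2, requires_grad=True)    # per-node state logits
    opt = torch.optim.Adam([logits], lr=0.1)

    for step in range(500):
        # Differentiable sample of the "selected" probability per node.
        x = F.gumbel_softmax(logits, tau=0.5, hard=False)[:, 1]
        # Relaxed objective: maximize selected nodes, penalize selected neighbors.
        loss = -x.sum() + beta * sum(x[i] * x[j] for i, j in edges)
        opt.zero_grad()
        loss.backward()
        opt.step()

    print("independent set indicator:", logits.argmax(dim=1).tolist())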


Author(s):  
T. E. Potter ◽  
K. D. Willmert ◽  
M. Sathyamoorthy

Abstract Mechanism path-generation problems that use link deformations to improve the design lead to optimization problems involving a nonlinear sum-of-squares objective function subject to a set of linear and nonlinear constraints. Inclusion of the deformation analysis makes the objective function evaluation computationally expensive. An optimization method is presented that requires relatively few objective function evaluations. The algorithm, based on the Gauss method for unconstrained problems, is developed as an extension of the Gauss constrained technique for linear constraints and revises the Gauss nonlinearly constrained method for quadratic constraints. The derivation of the algorithm, using a Lagrange multiplier approach, is based on the Kuhn–Tucker conditions, so that when the iteration process terminates these conditions are automatically satisfied. Although the technique was developed for mechanism problems, it is applicable to any optimization problem with a sum-of-squares objective function subject to nonlinear constraints.
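
For orientation, a minimal sketch of the unconstrained Gauss (Gauss-Newton) step that the constrained algorithm extends, on an assumed toy residual system rather than a mechanism problem: for f(x) = sum of r_i(x)^2, only the residual Jacobian is needed, never the full Hessian.

    import numpy as np

    def residuals(x):
        # Toy residuals (assumed example, not the paper's path-generation terms).
        return np.array([x[0] ** 2 + x[1] - 3.0, x[0] + x[1] ** 2 - 5.0])

    def jacobian(x):
        return np.array([[2 * x[0], 1.0], [1.0, 2 * x[1]]])

    x = np.array([1.0, 1.0])
    for _ in range(20):
        r, J = residuals(x), jacobian(x)
        # Gauss step: solve (J^T J) dx = -J^T r, a cheap surrogate for Newton.
        x = x + np.linalg.solve(J.T @ J, -J.T @ r)
    print(x, residuals(x))   # converges to (1, 2) for this system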


2015 ◽  
Vol 6 (3) ◽  
pp. 55-60
Author(s):  
Pritibhushan Sinha

Abstract We consider the median solution of the Newsvendor Problem. Some properties of such a solution are shown through theoretical analysis and a numerical experiment. Sometimes, though not often, the median solution may be better, according to some criteria, than the solution that maximizes expected profit or the solution that maximizes the minimum possible expected profit over all distributions with the same mean and standard deviation. We discuss the practical suitability of the objective function and the derived solution for the Newsvendor Problem and other such stochastic optimization problems.
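
An illustrative comparison under assumed numbers (normal demand truncated at zero, unit cost 6, price 10, no salvage value): the median solution sets F(q) = 1/2, while the classical expected-profit maximizer uses the critical fractile (p - c)/p.

    import numpy as np
    from scipy.stats import norm

    c, p = 6.0, 10.0                 # unit cost and selling price, no salvage
    mu, sd = 100.0, 30.0             # demand ~ Normal(mu, sd), truncated at 0

    q_median = norm.ppf(0.5, mu, sd)         # median solution: F(q) = 1/2
    q_star = norm.ppf((p - c) / p, mu, sd)   # critical fractile: F(q) = (p-c)/p

    def expected_profit(q, n=200_000, seed=0):
        d = np.maximum(norm.rvs(mu, sd, size=n, random_state=seed), 0.0)
        return np.mean(p * np.minimum(q, d) - c * q)

    print("median q:", q_median, "profit:", expected_profit(q_median))
    print("fractile q:", q_star, "profit:", expected_profit(q_star))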


2016 ◽  
Vol 38 (4) ◽  
pp. 307-317
Author(s):  
Pham Hoang Anh

In this paper, the optimal sizing of truss structures is solved using a novel evolutionary optimization algorithm. The efficiency of the proposed method lies in the combination of global search and local search, in which the global move is applied to a set of random solutions whereas the local move is performed on the other solutions in the search population. Three truss-sizing benchmark problems with discrete variables are used to examine the performance of the proposed algorithm. The objective functions of the optimization problems are the minimum weights of the whole truss structures, and the constraints are the stresses in members and the displacements at nodes. Here, the constraints and the objective function are treated separately so that both function and constraint evaluations can be saved. The results show that the new algorithm can find optimal solutions effectively and is competitive with some recent metaheuristic algorithms in terms of the number of structural analyses required.
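
A rough sketch of the search pattern described above, with generic stand-ins for the objective and constraints rather than a structural analysis: half of the population makes global random moves, the rest make local moves, and the constraint check runs first so infeasible candidates skip the expensive objective evaluation.

    import numpy as np

    rng = np.random.default_rng(0)
    sizes = np.arange(1.0, 11.0)             # discrete section sizes
    n_var, pop_n = 5, 20

    def constraint_ok(x):                    # stand-in for stress/displacement checks
        return x.sum() >= 15.0

    def weight(x):                           # stand-in for the structural weight
        return float(np.sum(x ** 1.5))

    pop = rng.choice(sizes, size=(pop_n, n_var))
    for it in range(200):
        for k in range(pop_n):
            if k < pop_n // 2:               # global move: fresh random point
                cand = rng.choice(sizes, size=n_var)
            else:                            # local move: perturb one variable
                cand = pop[k].copy()
                j = rng.integers(n_var)
                cand[j] = np.clip(cand[j] + rng.choice([-1.0, 1.0]), 1.0, 10.0)
            # Short-circuit: weight() runs only if the constraints pass.
            if constraint_ok(cand) and weight(cand) < weight(pop[k]):
                pop[k] = cand

    best = min(pop, key=weight)
    print(best, weight(best))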


Fresa implements a nature-inspired plant propagation algorithm for the solution of single and multiple objective optimization problems. The method is population based and evolutionary. Treating the objective function as a black box, the implementation is able to solve problems exhibiting behaviour that is challenging for mathematical programming methods. Fresa is easily adapted to new problems, which may benefit from bespoke representations of solutions, by taking advantage of the dynamic typing and multiple dispatch capabilities of the Julia language. Further, the support for threads in Julia enables an efficient implementation on multi-core computers.
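
A generic plant propagation sketch in Python (not Fresa's Julia API; the objective, bounds, and runner rules are assumed for illustration): fitter solutions send out many short runners, weaker ones send out a few long runners.

    import numpy as np

    rng = np.random.default_rng(1)

    def f(x):                                # black-box objective (toy example)
        return float(np.sum(x ** 2))

    pop = rng.uniform(-5, 5, size=(10, 3))
    for gen in range(100):
        vals = np.array([f(x) for x in pop])
        fit = 1.0 - (vals - vals.min()) / (np.ptp(vals) + 1e-12)  # 1 = best
        offspring = []
        for x, phi in zip(pop, fit):
            n_runners = max(1, int(np.ceil(5 * phi)))   # fit plants: more runners
            step = (1.0 - phi) * 2.0                    # weak plants explore further
            for _ in range(n_runners):
                offspring.append(np.clip(x + rng.uniform(-step, step, 3), -5, 5))
        all_pts = np.vstack([pop, offspring])
        pop = all_pts[np.argsort([f(x) for x in all_pts])[:10]]  # keep the best

    print(pop[0], f(pop[0]))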


2017 ◽  
Vol 7 (1) ◽  
pp. 137-150
Author(s):  
Aleksandr Agapov

For the first time, a mathematical model of the optimization problem for this log-sawing scheme is presented, comprising the objective function and six constraint equations. The article considers the Pythagorean zone of the log, so the objective function is represented as the sum of the cross-sectional areas of the edged boards. The constraint equations relate the diameter of the log at its top end to the dimensions of the resulting edged boards; this relationship is described using the Pythagorean theorem. Such a representation of the mathematical optimization model is considered classical. However, solving this mathematical model by the classical method proves problematic, so the method of Lagrange multipliers is used instead. A solution algorithm is suggested for determining the optimal dimensions of the beams and side edged boards taking the kerf width into account. Using a numerical method, the optimal dimensions of the beams and boards are determined at which the objective function takes its maximum value. It turns out that as the kerf width increases, the thickness of the beam increases and the dimensions of the side edged boards decrease. The dimensions of the outermost side boards decrease with increasing kerf width to a greater extent than those of the side boards located closer to the center of the log. The algorithm for solving the optimization problem is recommended for calculating and preparing sawing schedules in the design and operation of sawmill lines for lumber production. Using the proposed algorithm, the lumber yield can be increased by 3-5%.
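
The Lagrange multiplier technique can be illustrated on a simplified single-beam case (an assumed setup, not the article's full model with six constraint equations): maximize the beam cross-section b*h inscribed in a log of top-end diameter d, subject to b^2 + h^2 = d^2.

    import sympy as sp

    b, h, lam, d = sp.symbols("b h lam d", positive=True)
    L = b * h + lam * (d**2 - b**2 - h**2)     # Lagrangian of the single-beam case
    sols = sp.solve([sp.diff(L, b), sp.diff(L, h), d**2 - b**2 - h**2],
                    [b, h, lam], dict=True)
    print(sols)   # b = h = d/sqrt(2): a square beam maximizes the area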


Author(s):  
Ashok V. Kumar ◽  
David C. Gossard

Abstract A sequential approximation technique for nonlinear programming is presented here that is particularly suited to problems in engineering design and structural optimization, where the number of variables is very large and function and sensitivity evaluations are computationally expensive. A sequence of sub-problems is iteratively generated using a linear approximation of the objective function and setting move limits on the variables using a barrier method. These sub-problems are strictly convex. Computation per iteration is significantly reduced by not solving the sub-problems exactly; instead, at each iteration a few Newton steps are taken on the sub-problem. A criterion for updating the move limits is described that reduces or eliminates step-size reduction during line search. The method was found to perform well for unconstrained and linearly constrained optimization problems. It requires very few function evaluations, does not require the Hessian of the objective function, and evaluates its gradient only once per iteration.
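
A minimal sketch of the scheme, with an assumed toy objective: linearize the objective at the current point, add a log-barrier that enforces the move limits, and take only a few Newton steps per sub-problem instead of solving it exactly.

    import numpy as np

    def f(x):    return float(np.sum((x - 3.0) ** 2))  # stand-in objective
    def grad(x): return 2.0 * (x - 3.0)                # gradient used once per iteration

    x, delta, mu = np.zeros(4), 1.0, 0.1               # move limit and barrier weight
    for outer in range(30):
        g, x0 = grad(x), x.copy()
        lo, hi = x0 - delta, x0 + delta                # move limits around x0
        for _ in range(3):                             # a few Newton steps only
            # Sub-problem: g.(x - x0) - mu * sum(log(hi - x) + log(x - lo)),
            # strictly convex thanks to the barrier terms.
            gb = g + mu * (1.0 / (hi - x) - 1.0 / (x - lo))
            Hb = mu * (1.0 / (hi - x) ** 2 + 1.0 / (x - lo) ** 2)  # diagonal Hessian
            x = np.clip(x - gb / Hb, lo + 1e-6, hi - 1e-6)
    print(x, f(x))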


2019 ◽  
Vol 31 (4) ◽  
pp. 689-702 ◽  
Author(s):  
Juliane Müller ◽  
Marcus Day

We introduce the algorithm SHEBO (surrogate optimization of problems with hidden constraints and expensive black-box objectives), an efficient optimization algorithm that employs surrogate models to solve computationally expensive black-box simulation optimization problems that have hidden constraints. Hidden constraints are encountered when the objective function evaluation does not return a value for a parameter vector; they often arise in optimization problems in which the objective function is computed by a black-box simulation code. SHEBO uses a combination of local and global search strategies together with an evaluability prediction function and a dynamically adjusted evaluability threshold to iteratively select new sample points. We compare the performance of our algorithm with that of the mesh-based algorithms mesh adaptive direct search (MADS, in its NOMAD [nonlinear optimization by mesh adaptive direct search] implementation) and implicit filtering, and with SNOBFIT (stable noisy optimization by branch and fit), which assigns artificial function values to points that violate the hidden constraints. Our numerical experiments on a large set of test problems with 2–30 dimensions and a 31-dimensional real-world application problem arising in combustion simulation show that SHEBO is an efficient solver that outperforms the other methods for many test problems.
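
A conceptual sketch of handling hidden constraints (simplified random sampling, not SHEBO's surrogate-guided rules): failed evaluations return no value, an evaluability estimate is built from the success/failure history, and candidates predicted unevaluable are filtered out before the expensive simulation runs.

    import numpy as np

    rng = np.random.default_rng(2)

    def simulate(x):
        # Black box with a hidden constraint: no value returned when x[0] < 0.
        return None if x[0] < 0 else float(np.sum(x ** 2))

    X, ok = [], []                             # evaluation history
    def evaluability(x, k=5):
        # Nearest-neighbor estimate of P(evaluation succeeds) from history.
        if len(X) < k:
            return 1.0
        d = np.linalg.norm(np.array(X) - x, axis=1)
        return float(np.mean(np.array(ok)[np.argsort(d)[:k]]))

    threshold, best = 0.5, np.inf
    for it in range(200):
        cand = rng.uniform(-2, 2, size=2)
        if evaluability(cand) < threshold:
            continue                           # skip points predicted unevaluable
        val = simulate(cand)
        X.append(cand)
        ok.append(val is not None)
        if val is not None:
            best = min(best, val)
    print("best evaluable value:", best)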

