Hypergraph Optimization Problems: Why is the Objective Function Linear?

1996 ◽  
Vol 3 (50) ◽  
Author(s):  
Aleksandar Pekec

Choosing an objective function for an optimization problem is a modeling issue, and there is no a priori reason that the objective function must be linear. Still, it seems that linear 0-1 programming formulations are overwhelmingly used as models for optimization problems over discrete structures. We show that this is not an accident. Under some reasonable conditions (from the modeling point of view), the linear objective function is the only possible one.
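
For concreteness, the kind of formulation the paper has in mind can be written as the generic linear 0-1 program sketched below (an illustrative sketch, not taken from the paper):

```latex
% Generic linear 0-1 formulation of an optimization problem over a
% discrete structure S \subseteq \{0,1\}^n (illustrative, not from the paper):
\[
\max_{x \in \{0,1\}^n} \; f(x) = \sum_{i=1}^{n} c_i x_i
\qquad \text{subject to } x \in S,
\]
% where x_i = 1 indicates that element i of the ground set is selected;
% the question addressed is why f should take this linear form.
```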

10.29007/2k64 ◽  
2018 ◽  
Author(s):  
Pat Prodanovic ◽  
Cedric Goeury ◽  
Fabrice Zaoui ◽  
Riadh Ata ◽  
Jacques Fontaine ◽  
...  

This paper presents a practical methodology developed for shape optimization studies of hydraulic structures using environmental numerical modelling codes. The methodology starts by defining the optimization problem and identifying the relevant problem constraints. The design variables in shape optimization studies describe the configuration of structures (such as the length or spacing of groins, or the orientation and layout of breakwaters) whose optimal arrangement is not known a priori. The optimization problem is solved numerically by coupling an optimization algorithm to a numerical model. The coupled system is able to define, test and evaluate a multitude of new shapes, which are internally generated and then simulated with the numerical model. The developed methodology is tested on an example of the optimum design of a fish passage, where the design variables are the length and the position of slots. In this paper an objective function is defined in which a target is specified and the numerical optimizer is asked to retrieve the target solution; such a definition of the objective function is used to validate the developed tool chain. This work uses the numerical model TELEMAC-2D from the TELEMAC-MASCARET suite of numerical solvers for the solution of the shallow water equations, coupled with various numerical optimization algorithms available in the literature.
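
A minimal sketch of such a target-based objective is given below, assuming a hypothetical `run_simulation` stand-in for the call to the numerical model; the actual coupling to TELEMAC-2D is not shown:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical stand-in for a numerical-model run: maps the design variables
# (e.g., slot length and position) to simulated quantities of interest.
def run_simulation(design):
    length, position = design
    # placeholder response; a real study would launch the hydraulic solver here
    return np.array([0.8 * length + 0.1 * position, position - 0.2 * length])

# The "target" is produced by a known design, so the optimizer should recover it.
target = run_simulation(np.array([2.0, 5.0]))

def objective(design):
    # squared distance between the simulated output and the specified target
    return float(np.sum((run_simulation(design) - target) ** 2))

result = minimize(objective, x0=np.array([1.0, 1.0]), method="Nelder-Mead")
print(result.x)  # should approach the target design [2.0, 5.0]
```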


2017 ◽  
Vol 7 (1) ◽  
pp. 137-150
Author(s):  
Aleksandr Agapov

For the first time, a mathematical model of the optimization problem for this log-sawing scheme is developed, comprising an objective function and six constraint equations. The criterion considered is the Pythagorean (cross-sectional) area of the log; accordingly, the objective function is represented as the sum of the cross-sectional areas of the edged boards. The constraint equations relate the log diameter at the top end to the dimensions of the resulting edged boards; this relationship is described using the Pythagorean theorem. Such a representation of the optimization model is considered classical; however, solving it by the classical method proves problematic. The model is therefore solved by the method of Lagrange multipliers. An algorithm is proposed for determining the optimal dimensions of the beams and side edged boards, taking the kerf width into account. Using a numerical method, the optimal dimensions of the beams and boards at which the objective function attains its maximum are determined. It turns out that as the kerf width increases, the thickness of the beam increases and the dimensions of the side edged boards decrease; the dimensions of the outermost side boards decrease to a greater extent than those of the side boards located closer to the center of the log. The proposed algorithm is recommended for calculating and preparing sawing schedules in the design and operation of sawmill lines for timber production. Using it, lumber yield can be increased by 3-5%.
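
A generic sketch of the Lagrange-multiplier formulation described above is given below; the symbols and the Pythagorean form of the constraints are assumptions for illustration, since the six constraint equations are not reproduced here:

```latex
% Illustrative Lagrange-multiplier setup (symbols are assumptions; the six
% actual constraint equations are not reproduced in the abstract):
\[
\max_{b,\,h} \; f(b,h) = \sum_{i} b_i h_i
\qquad \text{subject to } g_j(b,h;d) = 0,\quad j = 1,\dots,6,
\]
\[
\mathcal{L}(b,h,\lambda) = f(b,h) + \sum_{j=1}^{6} \lambda_j\, g_j(b,h;d),
\qquad \nabla_{b,h,\lambda}\,\mathcal{L} = 0,
\]
% where b_i, h_i are the width and thickness of board i, d is the top-end
% diameter, and each g_j is of Pythagorean form (e.g. b_i^2 + h_i^2 = d^2
% for a board inscribed at the periphery of the log).
```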


2021 ◽  
Vol 5 (1) ◽  
pp. 42
Author(s):  
Frainskoy Rio Naibaho

Optimization is a step in solving a problem in order to obtain a more favorable result, where what counts as favorable depends on the point of view taken or the needs at hand. The optimal value may be favorable as a maximum or as a minimum. A problem can be solved in different ways to produce the best solution, and the best conditions can be judged from many angles, including tolerance, method, and the problem itself. Many theories have been developed to solve optimization problems, and the topic is widely discussed because it is very close to everyday life. In this case, optimization can be interpreted as the process of achieving the most optimal result by adjusting inputs, selecting equipment, applying mathematical procedures, and testing. In this paper, the Partitioning Around Medoids (PAM) method succeeds in optimizing class grouping by calculating the closest distance between each student's achievement and intelligence.
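
A minimal sketch of the PAM (k-medoids) idea applied to two-dimensional (achievement, intelligence) records is given below; the data, the number of classes, and the helper `pam` are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def pam(points, k, n_iter=100, seed=0):
    """Basic Partitioning Around Medoids: greedy swapping of medoids."""
    rng = np.random.default_rng(seed)
    n = len(points)
    # pairwise Euclidean distances between all records
    dist = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    medoids = list(rng.choice(n, size=k, replace=False))

    def total_cost(meds):
        # each point is assigned to its closest medoid
        return dist[:, meds].min(axis=1).sum()

    cost = total_cost(medoids)
    for _ in range(n_iter):
        improved = False
        for mi in range(k):
            for candidate in range(n):
                if candidate in medoids:
                    continue
                trial = medoids.copy()
                trial[mi] = candidate
                trial_cost = total_cost(trial)
                if trial_cost < cost:      # accept the swap if it lowers cost
                    medoids, cost, improved = trial, trial_cost, True
        if not improved:
            break
    labels = dist[:, medoids].argmin(axis=1)
    return medoids, labels

# illustrative (achievement, intelligence) scores for eight students
students = np.array([[85, 120], [82, 118], [60, 95], [58, 97],
                     [90, 125], [55, 90], [88, 122], [62, 93]], dtype=float)
medoids, labels = pam(students, k=2)
print("medoid rows:", medoids, "class assignment:", labels)
```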


2016 ◽  
Vol 2016 ◽  
pp. 1-16
Author(s):  
Qingfa Li ◽  
Yaqiu Liu ◽  
Liangkuan Zhu

We propose a one-layer neural network for solving a class of constrained optimization problems that arises from the MDF continuous hot-pressing process. The objective function of the optimization problem is the sum of a nonsmooth convex function and a smooth nonconvex pseudoconvex function, and the feasible set consists of two parts: one is a closed convex subset of R^n, and the other is defined by a class of smooth convex functions. Using smoothing techniques, projection, a penalty function, and a regularization term, the proposed network is modeled by a differential equation that can be implemented easily. Without any additional conditions, we prove the global existence of the solutions of the proposed neural network for any initial point in the closed convex subset. We show that any accumulation point of the solutions of the proposed neural network is not only a feasible point but also an optimal solution of the considered optimization problem, even though the objective function is not convex. Numerical experiments on the MDF hot-pressing process, including model building and parameter optimization, are carried out on a real data set and indicate the good performance of the proposed neural network in applications.
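
For intuition, a much-simplified projection-based neurodynamic sketch is given below: explicit Euler integration of a differential equation whose equilibria satisfy an optimality condition. It uses a smooth convex toy objective on a box constraint and is not the network proposed in the paper, which additionally relies on smoothing, penalty, and regularization terms:

```python
import numpy as np

# Toy instance: minimize f(x) = 0.5 * ||A x - b||^2 over the box [0, 1]^n.
rng = np.random.default_rng(1)
A = rng.standard_normal((6, 4))
b = rng.standard_normal(6)

def grad_f(x):
    return A.T @ (A @ x - b)

def project_box(y, lo=0.0, hi=1.0):
    # projection onto the closed convex set Omega = [lo, hi]^n
    return np.clip(y, lo, hi)

# Projection neurodynamics: dx/dt = -x + P_Omega(x - alpha * grad f(x)).
# Equilibria satisfy x = P_Omega(x - alpha * grad f(x)), an optimality condition.
x = np.full(4, 0.5)          # initial point inside the feasible set
alpha, dt = 0.1, 0.01
for _ in range(20000):       # explicit Euler integration of the ODE
    x = x + dt * (-x + project_box(x - alpha * grad_f(x)))

print("approximate solution:", x)
print("objective value:", 0.5 * np.sum((A @ x - b) ** 2))
```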


SPE Journal ◽  
2021 ◽  
pp. 1-28
Author(s):  
Faruk Alpak ◽  
Vivek Jain ◽  
Yixuan Wang ◽  
Guohua Gao

Summary We describe the development and validation of a novel algorithm for field-development optimization problems and document field-testing results. Our algorithm is founded on recent developments in bound-constrained multiobjective optimization of nonsmooth functions for problems in which the structure of the objective functions either cannot be exploited or is nonexistent. Such situations typically arise when the functions are computed as the result of numerical modeling, such as reservoir-flow simulation within the context of field-development planning and reservoir management. We propose an efficient implementation of a novel parallel algorithm, namely BiMADS++, for the biobjective optimization problem. Biobjective optimization is a special case of multiobjective optimization with the property that Pareto points may be ordered, which is extensively exploited by the BiMADS++ algorithm. The optimization algorithm generates an approximation of the Pareto front by solving a series of single-objective formulations of the biobjective optimization problem. These single-objective problems are solved using a new and more efficient implementation of the mesh adaptive direct search (MADS) algorithm, developed for nonsmooth optimization problems that arise within reservoir-simulation-based optimization workflows. The MADS algorithm is extensively benchmarked against alternative single-objective optimization techniques before the BiMADS++ implementation. Both the MADS optimization engine and the master BiMADS++ algorithm are implemented from the ground up by resorting to a distributed parallel computing paradigm using message passing interface (MPI) for efficiency in industrial-scale problems. BiMADS++ is validated and field tested on well-location optimization (WLO) problems. We first validate and benchmark the accuracy and computational performance of the MADS implementation against a number of alternative parallel optimizers [e.g., particle-swarm optimization (PSO), genetic algorithm (GA), and simultaneous perturbation and multivariate interpolation (SPMI)] within the context of single-objective optimization. We also validate the BiMADS++ implementation using a challenging analytical problem that gives rise to a discontinuous Pareto front. We then present BiMADS++ WLO applications on two simple, intuitive, and yet realistic problems, and a model of a real problem with a known Pareto front. Finally, we discuss the results of the field-testing work on three real-field deepwater models. The BiMADS++ implementation enables the user to identify various compromise solutions of the WLO problem with a single optimization run without resorting to ad hoc adjustments of penalty weights in the objective function. The elimination of this “trial-and-error” procedure, together with the distributed parallel implementation, renders BiMADS++ easy to use and significantly more efficient in terms of the computational speed needed to determine alternative compromise solutions of a given WLO problem. In a field-testing example, BiMADS++ delivered a workflow speedup of greater than fourfold with a single biobjective optimization run over the weighted-sums objective-function approach, which requires multiple single-objective-function optimization runs.
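
For intuition only, a bare-bones poll-and-refine direct search in the spirit of pattern-search methods is sketched below; it is a didactic illustration with a made-up placeholder objective, not the MADS or BiMADS++ implementation described here, which uses richer poll directions, mesh adaptation, and MPI parallelism:

```python
import numpy as np

def simulator_objective(x):
    # placeholder for a simulation-based objective (nonsmooth, black box)
    return abs(x[0] - 1.5) + (x[1] + 0.5) ** 2

def direct_search(f, x0, step=1.0, tol=1e-6, max_iter=10000):
    """Coordinate poll with step refinement: evaluate f at x +/- step * e_i;
    move to an improving poll point, otherwise halve the step size."""
    x, fx = np.asarray(x0, dtype=float), f(x0)
    n = len(x)
    for _ in range(max_iter):
        improved = False
        for d in np.vstack([np.eye(n), -np.eye(n)]):   # poll directions
            trial = x + step * d
            f_trial = f(trial)
            if f_trial < fx:
                x, fx, improved = trial, f_trial, True
                break
        if not improved:
            step *= 0.5                                # refine the step/mesh
            if step < tol:
                break
    return x, fx

x_best, f_best = direct_search(simulator_objective, x0=[0.0, 0.0])
print(x_best, f_best)   # should approach [1.5, -0.5]
```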


2011 ◽  
Vol 19 (4) ◽  
pp. 597-637 ◽  
Author(s):  
Francisco Chicano ◽  
L. Darrell Whitley ◽  
Enrique Alba

A small number of combinatorial optimization problems have search spaces that correspond to elementary landscapes, where the objective function f is an eigenfunction of the Laplacian that describes the neighborhood structure of the search space. Many problems are not elementary; however, the objective function of a combinatorial optimization problem can always be expressed as a superposition of multiple elementary landscapes if the underlying neighborhood used is symmetric. This paper presents theoretical results that provide the foundation for algebraic methods that can be used to decompose the objective function of an arbitrary combinatorial optimization problem into a sum of subfunctions, where each subfunction is an elementary landscape. Many steps of this process can be automated, and indeed a software tool could be developed that assists the researcher in finding a landscape decomposition. This methodology is then used to show that the subset sum problem is a superposition of two elementary landscapes, and to show that the quadratic assignment problem is a superposition of three elementary landscapes.
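
A small numerical check of the defining property is sketched below, using ONEMAX under the one-bit-flip neighborhood as an assumed illustrative example (it is not one of the problems analyzed in the paper): a landscape is elementary when the centered objective f - f_bar is an eigenfunction of the Laplacian L = dI - A of the neighborhood graph.

```python
import itertools
import numpy as np

n = 4
states = list(itertools.product([0, 1], repeat=n))
index = {s: i for i, s in enumerate(states)}

# adjacency matrix of the one-bit-flip neighborhood (an n-regular graph)
A = np.zeros((2 ** n, 2 ** n))
for s in states:
    for i in range(n):
        t = list(s)
        t[i] ^= 1
        A[index[s], index[tuple(t)]] = 1.0

L = n * np.eye(2 ** n) - A           # graph Laplacian of the search space

f = np.array([sum(s) for s in states], dtype=float)   # ONEMAX objective
centered = f - f.mean()

# For an elementary landscape, L (f - f_bar) = lambda * (f - f_bar).
Lf = L @ centered
mask = np.abs(centered) > 1e-12
ratio = Lf[mask] / centered[mask]
print("eigenvalue estimates:", np.unique(np.round(ratio, 10)))  # all equal to 2
```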


2014 ◽  
Vol 519-520 ◽  
pp. 811-815
Author(s):  
Xiao Hong Qiu ◽  
Yong Bo Tan ◽  
Bo Li

The fractal properties of optimization problems are first discussed. The multifractal parameters of the objective function are computed by the Detrended Fluctuation Analysis (DFA) method, and the multifractal generalized Hurst index is related to the difficulty of solving the optimization problem. These features are verified by analyzing the first six test functions proposed at the 2005 IEEE Congress on Evolutionary Computation. The results show that different objective functions exhibit clearly different multifractal characteristics and that the generalized Hurst index can be used to evaluate the difficulty of solving the optimization problem.
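
A sketch of ordinary (monofractal) DFA applied to objective values sampled along a random walk on an assumed sphere-like test function is given below; the multifractal DFA used in the paper additionally varies a moment order q, which is omitted here:

```python
import numpy as np

def dfa(series, scales=(4, 8, 16, 32, 64)):
    """Detrended Fluctuation Analysis: returns the scaling exponent alpha."""
    x = np.asarray(series, dtype=float)
    profile = np.cumsum(x - x.mean())            # integrated, mean-removed signal
    fluctuations = []
    for s in scales:
        n_seg = len(profile) // s
        rms = []
        for k in range(n_seg):
            seg = profile[k * s:(k + 1) * s]
            t = np.arange(s)
            trend = np.polyval(np.polyfit(t, seg, 1), t)   # linear detrending
            rms.append(np.sqrt(np.mean((seg - trend) ** 2)))
        fluctuations.append(np.mean(rms))
    alpha, _ = np.polyfit(np.log(scales), np.log(fluctuations), 1)
    return alpha

# objective values sampled along a random walk over a simple test function
rng = np.random.default_rng(0)
x = np.zeros(10)
values = []
for _ in range(4096):
    x += 0.05 * rng.standard_normal(10)          # random-walk step in the search space
    values.append(np.sum(x ** 2))                # sphere-like objective
print("DFA scaling exponent alpha:", dfa(values))
```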

