Fast and reliable transient simulation and continuous optimization of large-scale gas networks

Author(s):  
Pia Domschke ◽  
Oliver Kolb ◽  
Jens Lang

We are concerned with the simulation and optimization of large-scale gas pipeline systems in an error-controlled environment. The gas flow dynamics are locally approximated by sufficiently accurate physical models taken from a hierarchy of decreasing complexity and varying over time. Feasible work regions of compressor stations consisting of several turbo compressors are included via semiconvex approximations of aggregated characteristic fields. A discrete adjoint approach within a first-discretize-then-optimize strategy is proposed, and sequential quadratic programming with an active-set strategy is applied to solve the nonlinear constrained optimization problems arising from the validation of nominations. The method proposed here accelerates the computation of near-term forecasts of sudden changes in gas management and allows for economic control of intra-day gas flow schedules in large networks. Case studies for real gas pipeline systems show the remarkable performance of the new method.
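As a rough illustration of the first-discretize-then-optimize idea, the sketch below solves a toy compressor-control problem with SciPy's SLSQP (an SQP method). The one-pipe model, the constant friction loss, and all numbers are hypothetical placeholders; the paper's hierarchical gas models, adjoint gradients, and active-set handling are not reproduced here.

```python
# Minimal sketch: first-discretize-then-optimize with SQP (SciPy's SLSQP).
# The toy "network" below (one pipe, one compressor, 3 time steps) is a
# hypothetical placeholder, not the model or adjoint code from the paper.
import numpy as np
from scipy.optimize import minimize

T = 3                                  # number of discrete time steps
p_in = np.array([60.0, 58.0, 57.0])    # inlet pressures [bar], toy data
p_target = 65.0                        # required outlet pressure [bar]

def cost(u):
    # energy-like cost of the compressor controls over the horizon
    return np.sum(u**2)

def pressure_constraints(u):
    # discretized outlet pressure must reach the target at every step:
    # p_out(t) = p_in(t) + u(t) - loss, with a constant toy friction loss
    loss = 2.0
    return p_in + u - loss - p_target  # must be >= 0

res = minimize(
    cost,
    x0=np.full(T, 5.0),                # initial control guess
    method="SLSQP",                    # SQP on the discretized problem
    constraints=[{"type": "ineq", "fun": pressure_constraints}],
    bounds=[(0.0, 20.0)] * T,          # feasible compressor range
)
print(res.x, res.fun)
```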

2018 ◽  
Vol 3 (1) ◽  
pp. 48 ◽  
Author(s):  
Ahmet Cevahir Cinar ◽  
Hazim Iscan ◽  
Mustafa Servet Kiran

Population-based swarm and evolutionary computation algorithms have attracted the interest of researchers due to their simple structure, optimization performance, and easy adaptation. Binary optimization problems can also be solved with these algorithms. This paper focuses on solving large-scale binary optimization problems with the Tree-Seed Algorithm (TSA), which was proposed for continuous optimization problems and imitates the relationship between trees and their seeds in nature. In this study, the basic TSA is modified with an XOR logic gate to handle binary optimization problems. To investigate the performance of the proposed algorithm, numeric benchmark problems with different dimensions are considered, and the obtained results show that the proposed algorithm produces effective and comparable solutions in terms of solution quality.

Keywords: binary optimization, tree-seed algorithm, XOR gate, large-scale optimization
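A minimal sketch of the XOR idea for a binary TSA follows, assuming a simple seed-generation rule in which selected bits of a tree are recombined with the best or a random tree via an XOR gate; the exact operator and parameter settings of the paper may differ.

```python
# Minimal sketch of XOR-based seed generation for a binary TSA.
# The update rule (XOR a tree with either the best tree or a random tree,
# gated by the search tendency ST) is an illustrative assumption, not
# necessarily the exact operator used in the paper.
import numpy as np

rng = np.random.default_rng(0)

def onemax(x):                          # toy objective: maximize number of ones
    return x.sum()

D, N, ST, iters = 50, 20, 0.1, 200      # dimension, population size, search tendency
trees = rng.integers(0, 2, size=(N, D))
fitness = np.array([onemax(t) for t in trees])
best = trees[fitness.argmax()].copy()

for _ in range(iters):
    for i in range(N):
        partner = best if rng.random() < ST else trees[rng.integers(N)]
        mask = rng.random(D) < 0.5      # which bits to recombine
        seed = trees[i].copy()
        # XOR gate: flip the selected bits wherever tree and partner differ
        seed[mask] ^= (trees[i] ^ partner)[mask]
        if onemax(seed) >= fitness[i]:  # keep the better seed
            trees[i], fitness[i] = seed, onemax(seed)
    best = trees[fitness.argmax()].copy()
print(fitness.max())
```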


Author(s):  
Zhi-Hui Zhan ◽  
Lin Shi ◽  
Kay Chen Tan ◽  
Jun Zhang

Complex continuous optimization problems are now widespread owing to the rapid development of the economy and society. Moreover, technologies such as the Internet of Things, cloud computing, and big data introduce further challenges to optimization, including Many-dimensions, Many-changes, Many-optima, Many-constraints, and Many-costs. We term these the 5-M challenges; they arise in large-scale optimization problems, dynamic optimization problems, multi-modal optimization problems, multi-objective optimization problems, many-objective optimization problems, constrained optimization problems, and expensive optimization problems in practical applications. Evolutionary computation (EC) algorithms are promising global optimization tools that have not only been widely applied to traditional optimization problems but have also, in recent years, attracted booming research on the above-mentioned complex continuous optimization problems. To show how promising and efficient EC algorithms are in dealing with the 5-M challenges, this paper presents a comprehensive survey built around a novel taxonomy based on the function of the approaches: reducing problem difficulty, increasing algorithm diversity, accelerating convergence, reducing running time, and extending the application field. Moreover, some future research directions on using EC algorithms to solve complex continuous optimization problems are proposed and discussed. We believe that such a survey can draw attention, raise discussions, and inspire new ideas for EC research into complex continuous optimization problems and real-world applications.


Mathematics ◽  
2019 ◽  
Vol 7 (11) ◽  
pp. 1056 ◽  
Author(s):  
Feng ◽  
Yu ◽  
Wang

As a significant subset of the family of discrete optimization problems, the 0-1 knapsack problem (0-1 KP) has received considerable attention from researchers. The monarch butterfly optimization (MBO) is a recent metaheuristic algorithm inspired by the migration behavior of monarch butterflies. The original MBO was proposed to solve continuous optimization problems. This paper presents a novel monarch butterfly optimization with a global position updating operator (GMBO), which can address the 0-1 KP, a well-known NP-complete problem. The global position updating operator is incorporated to help all the monarch butterflies rapidly move towards the global best position. Moreover, a dichotomy (binary) encoding scheme is adopted to represent monarch butterflies for solving the 0-1 KP. In addition, a specific two-stage repair operator is used to repair infeasible solutions and further improve feasible ones. Finally, orthogonal design (OD) is employed to find the most suitable parameters. Two sets of low-dimensional 0-1 KP instances and three kinds of 15 high-dimensional 0-1 KP instances are used to verify the ability of the proposed GMBO. An extensive comparative study of GMBO with five classical and two state-of-the-art algorithms is carried out. The experimental results clearly indicate that GMBO achieves better solutions on almost all the 0-1 KP instances and significantly outperforms the comparison algorithms.
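The two-stage repair can be illustrated with a common greedy variant: drop selected items by ascending value/weight ratio until the capacity constraint holds, then add unselected items by descending ratio while they still fit. The sketch below follows that generic scheme, not necessarily the paper's exact operator.

```python
# Minimal sketch of a two-stage repair operator for the 0-1 knapsack problem:
# stage 1 drops items until the weight constraint holds, stage 2 greedily adds
# items that still fit. Greedy ordering by value/weight ratio is a common
# choice and is assumed here; the paper's exact operator may differ.
import numpy as np

def repair(x, values, weights, capacity):
    x = x.copy()
    ratio = values / weights
    # stage 1: drop the least profitable selected items until feasible
    for i in np.argsort(ratio):
        if weights[x == 1].sum() <= capacity:
            break
        x[i] = 0
    # stage 2: add the most profitable unselected items that still fit
    for i in np.argsort(-ratio):
        if x[i] == 0 and weights[x == 1].sum() + weights[i] <= capacity:
            x[i] = 1
    return x

values = np.array([10.0, 5.0, 15.0, 7.0, 6.0])
weights = np.array([2.0, 3.0, 5.0, 7.0, 1.0])
x = repair(np.ones(5, dtype=int), values, weights, capacity=10.0)
print(x, values[x == 1].sum())
```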


2017 ◽  
Vol 25 (1) ◽  
pp. 143-171 ◽  
Author(s):  
Ilya Loshchilov

Limited-memory BFGS (L-BFGS; Liu and Nocedal, 1989) is often considered the method of choice for continuous optimization when first- or second-order information is available. However, the use of L-BFGS can be complicated in a black-box scenario where gradient information is not available and therefore has to be estimated numerically. The accuracy of this estimation, obtained by finite-difference methods, is often problem-dependent and may lead to premature convergence of the algorithm. This article demonstrates an alternative to L-BFGS, the limited-memory covariance matrix adaptation evolution strategy (LM-CMA) proposed by Loshchilov (2014). LM-CMA is a stochastic derivative-free algorithm for nonlinear, nonconvex numerical optimization problems. Inspired by L-BFGS, LM-CMA samples candidate solutions according to a covariance matrix reproduced from m direction vectors selected during the optimization process. The decomposition of the covariance matrix into Cholesky factors reduces the memory complexity to O(mn), where n is the number of decision variables. The time complexity of sampling one candidate solution is also O(mn), but amounts to only about 25 scalar-vector multiplications in practice. The algorithm has the important property of invariance with respect to strictly increasing transformations of the objective function; such transformations do not compromise its ability to approach the optimum. LM-CMA outperforms the original CMA-ES and its large-scale versions on nonseparable ill-conditioned problems, by a factor that increases with problem dimension. The invariance properties of the algorithm do not prevent it from demonstrating performance comparable to L-BFGS on nontrivial large-scale smooth and nonsmooth optimization problems.
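The O(mn) sampling can be sketched as follows, assuming the rank-one Cholesky-factor update A_{t+1} = a*A_t + b_t*p_t*v_t^T so that A*z is rebuilt from the m stored vector pairs without ever forming an n-by-n matrix; the constants and the selection of stored vectors are simplified relative to the actual LM-CMA bookkeeping.

```python
# Minimal sketch of O(mn) candidate sampling from m stored direction pairs,
# using the rank-one Cholesky-factor update A_{t+1} = a*A_t + b_t * p_t v_t^T.
# The constants a, b and the choice of stored vectors are toy values and do
# not reproduce the full LM-CMA selection and adaptation scheme.
import numpy as np

rng = np.random.default_rng(0)
n, m = 1000, 10                        # decision variables, stored pairs
a = 0.9                                # toy decay constant
P = rng.standard_normal((m, n))        # stored search directions p_t
V = rng.standard_normal((m, n))        # stored v_t = A_t^{-1} p_t (toy values)
b = np.full(m, 0.05)                   # toy update weights

def Az(z):
    """Apply the implicitly stored Cholesky factor A_m to z in O(mn)."""
    x = z.copy()
    for t in range(m):                 # oldest to newest rank-one update
        x = a * x + b[t] * (V[t] @ z) * P[t]
    return x

mean, sigma = np.zeros(n), 0.5
z = rng.standard_normal(n)
candidate = mean + sigma * Az(z)       # one sampled candidate solution
print(candidate[:5])
```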


2011 ◽  
Vol 19 (4) ◽  
pp. 525-560 ◽  
Author(s):  
Rajan Filomeno Coelho ◽  
Philippe Bouillard

This paper addresses continuous optimization problems with multiple objectives and parameter uncertainty defined by probability distributions. First, a reliability-based formulation is proposed, defining the nondeterministic Pareto set as the minimal solutions such that user-defined probabilities of nondominance and constraint satisfaction are guaranteed. The formulation can be incorporated with minor modifications into a multiobjective evolutionary algorithm (here: the nondominated sorting genetic algorithm II, NSGA-II). Then, in view of applying the method to large-scale structural engineering problems (for which the computational effort devoted to the optimization algorithm itself is negligible in comparison with the simulation), the second part of the study is concerned with reducing the number of function evaluations while avoiding modification of the simulation code. Therefore, nonintrusive stochastic metamodels are developed in two steps. First, for a given sampling of the deterministic variables, a preliminary decomposition of the random responses (objectives and constraints) is performed through polynomial chaos expansion (PCE), allowing the responses to be represented by a limited set of coefficients. Then, the metamodel is built by kriging interpolation of the PCE coefficients with respect to the deterministic variables. The method has been tested successfully on seven analytical test cases and on the 10-bar truss benchmark, demonstrating the potential of the proposed approach to provide reliability-based Pareto solutions at a reasonable computational cost.
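A toy sketch of the two-step metamodel follows, assuming a one-dimensional deterministic variable, a standard normal random input, a low-order probabilists' Hermite PCE, and scikit-learn's GaussianProcessRegressor as the kriging surrogate; the response function and all settings are illustrative stand-ins, not the authors' implementation.

```python
# Toy sketch of the two-step metamodel: (1) at each deterministic design point,
# fit a low-order Hermite PCE of a random response; (2) krige the PCE
# coefficients over the deterministic variable.
import numpy as np
from numpy.polynomial.hermite_e import hermevander
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(0)
degree, n_xi = 3, 200

def response(d, xi):                   # hypothetical stochastic response g(d, xi)
    return np.sin(d) + 0.3 * d * xi + 0.1 * xi**2

# step 1: PCE coefficients at each sampled deterministic design point
designs = np.linspace(0.0, 2.0, 8)
coeffs = []
for d in designs:
    xi = rng.standard_normal(n_xi)
    Phi = hermevander(xi, degree)      # probabilists' Hermite basis
    c, *_ = np.linalg.lstsq(Phi, response(d, xi), rcond=None)
    coeffs.append(c)
coeffs = np.array(coeffs)              # shape (n_designs, degree + 1)

# step 2: kriging (Gaussian process) of the coefficients vs. the design variable
gp = GaussianProcessRegressor().fit(designs.reshape(-1, 1), coeffs)

# evaluate the surrogate at a new design point
d_new = 1.3
c_new = gp.predict(np.array([[d_new]]))[0]
xi_new = rng.standard_normal(5)
print(hermevander(xi_new, degree) @ c_new)   # cheap surrogate evaluations
```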


Complexity ◽  
2020 ◽  
Vol 2020 ◽  
pp. 1-15 ◽  
Author(s):  
Qiangqiang Jiang ◽  
Yuanjun Guo ◽  
Zhile Yang ◽  
Zheng Wang ◽  
Dongsheng Yang ◽  
...  

The whale optimization algorithm (WOA), a novel nature-inspired swarm optimization algorithm, has demonstrated superiority in handling global continuous optimization problems. However, its performance deteriorates when applied to large-scale complex problems because the execution time required for the huge computational task grows rapidly. Since it is based on interactions within the population, WOA is naturally amenable to parallelism, which offers an effective way to mitigate the drawbacks of the sequential algorithm. In this paper, a field-programmable gate array (FPGA) is used as an accelerator, whose high-level synthesis uses the Open Computing Language (OpenCL) as a general programming paradigm for heterogeneous system-on-chip platforms. On this platform, a novel parallel framework for WOA, named PWOA, is presented. The proposed framework comprises two feasible parallel models, called partial parallel and all-FPGA parallel, respectively. Experiments are conducted by running WOA on a CPU and PWOA on the OpenCL-based FPGA heterogeneous platform to solve ten well-known benchmark functions. Two other classic algorithms, particle swarm optimization (PSO) and the competitive swarm optimizer (CSO), are adopted for comparison. Numerical results show that the proposed approach achieves promising computational performance coupled with efficient optimization on relatively large-scale complex problems.
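For reference, the sketch below shows the standard per-whale WOA update (encircling, spiral, and random search) whose independence across whales is what such a parallel framework exploits; the FPGA/OpenCL side is not shown, and the update follows the commonly published WOA equations rather than the paper's specific kernels.

```python
# Minimal sketch of the standard WOA position update. The inner loop over
# whales is the data-parallel part that a hardware accelerator would map to
# parallel kernels; everything here runs sequentially in plain Python.
import numpy as np

rng = np.random.default_rng(0)

def sphere(x):                          # toy benchmark function
    return np.sum(x**2, axis=-1)

N, D, T = 30, 10, 300                   # whales, dimensions, iterations
X = rng.uniform(-5.0, 5.0, size=(N, D))
best = X[np.argmin(sphere(X))].copy()

for t in range(T):
    a = 2.0 - 2.0 * t / T               # control parameter, decreases from 2 to 0
    for i in range(N):                  # each whale's update is independent
        r1, r2 = rng.random(D), rng.random(D)
        A, C = 2 * a * r1 - a, 2 * r2
        if rng.random() < 0.5:
            # encircle the best whale (small |A|) or search around a random whale
            ref = best if np.all(np.abs(A) < 1) else X[rng.integers(N)]
            X[i] = ref - A * np.abs(C * ref - X[i])
        else:
            # spiral (bubble-net) update around the current best
            l = rng.uniform(-1.0, 1.0)
            X[i] = np.abs(best - X[i]) * np.exp(l) * np.cos(2 * np.pi * l) + best
    cand = X[np.argmin(sphere(X))]
    if sphere(cand) < sphere(best):     # keep the best solution found so far
        best = cand.copy()

print(sphere(best))
```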

