Large Scale Plant Optimization Problems

Author(s):  
Paul Cronin ◽  
Harry Woerde ◽  
Rob Vasbinder

1999 ◽
Vol 9 (3) ◽  
pp. 755-778 ◽  
Author(s):  
Paul T. Boggs ◽  
Anthony J. Kearsley ◽  
Jon W. Tolle

Author(s):  
Lu Chen ◽  
Handing Wang ◽  
Wenping Ma

Abstract: Real-world optimization applications in complex systems typically involve multiple factors to be optimized, which can be formulated as multi-objective optimization problems. These problems have been solved by many evolutionary algorithms, such as MOEA/D, NSGA-III, and KnEA. However, as the numbers of decision variables and objectives increase, the computational cost of those algorithms becomes unaffordable. To reduce this high computational cost on large-scale many-objective optimization problems, we propose a two-stage framework. The first stage combines a multi-tasking optimization strategy with a bi-directional search strategy, reformulating the original problem as a multi-tasking optimization problem in the decision space to enhance convergence. To improve diversity, in the second stage the proposed algorithm applies multi-tasking optimization to a number of sub-problems defined by reference points in the objective space. To show the effectiveness of the proposed algorithm, we test it on the DTLZ and LSMOP problems and compare it with existing algorithms; it outperforms the compared algorithms in most cases, showing advantages in both convergence and diversity.
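
The reference points mentioned in the second stage are typically a set of uniformly spread weight vectors on the unit simplex. The sketch below is our own illustration of the standard Das-Dennis construction commonly used by NSGA-III-style decomposition, not the authors' specific procedure; the function name and parameter values are assumptions.

```python
from itertools import combinations

def das_dennis_reference_points(n_obj, divisions):
    """Uniformly spread points on the unit simplex (Das-Dennis construction),
    often used to define reference directions / sub-problems in
    decomposition-based many-objective EAs such as NSGA-III and MOEA/D."""
    points = []
    # Stars-and-bars: choose (n_obj - 1) bar positions among (divisions + n_obj - 1) slots.
    for bars in combinations(range(divisions + n_obj - 1), n_obj - 1):
        prev, parts = -1, []
        for b in bars:
            parts.append(b - prev - 1)
            prev = b
        parts.append(divisions + n_obj - 1 - prev - 1)
        points.append([p / divisions for p in parts])  # each point sums to 1
    return points

# Usage: 3 objectives with 4 divisions gives C(6, 2) = 15 reference points.
refs = das_dennis_reference_points(n_obj=3, divisions=4)
print(len(refs), refs[:3])
```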


Nanophotonics ◽  
2020 ◽  
Vol 9 (13) ◽  
pp. 4193-4198 ◽  
Author(s):  
Midya Parto ◽  
William E. Hayenga ◽  
Alireza Marandi ◽  
Demetrios N. Christodoulides ◽  
Mercedeh Khajavikhan

AbstractFinding the solution to a large category of optimization problems, known as the NP-hard class, requires an exponentially increasing solution time using conventional computers. Lately, there has been intense efforts to develop alternative computational methods capable of addressing such tasks. In this regard, spin Hamiltonians, which originally arose in describing exchange interactions in magnetic materials, have recently been pursued as a powerful computational tool. Along these lines, it has been shown that solving NP-hard problems can be effectively mapped into finding the ground state of certain types of classical spin models. Here, we show that arrays of metallic nanolasers provide an ultra-compact, on-chip platform capable of implementing spin models, including the classical Ising and XY Hamiltonians. Various regimes of behavior including ferromagnetic, antiferromagnetic, as well as geometric frustration are observed in these structures. Our work paves the way towards nanoscale spin-emulators that enable efficient modeling of large-scale complex networks.
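
As a concrete illustration of the mapping from an NP-hard problem to an Ising ground-state search, the following is our own toy example (the graph, weights, and function names are assumptions, not taken from the paper): a small Max-Cut instance is encoded as a classical Ising Hamiltonian and its ground state is found by brute force.

```python
import itertools

# Hypothetical example: map a small Max-Cut instance onto a classical Ising
# Hamiltonian H(s) = -sum_{i<j} J_ij * s_i * s_j with spins s_i in {-1, +1}.
# An edge (i, j) with weight w contributes J_ij = -w (antiferromagnetic), so
# minimizing H maximizes the total weight of edges cut by the spin assignment.

edges = {(0, 1): 1.0, (1, 2): 1.0, (0, 2): 1.0, (2, 3): 2.0}  # toy graph

def ising_energy(spins, couplings):
    """Classical Ising energy H(s) = -sum J_ij s_i s_j."""
    return -sum(J * spins[i] * spins[j] for (i, j), J in couplings.items())

couplings = {e: -w for e, w in edges.items()}  # Max-Cut -> antiferromagnetic J

n = 4
best = min(itertools.product((-1, 1), repeat=n),
           key=lambda s: ising_energy(s, couplings))
print("ground-state spins:", best)
print("cut weight:", sum(w for (i, j), w in edges.items() if best[i] != best[j]))
```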


Algorithms ◽  
2021 ◽  
Vol 14 (5) ◽  
pp. 146
Author(s):  
Aleksei Vakhnin ◽  
Evgenii Sopov

Modern real-valued optimization problems are complex and high-dimensional, and they are known as “large-scale global optimization (LSGO)” problems. Classic evolutionary algorithms (EAs) perform poorly on this class of problems because of the curse of dimensionality. Cooperative Coevolution (CC) is a high-performing framework for decomposing large-scale problems into smaller and easier subproblems by grouping objective variables. The efficiency of CC strongly depends on the size of the groups and the grouping approach. In this study, an improved CC (iCC) approach for solving LSGO problems is proposed and investigated. iCC changes the number of variables in subcomponents dynamically during the optimization process. The SHADE algorithm is used as the subcomponent optimizer. We have investigated the performance of iCC-SHADE and CC-SHADE on fifteen problems from the LSGO CEC'13 benchmark set provided by the IEEE Congress on Evolutionary Computation. The results of numerical experiments show that iCC-SHADE outperforms, on average, CC-SHADE with a fixed number of subcomponents. We have also compared iCC-SHADE with several state-of-the-art LSGO metaheuristics; the experimental results show that the proposed algorithm is competitive with other efficient metaheuristics.
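
To make the grouping idea concrete, here is a minimal sketch of a cooperative-coevolution loop. It is our own illustration, not the iCC-SHADE implementation: a simple random perturbation stands in for the SHADE subcomponent optimizer, and the group size, bounds, and budget are assumptions.

```python
import random

def cooperative_coevolution(f, dim, group_size, iters, seed=0):
    """Generic CC skeleton: optimize each variable group against a shared
    context vector. A trivial Gaussian perturbation stands in for SHADE."""
    rng = random.Random(seed)
    idx = list(range(dim))
    rng.shuffle(idx)  # random static grouping of decision variables
    groups = [idx[i:i + group_size] for i in range(0, dim, group_size)]
    context = [rng.uniform(-5.0, 5.0) for _ in range(dim)]  # shared solution
    best = f(context)
    for _ in range(iters):
        for group in groups:                 # round-robin over subcomponents
            trial = context[:]
            for j in group:                  # perturb only this group's variables
                trial[j] += rng.gauss(0.0, 0.5)
            val = f(trial)
            if val < best:                   # keep the improvement in the context
                best, context = val, trial
    return best, context

# Usage: minimize a separable sphere function in 100 dimensions.
sphere = lambda x: sum(v * v for v in x)
best_val, _ = cooperative_coevolution(sphere, dim=100, group_size=10, iters=200)
print(best_val)
```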


1995 ◽  
Vol 117 (1) ◽  
pp. 155-157 ◽  
Author(s):  
F. C. Anderson ◽  
J. M. Ziegler ◽  
M. G. Pandy ◽  
R. T. Whalen

We have examined the feasibility of using massively-parallel and vector-processing supercomputers to solve large-scale optimization problems for human movement. Specifically, we compared the computational expense of determining the optimal controls for the single support phase of gait using a conventional serial machine (SGI Iris 4D25), a MIMD parallel machine (Intel iPSC/860), and a parallel-vector-processing machine (Cray Y-MP 8/864). With the human body modeled as a 14 degree-of-freedom linkage actuated by 46 musculotendinous units, computation of the optimal controls for gait could take up to 3 months of CPU time on the Iris. Both the Cray and the Intel are able to reduce this time to practical levels. The optimal solution for gait can be found with about 77 hours of CPU on the Cray and with about 88 hours of CPU on the Intel. Although the overall speeds of the Cray and the Intel were found to be similar, the unique capabilities of each machine are better suited to different portions of the computational algorithm used. The Intel was best suited to computing the derivatives of the performance criterion and the constraints whereas the Cray was best suited to parameter optimization of the controls. These results suggest that the ideal computer architecture for solving very large-scale optimal control problems is a hybrid system in which a vector-processing machine is integrated into the communication network of a MIMD parallel machine.


2014 ◽  
Vol 2014 ◽  
pp. 1-7
Author(s):  
Min Sun ◽  
Jing Liu

Recently, Zhang et al. proposed a sufficient descent Polak-Ribière-Polyak (SDPRP) conjugate gradient method for large-scale unconstrained optimization problems and proved its global convergence in the sense that $\liminf_{k\to\infty}\|\nabla f(x_k)\| = 0$ when an Armijo-type line search is used. In this paper, motivated by the line searches proposed by Shi et al. and Zhang et al., we propose two new Armijo-type line searches and show that the SDPRP method has strong convergence in the sense that $\lim_{k\to\infty}\|\nabla f(x_k)\| = 0$ under the two new line searches. Numerical results are reported to show the efficiency of the SDPRP method with the new Armijo-type line searches in practical computation.
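
For context, below is a minimal sketch of a Polak-Ribière-Polyak conjugate gradient step with a standard backtracking Armijo line search. It is our own generic version, not Zhang et al.'s SDPRP nor the two new line searches proposed in the paper; the restart safeguard, PRP+ truncation, and parameter values are assumptions.

```python
import numpy as np

def prp_cg(f, grad, x0, max_iter=500, tol=1e-6, c1=1e-4, rho=0.5):
    """Polak-Ribiere-Polyak CG with a backtracking Armijo line search (generic sketch)."""
    x = np.asarray(x0, dtype=float)
    g = grad(x)
    d = -g                                      # start with steepest descent
    for _ in range(max_iter):
        if np.linalg.norm(g) < tol:
            break
        if g @ d >= 0:                          # safeguard: restart if d is not a descent direction
            d = -g
        # Armijo backtracking: accept a once f(x + a*d) <= f(x) + c1*a*g^T d
        a, fx, gd = 1.0, f(x), g @ d
        while f(x + a * d) > fx + c1 * a * gd:
            a *= rho
        x_new = x + a * d
        g_new = grad(x_new)
        beta = g_new @ (g_new - g) / (g @ g)    # PRP coefficient
        d = -g_new + max(beta, 0.0) * d         # PRP+ truncation
        x, g = x_new, g_new
    return x

# Usage on a simple quadratic: minimize ||x - 1||^2 starting from the origin.
sol = prp_cg(lambda x: np.sum((x - 1.0) ** 2), lambda x: 2.0 * (x - 1.0), np.zeros(5))
print(sol)  # should approach a vector of ones
```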


Author(s):  
Alexander Murray ◽  
Timm Faulwasser ◽  
Veit Hagenmeyer ◽  
Mario E. Villanueva ◽  
Boris Houska

Abstract: This paper presents a novel partially distributed outer approximation algorithm, named PaDOA, for solving a class of structured mixed-integer convex programming problems to global optimality. The proposed scheme uses an iterative outer approximation method for coupled mixed-integer optimization problems with separable convex objective functions, affine coupling constraints, and compact domain. PaDOA proceeds by alternating between solving large-scale structured mixed-integer linear programming problems and partially decoupled mixed-integer nonlinear programming subproblems that comprise far fewer integer variables. We establish conditions under which PaDOA converges to global minimizers after a finite number of iterations and verify these properties with applications to thermostatically controlled loads and to mixed-integer regression.
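
To illustrate the alternation between master and subproblems, the following is a minimal sketch of a classical outer-approximation loop on a toy convex MINLP. This is our own example, not PaDOA's partially distributed scheme; the objective, bounds, and the enumeration-based stand-in for the MILP master solver are assumptions.

```python
import numpy as np
from scipy.optimize import minimize

# Toy convex MINLP: minimize (x - 1.5)^2 + (z - 2.3)^2,
# with x in [0, 5] continuous and z in {0, ..., 5} integer.
f = lambda x, z: (x - 1.5) ** 2 + (z - 2.3) ** 2
grad = lambda x, z: np.array([2.0 * (x - 1.5), 2.0 * (z - 2.3)])

cuts, z, upper, incumbent = [], 0, np.inf, None
for _ in range(20):
    # NLP subproblem: fix the integer variable, optimize the continuous one.
    x = minimize(lambda v: f(v[0], z), x0=[0.0], bounds=[(0.0, 5.0)]).x[0]
    if f(x, z) < upper:
        upper, incumbent = f(x, z), (x, z)
    cuts.append((x, z, f(x, z), grad(x, z)))  # store a new linearization
    # Master problem: minimize the piecewise-linear outer approximation.
    # Grid enumeration stands in for the MILP solver that would be used at scale.
    lower, z = min(
        (max(fk + gk @ np.array([xc - xk, zc - zk]) for xk, zk, fk, gk in cuts), zc)
        for zc in range(6) for xc in np.linspace(0.0, 5.0, 201)
    )
    if upper - lower < 1e-4:                   # stop once the bounds meet
        break
print("incumbent (x, z):", incumbent)
```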

