Min-cut placement with global objective functions for large scale sea-of-gates arrays

Author(s):  
K. Takahashi ◽  
K. Nakajima ◽  
M. Terai ◽  
K. Sato


2018 ◽  
Vol 26 (4) ◽  
pp. 569-596 ◽  
Author(s):  
Yuping Wang ◽  
Haiyan Liu ◽  
Fei Wei ◽  
Tingting Zong ◽  
Xiaodong Li

For a large-scale global optimization (LSGO) problem, divide-and-conquer is usually considered an effective strategy: the problem is decomposed into smaller subproblems, each of which can then be solved individually. Among these decomposition methods, variable grouping has shown promise in recent years. Existing variable grouping methods usually assume the problem to be black-box (i.e., that an analytical model of the objective function is unknown), and they attempt to learn a variable grouping that allows a better decomposition of the problem. In such cases, these methods do not make direct use of the formula of the objective function. However, many real-world problems are white-box problems; that is, the formulas of their objective functions are known a priori. These formulas provide rich information which can be used to design an effective variable grouping method. In this article, a formula-based grouping strategy (FBG) for white-box problems is first proposed. It groups variables directly via the formula of an objective function, which usually consists of a finite number of operations (the four arithmetic operations “+”, “−”, “×”, “÷” and composite operations of basic elementary functions). FBG classifies the operations into two classes: one resulting in nonseparable variables, and the other resulting in separable variables. With FBG, variables can be automatically grouped into a suitable number of non-interacting subcomponents, with the variables in each subcomponent being interdependent. FBG can easily be applied to any white-box problem and can be integrated into a cooperative coevolution framework. 
Based on FBG, a novel cooperative coevolution algorithm with formula-based variable grouping (called CCF) is proposed in this article for decomposing a large-scale white-box problem into several smaller subproblems and optimizing each of them separately. To further enhance the efficiency of CCF, a new local search scheme is designed to improve solution quality. To verify the efficiency of CCF, experiments are conducted on the standard LSGO benchmark suites of CEC'2008, CEC'2010, and CEC'2013, and on a real-world problem. Our results suggest that the performance of CCF is very competitive compared with that of state-of-the-art LSGO algorithms.
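The core idea of formula-based grouping can be sketched as follows: when the objective is a sum of terms, variables appearing together inside one term (linked by “×”, “÷”, or a composite function) interact, while “+”/“−” between terms keeps groups separable. A minimal illustration, assuming the formula has already been parsed into per-term variable sets (the parsing step and the `fbg_groups` helper are illustrative, not the paper's implementation):

```python
# Hypothetical FBG sketch: merge variables that co-occur in an additive term
# via union-find; the resulting components are the non-interacting subgroups.

def fbg_groups(n_vars, terms):
    """terms: list of sets of variable indices appearing in each additive term."""
    parent = list(range(n_vars))

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    for term in terms:
        term = sorted(term)
        for v in term[1:]:                 # link all variables in this term
            ra, rb = find(term[0]), find(v)
            if ra != rb:
                parent[rb] = ra

    groups = {}
    for v in range(n_vars):
        groups.setdefault(find(v), []).append(v)
    return sorted(groups.values())

# f(x) = x0*x1 + x2**2 + x2*x3  ->  subcomponents {x0, x1} and {x2, x3}
print(fbg_groups(4, [{0, 1}, {2}, {2, 3}]))  # [[0, 1], [2, 3]]
```

Each returned group can then be handed to its own subpopulation in a cooperative coevolution loop.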


2017 ◽  
Vol 2017 ◽  
pp. 1-13 ◽  
Author(s):  
Danwen Bao ◽  
Jiayu Gu ◽  
Junhua Jia

This paper establishes a bilevel planning model with one master and multiple slaves to solve traffic evacuation problems. The minimum evacuation network saturation and the shortest evacuation time are used as the objective functions for the upper- and lower-level models, respectively. The optimality conditions of this model are also analyzed. An improved particle swarm optimization (PSO) method is proposed by introducing an electromagnetism-like mechanism to solve the bilevel model and enhance its convergence efficiency. A case study is carried out using the Nanjing Olympic Sports Center. The results indicate that, for large-scale activities, the average evacuation time of the classic model is shorter but the road saturation distribution is more uneven; thus, the overall evacuation efficiency of the network is not high. For induced emergencies, the evacuation time of the bilevel planning model is shortened. When the audience arrival rate is increased from 50% to 100%, the evacuation time is shortened by 22% to 35%, indicating that the bilevel planning model is more effective than the classic model. Therefore, the model and algorithm presented in this paper can provide a theoretical basis for traffic-induced evacuation decision making for large-scale activities.
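For readers unfamiliar with the baseline solver, a minimal standard PSO loop for a lower-level minimization subproblem looks like the sketch below; the electromagnetism-like attraction/repulsion refinement from the paper is omitted, and all parameter values (`w`, `c1`, `c2`, swarm size) are illustrative assumptions:

```python
# Minimal PSO sketch: each particle tracks a velocity, its personal best, and
# is pulled toward both its personal best and the global best each iteration.
import random

def pso(f, dim, bounds, n_particles=20, iters=200, seed=1):
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    V = [[0.0] * dim for _ in range(n_particles)]
    P = [x[:] for x in X]                         # personal best positions
    pbest = [f(x) for x in X]
    g = min(range(n_particles), key=lambda i: pbest[i])
    gbest, G = pbest[g], P[g][:]                  # global best
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia, cognitive, social
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                V[i][d] = (w * V[i][d]
                           + c1 * rng.random() * (P[i][d] - X[i][d])
                           + c2 * rng.random() * (G[d] - X[i][d]))
                X[i][d] = min(hi, max(lo, X[i][d] + V[i][d]))
            fx = f(X[i])
            if fx < pbest[i]:
                pbest[i], P[i] = fx, X[i][:]
                if fx < gbest:
                    gbest, G = fx, X[i][:]
    return gbest, G

best, x = pso(lambda x: sum(v * v for v in x), dim=3, bounds=(-5, 5))
```

In the bilevel setting, `f` would evaluate the lower-level evacuation-time objective given the upper level's network-saturation decisions.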


Author(s):  
Ashwin P. Gurnani ◽  
Kemper Lewis

The design of large scale complex engineering systems requires interaction and communication between multiple disciplines and decentralized subsystems. One common fundamental assumption in decentralized design is that the individual subsystems exchange only design variable information and do not share objective functions or gradients. This is because the decentralized subsystems either cannot share this information due to geographical constraints or choose not to share it due to corporate secrecy issues. Game theory has been used to model the interactions between distributed design subsystems and to predict convergence and equilibrium solutions. These game-theoretic models assume that designers make perfectly rational decisions by selecting solutions from their Rational Reaction Set (RRS), resulting in a Nash equilibrium solution. However, empirical studies reject the claim that decision makers always make rational choices, and the concept of bounded rationality is used to explain such behavior. In this paper, a framework is proposed that uses the idea of bounded rationality in conjunction with set-based design, metamodeling, and multiobjective optimization techniques to improve solutions for convergent decentralized design problems. Through the use of this framework, entitled the Modified Approximation-based Decentralized Design (MADD) framework, convergent decentralized design problems converge to solutions that are superior to the Nash equilibrium. A two-subsystem mathematical problem is used as a case study, and simulation techniques are used to study the impact of the framework parameters on the final solution. The discipline-specific objective functions within the case study problem are unconstrained and continuous; however, the implementation of the MADD framework is not restricted to such problems.
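The game-theoretic baseline that MADD improves on can be illustrated with a toy two-subsystem problem: each subsystem repeatedly plays its rational reaction (best response) to the other's last design, and the iteration converges to the Nash equilibrium. The quadratic objectives and coefficients below are illustrative assumptions, not the paper's case study:

```python
# Best-response iteration for two coupled subsystems:
#   subsystem 1 minimizes (x - 0.5*y)**2        -> rational reaction x = 0.5*y
#   subsystem 2 minimizes (y - 0.5*x - 1.0)**2  -> rational reaction y = 0.5*x + 1
# Since the coupling is a contraction (|0.5 * 0.5| < 1), iteration converges.

def nash_by_best_response(iters=100):
    x, y = 0.0, 0.0                   # initial designs
    for _ in range(iters):
        x = 0.5 * y                   # subsystem 1 reacts to y
        y = 0.5 * x + 1.0             # subsystem 2 reacts to the new x
    return x, y

x, y = nash_by_best_response()
# Fixed point: x = 0.5*(0.5*x + 1)  ->  x = 2/3, y = 4/3
```

MADD's claim is that, by exchanging set-based approximations rather than single points, the subsystems can reach solutions that dominate this Nash fixed point.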


Complexity ◽  
2019 ◽  
Vol 2019 ◽  
pp. 1-15
Author(s):  
Bo Wang ◽  
Yanjing Li ◽  
Fei Yang ◽  
Xiaohua Xia

A technoeconomic optimization problem for a domestic grid-connected PV-battery hybrid energy system is investigated. It incorporates appliance time scheduling with appliance-specific power dispatch. The optimization is aimed at minimizing energy cost, maximizing renewable energy penetration, and increasing user satisfaction over a finite horizon. Nonlinear objective functions and constraints, as well as discrete and continuous decision variables, are involved. To solve the proposed mixed-integer nonlinear programming problem at a large scale, a competitive swarm optimizer-based numerical solver is designed and employed. The effectiveness of the proposed approach is verified by simulation results.
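A minimal sketch of the solver class named here, the competitive swarm optimizer (CSO): particles are paired at random each generation, the loser of each pairwise fitness comparison learns from the winner (and, weighted by `phi`, from the swarm mean), and the winner passes through unchanged. Parameter values are illustrative assumptions, and the paper's mixed-integer handling is omitted:

```python
# Competitive swarm optimizer sketch (continuous variables only).
import random

def cso(f, dim, bounds, swarm=40, iters=300, phi=0.1, seed=3):
    rng = random.Random(seed)
    lo, hi = bounds
    X = [[rng.uniform(lo, hi) for _ in range(dim)] for _ in range(swarm)]
    V = [[0.0] * dim for _ in range(swarm)]
    for _ in range(iters):
        mean = [sum(x[d] for x in X) / swarm for d in range(dim)]
        order = rng.sample(range(swarm), swarm)      # random pairing
        for a, b in zip(order[::2], order[1::2]):
            w, l = (a, b) if f(X[a]) <= f(X[b]) else (b, a)
            for d in range(dim):                     # only the loser moves
                V[l][d] = (rng.random() * V[l][d]
                           + rng.random() * (X[w][d] - X[l][d])
                           + phi * rng.random() * (mean[d] - X[l][d]))
                X[l][d] = min(hi, max(lo, X[l][d] + V[l][d]))
    return min(X, key=f)

best = cso(lambda x: sum(v * v for v in x), dim=5, bounds=(-10, 10))
```

Because only half the swarm is updated per generation and no global-best memory is kept, CSO scales well to the high-dimensional scheduling problems targeted here.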


Geophysics ◽  
1993 ◽  
Vol 58 (11) ◽  
pp. 1621-1628 ◽  
Author(s):  
Rune Mittet ◽  
Tom Houlder

Seismic data have been reported to carry information on both small scale and large scale medium variations, but not on intermediate size objects. This is a paradox compared to many other experiments performed with probes of wave nature, where objects of the size of the smallest wavelength or larger can be resolved. The sensitivity of reflected and transmitted seismic data to medium perturbations of varying sizes is investigated. The differences between data generated in a reference model and data generated in a perturbed model are measured. Both [Formula: see text] and [Formula: see text] type objective functions are used. The kernels of the objective functions consist of either stress or particle-velocity field components. Several experimental configurations and the sensitivity to various ways of performing the medium perturbations are analyzed. For all perturbation types that change the impedances, we find a resonant behavior in the objective functions for perturbations of the size of the typical wavelength of the source. For the experiments where impedances are kept fixed, we do not find this resonance, but there is a significant contribution to the objective function for all perturbation sizes larger than the shortest wavelength. That is, seismic data are sensitive to objects of the size of the smallest wavelength or larger.


Algorithms ◽  
2020 ◽  
Vol 13 (5) ◽  
pp. 108
Author(s):  
Alexey Vakhnin ◽  
Evgenii Sopov

Many modern real-valued optimization tasks use “black-box” (BB) models for evaluating objective functions, and they are high-dimensional and constrained. Using common classifications, we can identify them as constrained large-scale global optimization (cLSGO) tasks. Today, the IEEE Congress on Evolutionary Computation provides a special session and several benchmarks for LSGO. At the same time, cLSGO problems are not yet well studied. The majority of modern optimization techniques demonstrate insufficient performance when confronted with cLSGO tasks. The effectiveness of evolutionary algorithms (EAs) in solving constrained low-dimensional optimization problems has been proven in many scientific papers and studies. Moreover, the cooperative coevolution (CC) framework has been successfully applied to EAs used to solve LSGO problems. In this paper, a new approach for solving cLSGO tasks is proposed. This approach is based on CC and a method that increases the size of the groups of variables at the decomposition stage (iCC). A new algorithm is proposed, which combines the success-history based parameter adaptation for differential evolution (SHADE) optimizer, iCC, and the ε-constrained method (namely, ε-iCC-SHADE). We investigated the performance of ε-iCC-SHADE and compared it with the previously proposed ε-CC-SHADE algorithm on scalable problems from the IEEE CEC 2017 competition on constrained real-parameter optimization.
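The ε-constrained method mentioned above replaces a hard feasibility rule with a tolerance: candidates whose total constraint violation is within ε are compared by objective value, and only clearly infeasible candidates are ranked by violation. A minimal sketch of that comparison (function name and signature are illustrative):

```python
# ε-constrained pairwise comparison, as used inside ε-CC-SHADE-style solvers.

def eps_better(f1, viol1, f2, viol2, eps):
    """True if candidate 1 (objective f1, violation viol1) is preferred."""
    if (viol1 <= eps and viol2 <= eps) or viol1 == viol2:
        return f1 < f2          # both "feasible enough": compare objectives
    return viol1 < viol2        # otherwise: smaller violation wins

# A near-feasible candidate with a better objective beats a feasible one:
print(eps_better(1.0, 0.01, 2.0, 0.0, eps=0.05))   # True
# A clearly infeasible candidate loses regardless of its objective:
print(eps_better(0.5, 10.0, 2.0, 0.0, eps=0.05))   # False
```

Shrinking ε toward zero over the run gradually tightens the search back onto the feasible region.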


VLSI Design ◽  
1996 ◽  
Vol 5 (1) ◽  
pp. 37-48 ◽  
Author(s):  
Youssef Saab

Placement is an important constrained optimization problem in the design of very large scale integration (VLSI) circuits [1–4]. Simulated annealing [5] and min-cut placement [6] are two of the most successful approaches to the placement problem. Min-cut methods yield less congested and more routable placements at the expense of more wire-length, while simulated annealing methods optimize total wire-length more aggressively, with little emphasis on minimizing congestion. It is also well known that min-cut algorithms are substantially faster than simulated-annealing-based methods. In this paper, a fast min-cut algorithm (ROW-PLACE) for row-based placement is presented and is empirically shown to achieve simulated-annealing-quality wire-length on a number of benchmark circuits. In comparison with Timberwolf 6 [7], ROW-PLACE is at least 12 times faster in its normal mode and at least 25 times faster in its faster mode. The good results of ROW-PLACE are achieved using a very effective clustering-based partitioning algorithm in combination with constructive methods that reduce the wire-length of nets involved in terminal propagation.
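The min-cut placement skeleton is recursive bisection: partition the cells, halve the placement region, and recurse until each region holds one cell. The sketch below uses a trivial stand-in partitioner (an index-sorted split); a real tool such as ROW-PLACE would use a cut-minimizing partitioner with terminal propagation at this step:

```python
# Schematic min-cut placement by recursive bisection over a 1-D row of slots.

def mincut_place(cells, x0, x1):
    """Assign each cell an x-slot in [x0, x1) by recursive bisection."""
    if len(cells) <= 1:
        return {c: x0 for c in cells}
    half = len(cells) // 2
    ordered = sorted(cells)          # stand-in for a min-cut bipartition
    mid = (x0 + x1) // 2             # halve the region with the netlist
    place = mincut_place(ordered[:half], x0, mid)
    place.update(mincut_place(ordered[half:], mid, x1))
    return place

print(mincut_place(["a", "b", "c", "d"], 0, 4))
# {'a': 0, 'b': 1, 'c': 2, 'd': 3}
```

The quality of the final placement rests almost entirely on the partitioner plugged into the `ordered = ...` line, which is where the paper's clustering-based algorithm operates.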


Author(s):  
Chao Qian ◽  
Yang Yu ◽  
Ke Tang

Subset selection is a fundamental problem in many areas, which aims to select the best subset of size at most $k$ from a universe. Greedy algorithms are widely used for subset selection and have shown good approximation performance in deterministic situations. However, their behavior is stochastic in many realistic situations (e.g., large-scale and noisy ones). For general stochastic greedy algorithms, bounded approximation guarantees were previously obtained only for subset selection with monotone submodular objective functions, while real-world applications often involve non-monotone or non-submodular objective functions and can be subject to constraints more general than a size constraint. This work proves approximation guarantees for these cases, and thus largely extends the applicability of stochastic greedy algorithms.
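The setting analyzed here can be made concrete with a small sketch: greedy selection where every marginal-gain evaluation is perturbed by noise, so the greedy choice is itself stochastic. The objective and Gaussian noise model below are illustrative assumptions, not the paper's:

```python
# Greedy subset selection of size k under noisy gain evaluations.
import random

def noisy_greedy(universe, k, gain, noise=0.1, seed=7):
    rng = random.Random(seed)
    chosen = set()
    for _ in range(k):
        # pick the element whose (noisy) marginal gain looks largest
        best = max((e for e in universe if e not in chosen),
                   key=lambda e: gain(chosen, e) + rng.gauss(0, noise))
        chosen.add(best)
    return chosen

# Modular objective f(S) = sum(S): the marginal gain of e is just e, so with
# small noise the greedy picks are the largest elements.
picked = noisy_greedy(range(10), k=3, gain=lambda S, e: e)
```

The paper's contribution is bounding how far such noisy greedy picks can fall from optimal even when `gain` is non-monotone or non-submodular.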


2015 ◽  
Vol 17 (4) ◽  
pp. 534-550
Author(s):  
Mohammadamin Jahanpour ◽  
Abbas Afshar ◽  
Samuel Sandoval Solis

A cyclic storage system (CSS) is defined as physically interconnected and operationally integrated surface water and groundwater subsystems with full direct interactions between the subsystems. Mathematical development and implementation of a CSS model is very complex, and all previous works are fully case dependent, with minimal possibility of generalization. This article proposes an integrated development environment called CSSDev, which helps researchers create and design object-oriented CSS models more easily. Using CSSDev, researchers may skip regenerating repetitive simulation code for common elements of a CSS. CSSDev employs NSGA-II to optimally select the design parameters of the models. The two objective functions of the optimization problem are the system's total costs and the total loss associated with the development alternatives. A real-world large-scale CSS has been modeled and optimized to illustrate the performance of CSSDev. The final Pareto front is presented, and two selected solutions from the set of optimal non-dominated ones are evaluated and discussed.
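The Pareto front reported here is the set of non-dominated trade-offs between the two minimization objectives (total cost, total loss), which NSGA-II maintains internally. A minimal sketch of that filtering step, with illustrative data (the function and sample points are not from the paper):

```python
# Extract the non-dominated set under minimization of both objectives.

def pareto_front(points):
    """Return points not weakly dominated by any distinct point."""
    front = []
    for p in points:
        if not any(q != p and q[0] <= p[0] and q[1] <= p[1] for q in points):
            front.append(p)
    return front

# (cost, loss) pairs for candidate development alternatives:
designs = [(10, 5), (8, 7), (12, 4), (9, 9), (8, 6)]
print(sorted(pareto_front(designs)))  # [(8, 6), (10, 5), (12, 4)]
```

Every point on the returned front improves one objective only at the expense of the other, which is exactly the trade-off curve a decision maker inspects when picking among the non-dominated CSS designs.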

