A Research on Optimum-Searching Quadratic Optimization for Very Large-Scale Standard Cell Placement

Author(s):  
Yongqiang Lu ◽  
Xianlong Hong ◽  
Qiang Zhou ◽  
Yici Cai ◽  
Zhuoyuan Li

2018 ◽  
Vol 27 (08) ◽  
pp. 1850122

Author(s):  
Sameer Pawanekar ◽  
Kalpesh Kapoor ◽  
Gaurav Trivedi

We present an analytical approach based on nonlinear programming for VLSI standard cell placement. Our method first clusters the netlist to reduce the number of cells and then performs quadratic optimization on the reduced netlist. Finally, it applies Nesterov's method to solve the resulting nonlinear equations. The framework of our tool, Kapees3, is scalable and generates high-quality results. Experimental results on the PEKO Suite 1 and PEKO Suite 2 benchmarks show promising improvements. On PEKO Suite 1, our placement tool outperforms NTUPlace3, Dragon, Feng Shui, and Capo10.5 by 46%, 57%, 48%, and 25%, respectively. On PEKO Suite 2, it outperforms NTUPlace3, Dragon, Feng Shui, Capo10.5, and mPL6 by 30%, 47%, 57%, 69%, and 2.7%, respectively. On the MMS benchmarks, we obtain wirelength improvements over Capo10.5 by 56.62%, FLOP by 7.84%, FastPlace by 11.55%, ComPLx by 4.58%, POLAR by 23.67%, mPL6 by 9.96%, and NTUPlace3-Unified by 2.96%.
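The optimization step described above, minimizing a quadratic (wirelength-style) objective with Nesterov's accelerated gradient method, can be sketched on a toy instance. The matrix, vector, and function names below are illustrative stand-ins, not Kapees3's actual code: `Q` plays the role of the connectivity (Laplacian-like) matrix of a clustered netlist and `b` the pull of fixed pins.

```python
import numpy as np

# Hypothetical toy instance of the quadratic objective
# f(x) = 1/2 x^T Q x - b^T x, with Q symmetric positive definite.
Q = np.array([[4.0, -1.0, -1.0],
              [-1.0, 3.0, -1.0],
              [-1.0, -1.0, 3.0]])
b = np.array([1.0, 0.0, 2.0])

def nesterov_qp(Q, b, iters=500):
    """Nesterov's accelerated gradient method for the quadratic objective."""
    L = np.linalg.eigvalsh(Q).max()        # Lipschitz constant of the gradient
    x = np.zeros_like(b)
    y = x.copy()                           # lookahead (extrapolated) point
    t = 1.0
    for _ in range(iters):
        x_next = y - (Q @ y - b) / L       # gradient step from the lookahead point
        t_next = (1 + np.sqrt(1 + 4 * t * t)) / 2
        y = x_next + ((t - 1) / t_next) * (x_next - x)  # momentum extrapolation
        x, t = x_next, t_next
    return x

x_star = nesterov_qp(Q, b)
# At the optimum the gradient Q x - b vanishes, i.e. Q x_star ≈ b.
```

The momentum sequence `t` gives the method its O(1/k²) convergence rate, versus O(1/k) for plain gradient descent, which is why accelerated schemes are attractive at the scale of modern netlists.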


Author(s):  
Xiaojian Yang ◽  
Elaheh Bozorgzadeh ◽  
Majid Sarrafzadeh ◽  
Maogang Wang

Technologies ◽  
2018 ◽  
Vol 7 (1) ◽  
pp. 3
Author(s):  
Panagiotis Oikonomou ◽  
Antonios Dadaliaris ◽  
Kostas Kolomvatsos ◽  
Thanasis Loukopoulos ◽  
Athanasios Kakarountas ◽  
...  

In standard cell placement, a circuit is given consisting of cells of standard height (but different widths), and the problem is to place the cells in the standard rows of a chip area so that no overlaps occur and some target function is optimized. The process is usually split into at least two phases. In the first pass, a global placement algorithm distributes the cells across the circuit area, while in the second step, a legalization algorithm aligns the cells to the standard rows of the power grid and removes any overlaps. While a few legalization schemes have been proposed in the past for the basic problem formulation, few obstacle-aware extensions exist, and they usually impose extreme trade-offs between time performance and optimization efficiency. In this paper, we focus on the legalization step in the presence of pre-allocated modules acting as obstacles. We extend two well-known algorithmic approaches, namely Tetris and Abacus, so that they become obstacle-aware. Furthermore, we propose a parallelization scheme to tackle the computational complexity. The experiments illustrate that the proposed parallelization method achieves good scalability, while it also efficiently prunes the search space, resulting in a superlinear speedup. Moreover, this time performance comes at only a small cost (and sometimes even an improvement) in the typical optimization metrics.
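The obstacle-aware Tetris extension can be illustrated with a minimal greedy sketch (the data layout and names here are hypothetical; the paper's actual implementation differs): cells are processed left to right by their global-placement x-coordinate, each is assigned to the row position of least displacement, and every row keeps a "frontier" that is advanced past pre-allocated obstacles.

```python
# A minimal sketch of Tetris-style greedy legalization made obstacle-aware.
ROW_YS = [0, 1, 2]              # hypothetical row y-coordinates
obstacles = {1: [(2.0, 5.0)]}   # row index -> sorted (x_left, x_right) blockages

def legalize(cells):
    """cells: list of (x, y, width) from global placement; returns legal (x, y, width)."""
    frontier = {r: 0.0 for r in range(len(ROW_YS))}  # first free x per row
    placed = []
    # Tetris processes cells left-to-right by their global-placement x.
    for x, y, w in sorted(cells):
        best = None
        for r, row_y in enumerate(ROW_YS):
            pos = frontier[r]
            # Obstacle-aware extension: skip past any blockage the cell overlaps.
            for ox_l, ox_r in obstacles.get(r, []):
                if pos < ox_r and pos + w > ox_l:
                    pos = ox_r
            cost = abs(pos - x) + abs(row_y - y)     # Manhattan displacement
            if best is None or cost < best[0]:
                best = (cost, r, pos)
        _, r, pos = best
        frontier[r] = pos + w                        # advance the row frontier
        placed.append((pos, ROW_YS[r], w))
    return placed
```

By construction no two cells in a row overlap (each starts at or after the frontier), and cells are pushed past obstacles before the displacement cost is evaluated, so the greedy row choice already accounts for the detour an obstacle forces.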


Author(s):  
Krešimir Mihić ◽  
Mingxi Zhu ◽  
Yinyu Ye

Abstract: The Alternating Direction Method of Multipliers (ADMM) has gained considerable attention for solving large-scale, objective-separable constrained optimization problems. However, the two-block variable structure of the ADMM still limits its practical computational efficiency, because at least one large matrix factorization is needed even for linear and convex quadratic programming. This drawback may be overcome by enforcing a multi-block structure on the decision variables in the original optimization problem. Unfortunately, the multi-block ADMM, with more than two blocks, is not guaranteed to converge. On the other hand, two positive developments have been made: first, if in each cyclic loop one randomly permutes the updating order of the multiple blocks, then the method converges in expectation for solving any system of linear equations with any number of blocks; second, such a randomly permuted ADMM also works for equality-constrained convex quadratic programming even when the objective function is not separable. The goal of this paper is twofold. First, we add more randomness to the ADMM by developing a randomly assembled cyclic ADMM (RAC-ADMM) in which the decision variables in each block are randomly assembled. We discuss the theoretical properties of RAC-ADMM, show when random assembling helps and when it hurts, and develop a criterion to guarantee that it converges almost surely. Second, guided by the theory of RAC-ADMM, we conduct multiple numerical tests on solving both randomly generated and large-scale benchmark quadratic optimization problems, including continuous and binary graph-partitioning and quadratic-assignment problems as well as selected machine learning problems. Our numerical tests show that RAC-ADMM, with a variable-grouping strategy, can significantly improve computational efficiency on most quadratic optimization problems.
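The random-assembling idea, regrouping the variables into fresh blocks on every sweep rather than only permuting a fixed partition, can be illustrated with a stripped-down block coordinate scheme on a symmetric positive definite linear system (a stand-in for a convex QP's optimality conditions; the full RAC-ADMM also maintains dual multipliers and handles constraints, which are omitted in this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical SPD system A x = b, e.g. the stationarity condition of an
# unconstrained convex quadratic program min 1/2 x^T A x - b^T x.
n, n_blocks = 12, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)        # well-conditioned SPD matrix
b = rng.standard_normal(n)

def rac_sweeps(A, b, n_blocks, sweeps=200):
    """Block coordinate descent with randomly assembled blocks in each sweep."""
    n = len(b)
    x = np.zeros(n)
    for _ in range(sweeps):
        perm = rng.permutation(n)               # randomly assemble the variables
        for blk in np.array_split(perm, n_blocks):
            # Exactly minimize over the block, holding the rest fixed:
            #   A[blk, blk] x[blk] = b[blk] - A[blk, rest] x[rest]
            rest = np.setdiff1d(np.arange(n), blk)
            rhs = b[blk] - A[np.ix_(blk, rest)] @ x[rest]
            x[blk] = np.linalg.solve(A[np.ix_(blk, blk)], rhs)
    return x

x = rac_sweeps(A, b, n_blocks)
# Each block solve only factorizes a small submatrix, which is the point of
# multi-block schemes: no single large factorization of A is ever required.
```

Each sweep touches every variable exactly once, but the grouping changes every sweep, mirroring the paper's observation that the choice of which variables share a block can matter as much as the order in which blocks are updated.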

