A Preconditioned Iterative Approach for Efficient Full Chip Thermal Analysis on Massively Parallel Platforms

Technologies ◽  
2018 ◽  
Vol 7 (1) ◽  
pp. 1
Author(s):  
George Floros ◽  
Konstantis Daloukas ◽  
Nestor Evmorfopoulos ◽  
George Stamoulis

Efficient full-chip thermal simulation is among the most challenging problems facing the EDA industry today, especially for modern 3D integrated circuits, because thermal modeling approaches give rise to huge linear systems that require unreasonably long computational times. While formulating the problem with a thermal equivalent circuit is a prevalent approach and the model can be constructed easily, numerical simulation of the resulting 3D equation network is undesirably time-consuming. Direct linear solvers cannot handle problems of this size, and iterative methods are the only feasible approach. In this paper, we propose a computationally efficient iterative method with a parallel preconditioning technique that exploits the resources of massively parallel architectures such as Graphics Processing Units (GPUs). Experimental results demonstrate that the proposed method achieves a speedup of 2.2× in CPU execution and of 26.93× in GPU execution over a state-of-the-art iterative method.
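As a rough illustration of this class of solvers (not the paper's formulation or preconditioner), the sketch below applies a Jacobi-preconditioned Conjugate Gradient iteration to a toy 3D thermal-grid conductance matrix in SciPy; the grid size, conductance values, and heat-source vector are illustrative assumptions.

```python
# Minimal sketch: Jacobi-preconditioned CG on a toy 3D thermal-grid
# conductance matrix. All sizes and values below are assumptions.
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 32        # assumed nodes per axis (n**3 unknowns in total)
g = 1.0       # assumed inter-node thermal conductance

# 1D second-difference stencil; the 3D grid operator is its Kronecker sum.
lap1d = sp.diags([-g, 2 * g, -g], [-1, 0, 1], shape=(n, n), format="csr")
eye = sp.identity(n, format="csr")
A = (sp.kron(sp.kron(lap1d, eye), eye)
     + sp.kron(sp.kron(eye, lap1d), eye)
     + sp.kron(sp.kron(eye, eye), lap1d)).tocsr()
A = A + 1e-3 * sp.identity(n ** 3)   # conductance to ambient keeps A nonsingular

# Stand-in for the power/heat-source vector of the thermal equivalent circuit.
b = np.random.default_rng(0).random(n ** 3)

# Jacobi (diagonal) preconditioner: cheap and embarrassingly parallel,
# which is why diagonal-style preconditioners map well to GPUs.
diag = A.diagonal()
M = spla.LinearOperator(A.shape, matvec=lambda x: x / diag)

T, info = spla.cg(A, b, M=M, maxiter=500)
print("converged" if info == 0 else f"stopped, info={info}",
      "| residual norm:", np.linalg.norm(b - A @ T))
```

On a GPU platform, the same matrix-vector products and diagonal scaling would be offloaded to device kernels; this CPU-only sketch only shows the algorithmic structure.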




2009 ◽  
Vol 87 (5-6) ◽  
pp. 342-354 ◽  
Author(s):  
Vladislav Ganine ◽  
Mathias Legrand ◽  
Hannah Michalska ◽  
Christophe Pierre




Author(s):  
Fatemeh Tavakkoli ◽  
Siavash Ebrahimi ◽  
Shujuan Wang ◽  
Kambiz Vafai


Author(s):  
Katarzyna Grzesiak-Kopeć ◽  
Leszek Nowak ◽  
Maciej Ogorzałek


2017 ◽  
pp. 245-259
Author(s):  
Sumeet S. Kumar ◽  
Amir Zjajo ◽  
Rene van Leuken


Author(s):  
Yuanqing Cheng ◽  
Aida Todri-Sanial ◽  
Alberto Bosio ◽  
Luigi Dilillo ◽  
Patrick Girard ◽  
...  


2020 ◽  
Vol 34 (04) ◽  
pp. 3858-3865
Author(s):  
Huijie Feng ◽  
Chunpeng Wu ◽  
Guoyang Chen ◽  
Weifeng Zhang ◽  
Yang Ning

Recently, smoothing deep-neural-network-based classifiers via isotropic Gaussian perturbation has been shown to be an effective and scalable way to provide a state-of-the-art probabilistic robustness guarantee against ℓ2-norm-bounded adversarial perturbations. However, how to train a good base classifier that is both accurate and robust when smoothed has not been fully investigated. In this work, we derive a new regularized risk, in which the regularizer can adaptively encourage the accuracy and robustness of the smoothed counterpart while the base classifier is being trained. It is computationally efficient and can be implemented in parallel with other empirical defense methods. We discuss how to implement it under both standard (non-adversarial) and adversarial training schemes. At the same time, we also design a new certification algorithm, which can leverage the regularization effect to provide a tighter robustness lower bound that holds with high probability. Our extensive experiments demonstrate the effectiveness of the proposed training and certification approaches on the CIFAR-10 and ImageNet datasets.
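For context, the sketch below shows the generic randomized-smoothing setup this work builds on: Gaussian-noise-augmented training of a base classifier and a Monte Carlo majority-vote prediction for the smoothed classifier. The toy model, noise level sigma, and sample count are assumptions for illustration; the paper's adaptive regularizer and certification algorithm are not reproduced here.

```python
# Minimal sketch: Gaussian-augmented training and Monte Carlo smoothed
# prediction. The model, sigma, and sample counts are illustrative assumptions.
import torch
import torch.nn as nn

sigma = 0.25                                   # assumed smoothing noise level
model = nn.Sequential(nn.Flatten(),
                      nn.Linear(3 * 32 * 32, 10))   # toy stand-in for a CIFAR-10 net
opt = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

def train_step(x, y):
    """One step of training the base classifier on Gaussian-perturbed inputs."""
    noisy = x + sigma * torch.randn_like(x)    # isotropic Gaussian perturbation
    loss = loss_fn(model(noisy), y)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()

@torch.no_grad()
def smoothed_predict(x, num_samples=100):
    """Monte Carlo majority vote approximating the smoothed classifier on one input."""
    xs = x.unsqueeze(0).repeat(num_samples, 1, 1, 1)
    votes = model(xs + sigma * torch.randn_like(xs)).argmax(dim=1)
    return torch.bincount(votes, minlength=10).argmax().item()

# Toy usage with random tensors standing in for CIFAR-10 batches.
x, y = torch.randn(8, 3, 32, 32), torch.randint(0, 10, (8,))
print("loss:", train_step(x, y))
print("smoothed class:", smoothed_predict(x[0]))
```

A certified radius would additionally require estimating the top-class probability with a confidence bound, which is the part the paper's certification algorithm tightens; that step is omitted here.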


