The Livermore Brain: Massive Deep Learning Networks Enabled by High Performance Computing

2016 · Author(s): Barry Y. Chen

2020 · Vol 39 (4) · pp. 780-807 · Author(s): Mara Graziani, Ivan Eggel, François Deligand, Martin Bobák, Vincent Andrearczyk, ...

2021 · Vol 2062 (1) · pp. 012016 · Author(s): Sunil Pandey, Naresh Kumar Nagwani, Shrish Verma

Abstract: The training of deep learning convolutional neural networks is extremely compute intensive and takes a long time to complete on all but small datasets. This is a major limitation inhibiting the widespread adoption of convolutional neural networks in real-world applications, despite their superior image classification performance compared with other techniques. Research and development efforts are therefore being pursued in several directions with the objective of boosting the computational performance of convolutional neural networks. Against this background, the development of parallel and scalable deep learning convolutional neural network implementations for multisystem high performance computing architectures is important. Prior analysis based on computational experiments indicates that a combination of pipeline and task parallelism yields significant convolutional neural network performance gains of up to 18 times. This paper discusses the aspects that are important for implementing parallel and scalable convolutional neural networks on central processing unit based multisystem high performance computing architectures: computational pipelines, convolutional neural networks, convolutional neural network pipelines, multisystem high performance computing architectures, and parallel programming models.
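
To make the combination of pipeline and task parallelism concrete, the following is a minimal Python sketch; it is not the implementation evaluated in the paper. Each stage of a toy convolutional pipeline (convolution, then ReLU and max pooling) runs in its own process connected by queues (pipeline parallelism), and the convolution stage is replicated across two worker processes that share the input queue (task parallelism). The stage logic and names such as conv_stage and pool_stage are illustrative assumptions, written with the standard multiprocessing module and NumPy rather than the authors' code.

```python
# Sketch of pipeline + task parallelism for a toy CNN forward pass (illustrative only).
import multiprocessing as mp
import numpy as np


def conv_stage(in_q, out_q, kernel):
    """Pipeline stage 1: valid 2-D cross-correlation of each image with one kernel."""
    while True:
        item = in_q.get()
        if item is None:          # poison pill: propagate and shut this worker down
            out_q.put(None)
            break
        idx, img = item
        kh, kw = kernel.shape
        h, w = img.shape
        out = np.empty((h - kh + 1, w - kw + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
        out_q.put((idx, out))


def pool_stage(in_q, out_q, n_producers):
    """Pipeline stage 2: ReLU followed by 2x2 max pooling."""
    finished = 0
    while finished < n_producers:
        item = in_q.get()
        if item is None:          # one poison pill arrives per upstream worker
            finished += 1
            continue
        idx, fmap = item
        fmap = np.maximum(fmap, 0.0)
        h = fmap.shape[0] // 2 * 2
        w = fmap.shape[1] // 2 * 2
        pooled = fmap[:h, :w].reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))
        out_q.put((idx, pooled))
    out_q.put(None)               # signal the consumer that the pipeline is drained


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    images = [rng.standard_normal((28, 28)) for _ in range(8)]
    kernel = rng.standard_normal((3, 3))

    q_in, q_mid, q_out = mp.Queue(), mp.Queue(), mp.Queue()

    # Task parallelism: two identical convolution workers share the input queue.
    conv_workers = [mp.Process(target=conv_stage, args=(q_in, q_mid, kernel))
                    for _ in range(2)]
    # Pipeline parallelism: pooling runs in its own process, overlapping with convolution.
    pool_worker = mp.Process(target=pool_stage, args=(q_mid, q_out, len(conv_workers)))

    for p in conv_workers + [pool_worker]:
        p.start()

    for idx, img in enumerate(images):
        q_in.put((idx, img))
    for _ in conv_workers:        # one poison pill per convolution worker
        q_in.put(None)

    results = {}
    while True:
        item = q_out.get()
        if item is None:
            break
        idx, pooled = item
        results[idx] = pooled

    for p in conv_workers + [pool_worker]:
        p.join()

    print("images processed:", len(results),
          "pooled feature map shape:", results[0].shape)
```

Because successive images occupy different stages at the same time, stage execution overlaps, which is the kind of overlap from which the pipeline and task parallel speedups referred to in the abstract arise; on a multisystem architecture the queues would typically be replaced by message passing (for example MPI) rather than shared-memory queues.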

