Mary, Hugo, and Hugo*: Learning to schedule distributed data‐parallel processing jobs on shared clusters

Author(s):  
Lauritz Thamsen ◽  
Jossekin Beilharz ◽  
Vinh Thuy Tran ◽  
Sasho Nedelkoski ◽  
Odej Kao
Author(s):  
Thomas Benz ◽  
Luca Bertaccini ◽  
Florian Zaruba ◽  
Fabian Schuiki ◽  
Frank K. Gurkaynak ◽  
...  

2020 ◽  
Vol 34 (04) ◽  
pp. 3817-3824
Author(s):  
Aritra Dutta ◽  
El Houcine Bergou ◽  
Ahmed M. Abdelmoniem ◽  
Chen-Yu Ho ◽  
Atal Narayan Sahu ◽  
...  

Compressed communication, in the form of sparsification or quantization of stochastic gradients, is employed to reduce communication costs in distributed data-parallel training of deep neural networks. However, there is a discrepancy between theory and practice: while the theoretical analysis of most existing compression methods assumes that compression is applied to the gradients of the entire model, many practical implementations operate individually on the gradients of each layer of the model. In this paper, we prove that layer-wise compression is, in theory, better, because its convergence rate is upper bounded by that of entire-model compression for a wide range of biased and unbiased compression methods. However, despite this theoretical bound, our experimental study of six well-known methods shows that convergence, in practice, may or may not be better, depending on the trained model and the compression ratio. Our findings suggest that it would be advantageous for deep learning frameworks to support both layer-wise and entire-model compression.
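
To make the distinction concrete, the following minimal NumPy sketch (our illustration, not the paper's implementation) contrasts layer-wise and entire-model top-k sparsification; the layer names, shapes, gradient scales, and the 25% keep ratio are all illustrative assumptions.

import numpy as np

def topk_sparsify(grad, k):
    # Keep the k largest-magnitude entries of grad, zero out the rest.
    flat = grad.ravel()
    if k >= flat.size:
        return grad.copy()
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    out = np.zeros_like(flat)
    out[idx] = flat[idx]
    return out.reshape(grad.shape)

# Toy per-layer gradients with deliberately different magnitudes.
rng = np.random.default_rng(0)
grads = {"conv1": rng.normal(0.0, 1.0, 32),
         "fc1":   rng.normal(0.0, 0.01, 128)}
ratio = 0.25  # keep 25% of the entries

# Layer-wise: compress each layer's gradient independently, so every
# layer retains a share of its own entries regardless of scale.
layerwise = {name: topk_sparsify(g, max(1, int(ratio * g.size)))
             for name, g in grads.items()}

# Entire-model: concatenate all gradients and compress once; layers with
# small-magnitude gradients can be crowded out by larger ones.
names = list(grads)
concat = np.concatenate([grads[n].ravel() for n in names])
dense = topk_sparsify(concat, max(1, int(ratio * concat.size)))
entire, offset = {}, 0
for n in names:
    size = grads[n].size
    entire[n] = dense[offset:offset + size].reshape(grads[n].shape)
    offset += size

for n in names:
    print(n, "layer-wise kept:", np.count_nonzero(layerwise[n]),
          "| entire-model kept:", np.count_nonzero(entire[n]))

With these toy scales, entire-model top-k tends to keep far fewer entries from the small-magnitude fc1 layer than layer-wise compression does, which is one way the two schemes can diverge in practice.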


1997 ◽  
Vol 95 (1) ◽  
pp. 13 ◽  
Author(s):  
Martin Schütz ◽  
Roland Lindh

Author(s):  
Yu Wu ◽  
Qi Zhang ◽  
Zhiqiang Yu ◽  
Jianhui Li

XML plays a crucial role in web services, databases, and document representation and processing. However, the processing of XML documents has been regarded as a major performance bottleneck, especially for very large XML data. At the same time, multi-core processors are becoming increasingly common in both desktop computers and server machines. To take full advantage of multiple cores, we present a novel hybrid parallel XML processing model that combines data-parallel and pipeline processing. It first partitions the XML document into chunks to perform data-parallel XML parsing and schema validation, and then organizes and executes these stages as a two-stage pipeline to exploit additional parallelism. Experimental results show that the hybrid parallel XML processing model delivers a substantial overall performance advantage on multi-core platforms.
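
As a rough illustration of the chunk-then-pipeline structure, the following Python sketch (our simplification, not the authors' implementation) splits a flat sequence of independent <record> elements into well-formed chunks, parses the chunks in parallel in a first stage, and validates them in a second pipelined stage; the chunking scheme and the validate check are illustrative assumptions, and chunking arbitrary XML is considerably harder because such documents cannot be split at arbitrary byte boundaries.

import queue
import threading
import xml.etree.ElementTree as ET
from concurrent.futures import ThreadPoolExecutor

# Build a toy document as a flat list of independent records, then split
# it into well-formed chunks (the assumption that makes chunking safe).
records = ["<record><id>%d</id></record>" % i for i in range(100)]
chunks = ["<chunk>%s</chunk>" % "".join(records[i:i + 25])
          for i in range(0, len(records), 25)]

parsed_q = queue.Queue()

def parse_chunk(chunk_xml):
    # Stage 1 (data-parallel): parse one chunk, hand the tree to stage 2.
    parsed_q.put(ET.fromstring(chunk_xml))

def validate(tree):
    # Stage 2 stand-in for schema validation: every record needs an <id>.
    return all(rec.find("id") is not None for rec in tree.iter("record"))

def validator():
    # Stage 2 consumer: runs concurrently with stage 1, forming the pipeline.
    while True:
        tree = parsed_q.get()
        if tree is None:  # sentinel: parsing stage is done
            break
        print("chunk valid:", validate(tree))

consumer = threading.Thread(target=validator)
consumer.start()
with ThreadPoolExecutor(max_workers=4) as pool:
    list(pool.map(parse_chunk, chunks))  # parse chunks in parallel
parsed_q.put(None)
consumer.join()

Note that CPython threads will not speed up CPU-bound parsing because of the GIL; a production version would use processes or a parser that releases the GIL, but the two-stage staging structure would be the same.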

