Handling Disturbance and Awareness of Concurrent Updates in a Collaborative Editor

Author(s): Weihai Yu ◽ Gérald Oster ◽ Claudia-Lavinia Ignat
Author(s): Vincenzo Auletta ◽ Diodato Ferraioli ◽ Francesco Pasquale ◽ Paolo Penna ◽ Giuseppe Persiano

2020 ◽ Vol 13 (12) ◽ pp. 3195-3203
Author(s): Raghavendra Thallam Kodandaramaih ◽ Hanuma Kodavalla ◽ Girish Mittur Venkataramanappa

2015 ◽ Vol 50 (8) ◽ pp. 21-30
Author(s): Maya Arbel ◽ Adam Morrison

2017 ◽ Vol 5 (2) ◽ pp. 216-236
Author(s): Jie Jiang ◽ Lele Yu ◽ Jiawei Jiang ◽ Yuhong Liu ◽ Bin Cui

Abstract Machine Learning (ML) techniques are now ubiquitous tools for extracting structural information from data collections. With growing data volumes, large-scale ML applications require efficient implementations to achieve acceptable performance. Existing systems parallelize algorithms through either data parallelism or model parallelism, but data parallelism suffers poor statistical efficiency because of conflicting parameter updates, while model-parallel methods lose hardware efficiency to global barriers. In this paper, we propose a new system, named Angel, to facilitate the development of large-scale ML applications in production environments. By allowing concurrent updates to the model across different groups while scheduling the updates within each group, Angel achieves a good balance between hardware efficiency and statistical efficiency. In addition, Angel reduces network latency by overlapping parameter pulling with update computation, and it exploits the sparseness of data to avoid pulling unnecessary parameters. We further enhance Angel's usability with a set of efficient tools for integrating with application pipelines, and we provision efficient fault-tolerance mechanisms. Extensive experiments demonstrate the superiority of Angel.
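To make two of the abstract's optimizations concrete, here is a minimal single-process sketch: a worker pulls only the parameters its sparse mini-batch actually touches, and the pull for the next batch runs in the background while the current batch's gradients are computed. This is not Angel's actual API; ParameterServer, keys_of, compute_grads, and train are hypothetical names introduced here for illustration, and the in-process lock stands in for the real system's networked parameter servers.

```python
import threading
from collections import defaultdict

class ParameterServer:
    """Toy in-process key-value store standing in for a parameter server."""
    def __init__(self):
        self._params = defaultdict(float)
        self._lock = threading.Lock()

    def pull(self, keys):
        # Sparsity-aware pull: only the requested keys are returned,
        # never the full model.
        with self._lock:
            return {k: self._params[k] for k in keys}

    def push(self, grads, lr=0.1):
        # Updates from different worker groups may arrive concurrently;
        # no global barrier is taken across groups in this sketch.
        with self._lock:
            for k, g in grads.items():
                self._params[k] -= lr * g

def keys_of(batch):
    """Feature ids appearing in a sparse mini-batch of (features, label) pairs."""
    return {k for features, _ in batch for k, _ in features}

def compute_grads(batch, params):
    """Squared-error gradient of a sparse linear model (a stand-in workload)."""
    grads = defaultdict(float)
    for features, label in batch:
        err = sum(v * params.get(k, 0.0) for k, v in features) - label
        for k, v in features:
            grads[k] += err * v / len(batch)
    return dict(grads)

def train(ps, batches, lr=0.1):
    params = ps.pull(keys_of(batches[0]))  # synchronous pull for the first batch
    for i, batch in enumerate(batches):
        prefetched, t = {}, None
        if i + 1 < len(batches):
            # Start pulling the next batch's parameters in the background so
            # the transfer overlaps with this batch's computation.
            nxt = keys_of(batches[i + 1])
            t = threading.Thread(target=lambda: prefetched.update(ps.pull(nxt)))
            t.start()
        ps.push(compute_grads(batch, params), lr)  # compute while the pull is in flight
        if t is not None:
            t.join()
            params = prefetched

if __name__ == "__main__":
    ps = ParameterServer()
    # Each example is ([(feature_id, value), ...], label); batches are sparse.
    batches = [
        [([(0, 1.0), (3, 2.0)], 1.0)],
        [([(1, 1.0), (3, 1.0)], 0.0)],
    ]
    train(ps, batches)
```

In this toy everything runs in one process, so the overlap buys nothing; the point is the control flow. In the real setting the pull is a network transfer, so hiding it behind gradient computation, and shrinking it to only the keys a sparse batch needs, is where the latency savings the abstract claims would come from.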

