Author(s):  
Qi Xia ◽  
Zeyi Tao ◽  
Zijiang Hao ◽  
Qun Li

Training a large-scale deep neural network on a single machine becomes increasingly difficult as models grow more complex. Distributed training offers an efficient alternative, but the participating workers may be subject to Byzantine attacks: they may be compromised or suffer hardware failures. If such workers upload poisonous gradients, training becomes unstable or may even converge to a saddle point. In this paper, we propose FABA, a Fast Aggregation algorithm against Byzantine Attacks, which removes outliers from the uploaded gradients and produces an aggregate that is close to the true gradient. We show the convergence of our algorithm. Experiments demonstrate that it achieves performance similar to the non-Byzantine case and higher efficiency than previous algorithms.
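The aggregation idea described in the abstract can be sketched in a few lines. The snippet below is a hypothetical illustration of outlier-filtering aggregation, not the paper's exact FABA procedure: it assumes the server knows an upper bound num_byzantine on the number of faulty workers, repeatedly discards the uploaded gradient farthest from the mean of the remaining ones, and averages what is left.

```python
import numpy as np

def robust_aggregate(gradients, num_byzantine):
    """Outlier-filtering aggregation (illustrative sketch, not the paper's exact routine).

    gradients     : list of 1-D numpy arrays, one uploaded gradient per worker
    num_byzantine : assumed upper bound on the number of Byzantine workers
    """
    remaining = [np.asarray(g, dtype=float) for g in gradients]
    for _ in range(num_byzantine):
        mean = np.mean(remaining, axis=0)
        # Distance of each remaining gradient from the current mean.
        dists = [np.linalg.norm(g - mean) for g in remaining]
        # Drop the gradient that deviates most; poisoned updates tend to be outliers.
        remaining.pop(int(np.argmax(dists)))
    # Average the surviving gradients as the aggregate used for the model update.
    return np.mean(remaining, axis=0)

# Toy usage: 8 honest workers near the true gradient, 2 poisoned uploads.
rng = np.random.default_rng(0)
true_grad = np.ones(4)
honest = [true_grad + 0.01 * rng.standard_normal(4) for _ in range(8)]
poisoned = [10.0 * rng.standard_normal(4) for _ in range(2)]
print(robust_aggregate(honest + poisoned, num_byzantine=2))
```

With two gradients discarded, the aggregate stays close to the honest workers' average even though the poisoned uploads are far larger in norm.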

