An Efficient First-Order Scheme for Reliability Based Optimization of Stochastic Systems

Author(s): Hector A. Jensen, Gerhart I. Schuëller, Marcos A. Valdebenito, Danilo S. Kusanovic
2013, Vol 194 (3), pp. 1473-1485
Author(s): Guihua Long, Yubo Zhao, Jun Zou

1982, Vol 10 (3-4), pp. 283-294
Author(s): A. Hadjidimos, A. Yeyios

Author(s): Sathya N. Ravi, Tuan Dinh, Vishnu Suresh Lokhande, Vikas Singh

A number of results have recently demonstrated the benefits of incorporating various constraints when training deep architectures in vision and machine learning. The advantages range from guarantees on statistical generalization to better accuracy to compression. But support for general constraints within widely used libraries remains scarce, and their broader deployment in the many applications that could benefit from them remains under-explored. Part of the reason is that stochastic gradient descent (SGD), the workhorse for training deep neural networks, does not natively handle constraints with global scope very well. In this paper, we revisit a classical first-order scheme from numerical optimization, Conditional Gradients (CG), which has thus far had limited applicability in training deep models. We show via rigorous analysis how various constraints can be naturally handled by modifications of this algorithm. We provide convergence guarantees and show a suite of immediate benefits: training ResNets with fewer layers but better accuracy simply by substituting in our version of CG; faster training of GANs, with 50% fewer epochs, in image inpainting applications; and provably better generalization guarantees using efficiently implementable forms of recently proposed regularizers.
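To make the constrained-training idea concrete, the sketch below illustrates the generic conditional gradient (Frank-Wolfe) template the abstract refers to, not the authors' specific algorithm or implementation. It assumes a simple L1-ball constraint, a hand-coded linear minimization oracle (the hypothetical helper lmo_l1_ball), the classical 2/(k+2) step size, and illustrative names such as conditional_gradient, grad_fn, and tau.

```python
# Minimal sketch of a conditional gradient (Frank-Wolfe) loop for
# minimizing a smooth loss over the L1 ball {x : ||x||_1 <= tau}.
# Illustrative only; not the paper's implementation.
import numpy as np

def lmo_l1_ball(grad, tau):
    """Linear minimization oracle: argmin_{||s||_1 <= tau} <grad, s>.

    For the L1 ball the minimizer is a signed, scaled coordinate vector
    aligned against the largest-magnitude gradient entry.
    """
    s = np.zeros_like(grad)
    i = np.argmax(np.abs(grad))
    s[i] = -tau * np.sign(grad[i])
    return s

def conditional_gradient(grad_fn, x0, tau, num_iters=200):
    """Frank-Wolfe iterations with the classical 2/(k+2) step size."""
    x = x0.copy()
    for k in range(num_iters):
        g = grad_fn(x)                      # gradient at the current iterate
        s = lmo_l1_ball(g, tau)             # feasible descent vertex from the LMO
        gamma = 2.0 / (k + 2.0)             # diminishing step size
        x = (1.0 - gamma) * x + gamma * s   # convex combination stays in the ball
    return x

# Toy usage: least squares with an L1-norm budget on the weights.
rng = np.random.default_rng(0)
A, b = rng.normal(size=(50, 20)), rng.normal(size=50)
grad_fn = lambda x: A.T @ (A @ x - b)
x_hat = conditional_gradient(grad_fn, np.zeros(20), tau=5.0)
print(np.abs(x_hat).sum())  # remains <= 5.0 by construction
```

The point the example is meant to convey is that feasibility is maintained by the linear minimization oracle and the convex-combination update, with no projection step, which is what makes this family of methods attractive for imposing global constraints during training.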

