Accelerated Incremental Gradient Descent using Momentum Acceleration with Scaling Factor

Author(s): Yuanyuan Liu, Fanhua Shang, Licheng Jiao

Recently, research on variance-reduced incremental gradient descent methods (e.g., SAGA) has made exciting progress, such as linear convergence for strongly convex (SC) problems. However, existing accelerated methods (e.g., point-SAGA) suffer from drawbacks such as inflexibility. In this paper, we design a novel and simple momentum scheme to accelerate the classical SAGA algorithm, and propose a direct accelerated incremental gradient descent algorithm. In particular, our theoretical results show that our algorithm attains the best-known oracle complexity for strongly convex problems and an improved convergence rate for the case $n \geq L/\mu$. We also present experimental results that corroborate our theory and demonstrate the effectiveness of our algorithm.
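The abstract does not spell out the paper's momentum-with-scaling-factor update, so the sketch below only illustrates the classical SAGA estimator with a generic heavy-ball momentum term added; `grad_i`, `lr`, and `beta` are illustrative placeholders, not the authors' exact rule.

```python
import numpy as np

def saga_momentum(grad_i, w0, n, lr=0.1, beta=0.5, epochs=50):
    """Minimal SAGA with a generic heavy-ball momentum term.

    grad_i(w, i) returns the gradient of the i-th component f_i at w.
    The coefficient `beta` stands in for the paper's momentum with
    scaling factor, whose exact form is not given in the abstract.
    """
    w = w0.copy()
    table = np.array([grad_i(w, i) for i in range(n)])  # stored gradients
    avg = table.mean(axis=0)                            # their running average
    v = np.zeros_like(w)                                # momentum buffer
    for _ in range(epochs * n):
        j = np.random.randint(n)
        g_new = grad_i(w, j)
        # SAGA variance-reduced gradient estimate
        g = g_new - table[j] + avg
        # heavy-ball momentum step (illustrative, not the paper's rule)
        v = beta * v - lr * g
        w = w + v
        # refresh the stored gradient and its average
        avg += (g_new - table[j]) / n
        table[j] = g_new
    return w
```

For a least-squares problem, for instance, `grad_i(w, i)` would return `X[i] * (X[i] @ w - y[i])` for data matrix `X` and targets `y`.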

Author(s): Jiaqi Zhang, Kai Zheng, Wenlong Mou, Liwei Wang

In this paper, we consider efficient differentially private empirical risk minimization from the viewpoint of optimization algorithms. For strongly convex and smooth objectives, we prove that gradient descent with output perturbation not only achieves nearly optimal utility but also significantly improves on the running time of previous state-of-the-art private optimization algorithms, for both $\epsilon$-DP and $(\epsilon, \delta)$-DP. For non-convex but smooth objectives, we propose an RRPSGD (Random Round Private Stochastic Gradient Descent) algorithm, which provably converges to a stationary point with a privacy guarantee. Besides the expected utility bounds, we also provide guarantees in high-probability form. Experiments demonstrate that our algorithm consistently outperforms existing methods in both utility and running time.
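As a concrete illustration of output perturbation in the strongly convex case, here is a minimal sketch: run plain gradient descent, then add Gaussian noise calibrated to the $\ell_2$-sensitivity of the minimizer. The constants follow the textbook Gaussian mechanism, not necessarily the paper's exact calibration; `grad`, `lip`, and `lam` are assumed inputs.

```python
import numpy as np

def dp_erm_output_perturbation(grad, w0, n, lip, lam,
                               eps=1.0, delta=1e-5,
                               lr=0.1, iters=500):
    """Output perturbation for (eps, delta)-DP ERM (illustrative sketch).

    grad(w) is the gradient of a lam-strongly convex, smooth empirical
    risk over n examples whose per-example loss is lip-Lipschitz.
    """
    w = w0.copy()
    for _ in range(iters):          # plain (non-private) gradient descent
        w -= lr * grad(w)
    # L2 sensitivity of the exact ERM minimizer is 2*lip / (n*lam);
    # the finite GD run above is treated as approximating that minimizer
    sensitivity = 2.0 * lip / (n * lam)
    sigma = sensitivity * np.sqrt(2.0 * np.log(1.25 / delta)) / eps
    return w + np.random.normal(0.0, sigma, size=w.shape)
```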


Author(s): Marco Mele, Cosimo Magazzino, Nicolas Schneider, Floriana Nicolai

Although the literature on the relationship between economic growth and CO2 emissions is extensive, the use of machine learning (ML) tools remains in its infancy. In this paper, we assess this nexus for Italy using innovative algorithms, with yearly data for the 1960–2017 period. We develop three distinct models: batch gradient descent (BGD), stochastic gradient descent (SGD), and a multilayer perceptron (MLP). Despite the phase of low Italian economic growth, the results reveal that CO2 emissions increased in the predictive model. Compared to the observed statistical data, the algorithm shows a correlation between low growth and a higher CO2 increase, which contradicts the main strand of the literature. Based on this outcome, adequate policy recommendations are provided.
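For readers unfamiliar with the two optimizers compared above, a toy sketch of the batch and stochastic update rules on a generic linear model follows; the arrays `X` and `y` are placeholders, not the Italian growth and CO2 series used in the paper.

```python
import numpy as np

def batch_gd(X, y, lr=0.01, epochs=100):
    """Batch gradient descent: one update per pass over all data."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        g = X.T @ (X @ w - y) / len(y)     # gradient over the full sample
        w -= lr * g
    return w

def stochastic_gd(X, y, lr=0.01, epochs=100):
    """Stochastic gradient descent: one update per observation."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for i in np.random.permutation(len(y)):
            g = (X[i] @ w - y[i]) * X[i]   # single-sample gradient
            w -= lr * g
    return w
```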


Photonics, 2021, Vol. 8 (5), pp. 165
Author(s): Shiqing Ma, Ping Yang, Boheng Lai, Chunxuan Su, Wang Zhao, ...

For a high-power slab solid-state laser, high output power and high output beam quality are the most important performance indicators. Adaptive optics systems can significantly improve beam quality by compensating for the phase distortions of the laser beams. In this paper, we developed an improved algorithm, called Adaptive Gradient Estimation Stochastic Parallel Gradient Descent (AGESPGD), for beam cleanup of a solid-state laser. A second-order gradient of the search point was introduced to refine the gradient estimation, and this was combined with an adaptive gain coefficient within the classical Stochastic Parallel Gradient Descent (SPGD) algorithm. The improved algorithm accelerates convergence and prevents the search from falling into a local extremum. Simulation and experimental results show that this method reduces the number of iterations by 40%, and that algorithm stability is also improved compared with the original SPGD method.
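A minimal sketch of the classical SPGD loop that AGESPGD builds on may clarify the mechanism: perturb all actuator channels in parallel, difference the beam-quality metric, and step along the estimated gradient. The second-order gradient correction and adaptive gain of AGESPGD are not reproduced here; `metric`, `gain`, and `sigma` are illustrative.

```python
import numpy as np

def spgd(metric, u0, gain=0.5, sigma=0.1, iters=200):
    """Classical SPGD for wavefront correction (illustrative sketch).

    metric(u) returns the beam-quality metric for control voltages u;
    the loop performs stochastic parallel gradient ascent on it.
    """
    u = u0.copy()
    for _ in range(iters):
        # random bipolar perturbation applied to all channels in parallel
        delta = sigma * np.random.choice([-1.0, 1.0], size=u.shape)
        dJ = metric(u + delta) - metric(u - delta)
        # step along the stochastic gradient estimate of the metric
        u += gain * dJ * delta
    return u
```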

