global linear convergence
Recently Published Documents

TOTAL DOCUMENTS: 13 (five years: 4)
H-INDEX: 5 (five years: 1)

2020 · Vol 2020 · pp. 1-7
Author(s): Jing-Mei Feng, San-Yang Liu

In this paper, we transform the problem of solving the absolute value equation (AVE) Ax − |x| = b, where the singular values of A exceed 1, into that of finding a root of a system of nonlinear equations, and we propose a three-step algorithm for solving this system. The proposed method enjoys global linear convergence and local quadratic convergence. Numerical examples show that the algorithm solves such systems of nonlinear equations with high accuracy and fast convergence.
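The abstract does not spell out the three-step algorithm, so purely as an illustration of the problem setting, the sketch below applies the classical generalized Newton iteration for the AVE (a different, well-known method, not the authors' scheme) to a random instance whose singular values all exceed 1, the regime in which the AVE has a unique solution; the matrix construction, dimensions, and tolerance are arbitrary choices.

```python
import numpy as np

def generalized_newton_ave(A, b, tol=1e-10, max_iter=100):
    """Classical generalized Newton iteration for the AVE  Ax - |x| = b:
    x_{k+1} = (A - D(x_k))^{-1} b  with  D(x) = diag(sign(x)).
    A - D is nonsingular whenever all singular values of A exceed 1."""
    x = np.linalg.solve(A, b)                     # start from the solution of Ax = b
    for _ in range(max_iter):
        x_new = np.linalg.solve(A - np.diag(np.sign(x)), b)
        if np.linalg.norm(A @ x_new - np.abs(x_new) - b) < tol:
            return x_new
        x = x_new
    return x

# Random test matrix with all singular values strictly greater than 1.
rng = np.random.default_rng(0)
n = 50
U, _ = np.linalg.qr(rng.standard_normal((n, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = U @ np.diag(1.0 + rng.uniform(0.5, 2.0, size=n)) @ V.T
x_true = rng.standard_normal(n)
b = A @ x_true - np.abs(x_true)                   # build an AVE with known solution
print(np.linalg.norm(generalized_newton_ave(A, b) - x_true))
```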


2019 · Vol 109 (4) · pp. 813-852
Author(s): Ching-pei Lee, Kai-Wei Chang

In recent years, there has been a growing need to train machine learning models on huge volumes of data, so designing efficient distributed optimization algorithms for empirical risk minimization (ERM) has become an active and challenging research topic. In this paper, we propose a flexible framework for distributed ERM training through solving the dual problem, which provides a unified description and comparison of existing methods. Our approach requires only approximate solutions of the sub-problems involved in the optimization process and is versatile enough to be applied to many large-scale machine learning problems, including classification, regression, and structured prediction. We show that our framework enjoys global linear convergence for a broad class of non-strongly convex problems, and a refined analysis shows that some specific choices of the sub-problems achieve much faster convergence than existing approaches. This improved convergence rate is also reflected in the superior empirical performance of our method.
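The framework itself is distributed and method-agnostic; as a single-process illustration of the dual viewpoint only, the sketch below runs dual coordinate ascent for an L2-regularized hinge-loss (linear SVM) problem with the training examples partitioned into blocks that stand in for machines. The block count, regularization constant C, and number of rounds are arbitrary, and this is not the authors' block-diagonal approximation or communication scheme.

```python
import numpy as np

def blockwise_dual_coordinate_ascent(X, y, C=1.0, n_blocks=4, n_rounds=20, seed=0):
    """Dual coordinate ascent for the L2-regularized hinge-loss dual
    (0 <= alpha_i <= C), with examples split into blocks to mimic a
    per-machine partition; w = sum_i alpha_i * y_i * x_i is the shared primal vector."""
    n, d = X.shape
    alpha, w = np.zeros(n), np.zeros(d)
    sq_norms = np.einsum('ij,ij->i', X, X)
    blocks = np.array_split(np.random.default_rng(seed).permutation(n), n_blocks)
    for _ in range(n_rounds):
        for block in blocks:                       # sequential stand-in for parallel machines
            for i in block:
                grad = y[i] * (X[i] @ w) - 1.0     # partial derivative of the dual objective
                new_alpha = np.clip(alpha[i] - grad / sq_norms[i], 0.0, C)
                w += (new_alpha - alpha[i]) * y[i] * X[i]
                alpha[i] = new_alpha
    return w, alpha

# Synthetic linearly separable data with labels in {-1, +1}.
rng = np.random.default_rng(1)
X = rng.standard_normal((500, 20))
y = np.sign(X @ rng.standard_normal(20))
w, _ = blockwise_dual_coordinate_ascent(X, y)
print("training accuracy:", np.mean(np.sign(X @ w) == y))
```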


2018 · Vol 30 (09) · pp. 1850014
Author(s): Rajendra Bhatia, Tanvi Jain, Yongdo Lim

We present several theorems on strict and strong convexity, as well as higher-order differential formulae, for the sandwiched quasi-relative entropy (a parametrized version of the classical fidelity). These are crucial for establishing global linear convergence of the gradient projection algorithm for optimization problems involving these functions. The case of the classical fidelity is of special interest for the multimarginal optimal transport problem (the n-coupling problem) for Gaussian measures.
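For concreteness, the classical fidelity mentioned in the abstract can be evaluated numerically as F(A, B) = tr((A^{1/2} B A^{1/2})^{1/2}) for positive definite matrices; the small Python check below does only that (the sandwiched quasi-relative entropy and the gradient projection algorithm are not implemented here), and the test matrices are arbitrary.

```python
import numpy as np
from scipy.linalg import sqrtm

def fidelity(A, B):
    """Classical fidelity of positive definite matrices:
    F(A, B) = tr((A^{1/2} B A^{1/2})^{1/2})."""
    root_A = sqrtm(A)
    return np.trace(sqrtm(root_A @ B @ root_A)).real

def random_spd(n, rng):
    """Random symmetric positive definite test matrix."""
    M = rng.standard_normal((n, n))
    return M @ M.T + n * np.eye(n)

rng = np.random.default_rng(0)
A, B = random_spd(4, rng), random_spd(4, rng)
print(fidelity(A, B), fidelity(B, A))       # fidelity is symmetric in its arguments
print(fidelity(A, A), np.trace(A))          # F(A, A) = tr(A)
```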


2015 · Vol 25 (3) · pp. 1478-1497
Author(s): Tianyi Lin, Shiqian Ma, Shuzhong Zhang
