An algorithmic theory of learning: robust concepts and random projection

Author(s):  
R.I. Arriaga ◽  
S. Vempala
Author(s):  
Morteza Heidari ◽  
Sivaramakrishnan Lakshmivarahan ◽  
Seyedehnafiseh Mirniaharikandehei ◽  
Gopichandh Danala ◽  
Sai Kiran R. Maryada ◽  
...  

2012 ◽  
Vol 23 (7-8) ◽  
pp. 2281-2293
Author(s):  
Zhenwei Shi ◽  
Liu Liu ◽  
Xinya Zhai ◽  
Zhiguo Jiang

2017 ◽  
Vol 2017 ◽  
pp. 1-13
Author(s):  
Junlong Zhu ◽  
Ping Xie ◽  
Qingtao Wu ◽  
Mingchuan Zhang ◽  
Ruijuan Zheng ◽  
...  

We consider a distributed constrained optimization problem over a time-varying network, where each agent knows only its own cost function and its own constraint set. In some applications, however, the local constraint set may not be known in advance, or may consist of a huge number of components. To handle such cases, we propose a distributed stochastic subgradient algorithm over time-varying networks, in which each agent projects its estimate onto its constraint set using a random projection technique, and information exchange between agents is implemented via an asynchronous broadcast communication protocol. We show that the proposed algorithm converges with probability 1 under a suitably chosen learning rate. For a constant learning rate, we obtain an error bound, defined as the expected distance between each agent's estimate and the optimal solution. We also establish an asymptotic upper bound on the gap between the global objective function value at the average of the estimates and the optimal value.
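A minimal sketch of the random-projection subgradient idea described above. This is not the authors' implementation: the quadratic local costs, the halfspace constraint components, and the idealized full-averaging communication step are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative setup (an assumption, not from the paper): 3 agents with
# quadratic local costs f_i(x) = ||x - c_i||^2 and a shared constraint
# set given as the intersection of two halfspaces a_j . x <= b_j.
centers = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
halfspaces = [(np.array([1.0, 1.0]), 1.0),   # x + y <= 1 (active)
              (np.array([1.0, 0.0]), 2.0)]   # x <= 2 (inactive)

def project_halfspace(x, a, b):
    """Euclidean projection of x onto {y : a . y <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - (viol / (a @ a)) * a

n_agents = len(centers)
x = np.zeros((n_agents, 2))          # local estimates

for k in range(3000):
    # Communication step, idealized here as full averaging; the paper
    # instead uses asynchronous broadcasts over a time-varying graph.
    x[:] = x.mean(axis=0)
    alpha = 1.0 / (k + 2)            # diminishing learning rate
    for i in range(n_agents):
        g = 2.0 * (x[i] - centers[i])          # (sub)gradient of f_i
        y = x[i] - alpha * g                   # subgradient step
        # Random projection step: project onto ONE randomly chosen
        # component of the constraint set, not the full intersection.
        a, b = halfspaces[rng.integers(len(halfspaces))]
        x[i] = project_halfspace(y, a, b)

x_bar = x.mean(axis=0)
# The constrained minimizer of sum_i f_i is the projection of the
# centroid (2/3, 2/3) onto x + y <= 1, i.e. (0.5, 0.5).
```

The point of the random selection is that a full projection onto the intersection may be expensive or unavailable when the constraint set has many components; projecting onto a single randomly drawn component per iteration still drives the iterates to the constrained optimum under diminishing step sizes.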


Mathematics ◽  
2021 ◽  
Vol 9 (21) ◽  
pp. 2803
Author(s):  
Sudam Surasinghe ◽  
Erik Bollt

A data-driven analysis method known as dynamic mode decomposition (DMD) approximates the linear Koopman operator on a projected space. In the spirit of the Johnson–Lindenstrauss lemma, we use a random projection to estimate the DMD modes in a reduced-dimensional space. In practical applications, snapshots live in a high-dimensional observable space and the DMD operator matrix is massive, so computing DMD with the full spectrum is expensive; our main computational goal is therefore to estimate the eigenvalues and eigenvectors of the DMD operator in a projected domain. We generalize the current algorithm to estimate a projected DMD operator, focusing on a powerful and simple random projection algorithm that reduces both computational and storage costs. While a random projection clearly simplifies the algorithmic complexity of a detailed optimal projection, we show that the results can nonetheless be excellent, and their quality can be understood through the well-developed theory of random projections. We demonstrate that the modes can be calculated at low cost from the projected data, provided the projection dimension is sufficient.
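A minimal sketch of DMD on randomly projected snapshots, under illustrative assumptions (the dimensions, the rank-2 latent dynamics, and the Gaussian projection are ours, not the paper's): the dominant eigenvalues of the projected DMD operator match those of the underlying dynamics.

```python
import numpy as np

rng = np.random.default_rng(1)

# Illustrative data (an assumption, not from the paper): rank-2 linear
# dynamics with eigenvalues 0.9 and 0.5, lifted into a 200-dimensional
# observable space.
n, r, m = 200, 2, 20
Lam = np.diag([0.9, 0.5])                 # latent dynamics
Q = rng.standard_normal((n, r))           # lifting map
z = np.ones(r)
snaps = []
for _ in range(m + 1):
    snaps.append(Q @ z)
    z = Lam @ z
snaps = np.column_stack(snaps)            # n x (m+1) snapshot matrix
X, Y = snaps[:, :-1], snaps[:, 1:]        # snapshot pairs (Y is X shifted)

# Random Gaussian projection to k dimensions, in the spirit of the
# Johnson-Lindenstrauss lemma; k = 10 is far smaller than n = 200.
k = 10
P = rng.standard_normal((k, n)) / np.sqrt(k)
Xp, Yp = P @ X, P @ Y

# DMD operator estimated on the projected data via the pseudoinverse,
# so we never form the massive n x n operator Y pinv(X).
A_tilde = Yp @ np.linalg.pinv(Xp)
eigvals = np.linalg.eigvals(A_tilde)
top = sorted(eigvals, key=abs, reverse=True)[:r]
# The dominant projected eigenvalues recover the true ones, 0.9 and 0.5.
```

The cost saving is in the linear algebra: the pseudoinverse and eigendecomposition act on k x m and k x k matrices instead of n x m and n x n ones, while the noiseless low-rank structure here is preserved exactly by the projection with high probability.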

