Frequent Directions
Recently Published Documents


Total documents: 9 (last five years: 4)
H-index: 3 (last five years: 1)

2020
Author(s): Qianli Liao

We consider the task of matrix sketching: computing a significantly smaller representation of a matrix A that retains most of its information (in other words, approximates A well). In particular, we investigate a recent approach called Frequent Directions (FD), initially proposed by Liberty [5] in 2013, which has drawn wide attention due to its elegance, strong theoretical guarantees, and outstanding performance in practice. Two follow-up papers, [3] and [2] in 2014, further refined the theoretical bounds and improved the practical performance. In this report, we summarize the three papers and propose a Generalized Frequent Directions (GFD) algorithm for matrix sketching, which captures all previous FD algorithms as special cases without losing any of the theoretical bounds. Interestingly, our additive error bound appears to apply to iSVD, a well-performing heuristic that previously lacked guarantees.
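For concreteness, here is a minimal sketch of the basic FD update these papers build on (the simple shrink-once-per-row variant; the function name `frequent_directions` and the parameter `ell` are our own, and practical implementations batch the shrink step to amortize the SVD cost):

```python
import numpy as np

def frequent_directions(A, ell):
    """Stream the rows of A (n x d, assuming ell <= d) into an
    ell x d sketch B such that B^T B approximates A^T A."""
    _, d = A.shape
    B = np.zeros((ell, d))
    for row in A:
        # The last row of B is always zero after a shrink; fill it.
        B[-1] = row
        # Shrink: subtract the smallest squared singular value from all
        # squared singular values, which re-zeroes the last row of B.
        _, s, Vt = np.linalg.svd(B, full_matrices=False)
        delta = s[-1] ** 2
        B = np.sqrt(np.maximum(s**2 - delta, 0.0))[:, None] * Vt
    return B

# Usage: check the FD guarantee ||A^T A - B^T B||_2 <= ||A||_F^2 / ell.
rng = np.random.default_rng(0)
A = rng.standard_normal((1000, 50))
B = frequent_directions(A, ell=10)
err = np.linalg.norm(A.T @ A - B.T @ B, 2)
print(err, np.linalg.norm(A, "fro") ** 2 / 10)
```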


Author(s): Yuanyu Wan, Nan Wei, Lijun Zhang

By employing time-varying proximal functions, adaptive subgradient methods (ADAGRAD) have improved the regret bound and been widely used in online learning and optimization. However, ADAGRAD with full-matrix proximal functions (ADA-FULL) cannot handle large-scale problems due to its impractical time and space complexity, even though it performs better when gradients are correlated. In this paper, we propose ADA-FD, an efficient variant of ADA-FULL based on a deterministic matrix sketching technique called frequent directions. Following ADA-FULL, we incorporate ADA-FD into both the primal-dual subgradient method and the composite mirror descent method to develop two efficient methods. By maintaining and manipulating low-rank matrices, the space complexity at each iteration is reduced from $O(d^2)$ to $O(\tau d)$ and the time complexity from $O(d^3)$ to $O(\tau^2 d)$, where $d$ is the dimensionality of the data and $\tau \ll d$ is the sketch size. Theoretical analysis reveals that the regret of our methods is close to that of ADA-FULL as long as the outer product matrix of the gradients is approximately low-rank. Experimental results show that ADA-FD is comparable to ADA-FULL and outperforms other state-of-the-art algorithms in online convex optimization as well as in training convolutional neural networks (CNNs).
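To make the complexity claim concrete, the following is an illustrative single update in this spirit: an FD sketch $B$ of the past gradients stands in for ADA-FULL's full outer-product matrix, and the resulting preconditioner is inverted via its eigendecomposition. This is our own simplification under stated assumptions (the names `ada_fd_step`, `lr`, and `delta` are hypothetical), not the paper's exact primal-dual or mirror-descent algorithms:

```python
import numpy as np

def ada_fd_step(x, grad, B, lr=0.1, delta=1e-4):
    """One sketched adaptive step: B is a tau x d FD sketch whose
    Gram matrix B^T B approximates the sum of gradient outer products."""
    # 1. Fold the new gradient into the sketch (one FD shrink step).
    B[-1] = grad
    _, s, Vt = np.linalg.svd(B, full_matrices=False)
    s = np.sqrt(np.maximum(s**2 - s[-1] ** 2, 0.0))
    B = s[:, None] * Vt
    # 2. Preconditioner H = (B^T B)^{1/2} + delta*I = V diag(s) V^T + delta*I.
    #    Apply H^{-1} to grad in O(tau * d) using its eigendecomposition:
    #    H^{-1} g = g/delta + V ((1/(s+delta) - 1/delta) * (V^T g)).
    coeff = 1.0 / (s + delta) - 1.0 / delta
    step = grad / delta + Vt.T @ (coeff * (Vt @ grad))
    return x - lr * step, B
```

Since only the $\tau \times d$ sketch is stored and the dominant cost is one thin SVD of $B$ plus a few matrix-vector products, the per-step space is $O(\tau d)$ and the time $O(\tau^2 d)$, matching the abstract's claim.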


2017
Vol. 32 (2), pp. 453-482
Author(s): Deena P. Francis, Kumudha Raimond

2016
Vol. 45 (5), pp. 1762-1792
Author(s): Mina Ghashami, Edo Liberty, Jeff M. Phillips, David P. Woodruff

ReCALL, 2000
Vol. 12 (2), pp. 170-195
Author(s): Mike Levy

This paper considers the problem of coherence and direction in CALL research. Rather than suggesting a top-down approach to setting goals for research, it argues for a much closer examination of, and a much stronger emphasis on, existing CALL research as a platform for directing and informing future CALL work. Based on a corpus of 47 CALL research articles published in books and journals in 1999, it sets out a framework for the description and analysis of CALL research as it is represented in the literature. Two major directions and three important, though less frequent, directions are described in detail, using examples from the corpus, and the implications for future research are considered. Particular emphasis is placed on identifying the goals of CALL researchers and on clarifying the unique attributes of research in this field.

