Quartic First-Order Methods for Low-Rank Minimization

Author(s):  
Radu-Alexandru Dragomir ◽  
Alexandre d’Aspremont ◽  
Jérôme Bolte
2019 ◽  
Vol 21 (3) ◽  
pp. 1195-1219
Author(s):  
Shiqian Ma ◽  
Fei Wang ◽  
Linchuan Wei ◽  
Henry Wolkowicz

Abstract: We introduce a novel approach for robust principal component analysis (RPCA) for a partially observed data matrix. The aim is to recover the data matrix as a sum of a low-rank matrix and a sparse matrix so as to eliminate erratic noise (outliers). This problem is known to be NP-hard in general. A classical approach to solving RPCA is to consider convex relaxations. One such heuristic involves the minimization of the (weighted) sum of a nuclear norm part, which promotes a low-rank component, and an ℓ1 norm part, which promotes a sparse component. This results in a well-structured convex problem that can be efficiently solved by modern first-order methods. However, first-order methods often yield low-accuracy solutions. Moreover, the heuristic of using a norm consisting of a weighted sum of norms may lose some of the advantages each norm has when used separately. In this paper, we propose a novel nonconvex and nonsmooth reformulation of the original NP-hard RPCA model. The new model adds a redundant semidefinite cone constraint and solves small subproblems using a PALM algorithm. Each subproblem yields an exposing vector for a facial reduction technique that is able to reduce the problem size significantly. This makes the problem amenable to efficient algorithms that attain high accuracy. We include numerical results that confirm the efficacy of our approach.
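The convex relaxation described in the abstract (nuclear norm plus ℓ1 norm) can be sketched with a simple alternating proximal scheme. This is a minimal illustration of the classical convex heuristic the abstract contrasts itself against, not the paper's PALM/facial-reduction method; the penalty weights and iteration count below are illustrative assumptions.

```python
import numpy as np

def svt(X, tau):
    # Singular value thresholding: proximal operator of tau * nuclear norm
    U, s, Vt = np.linalg.svd(X, full_matrices=False)
    return U @ np.diag(np.maximum(s - tau, 0.0)) @ Vt

def soft(X, lam):
    # Soft thresholding: proximal operator of lam * l1 norm
    return np.sign(X) * np.maximum(np.abs(X) - lam, 0.0)

def rpca_convex(M, tau=1.0, lam=0.1, iters=200):
    """Block-coordinate minimization of the convex surrogate
    0.5*||M - L - S||_F^2 + tau*||L||_* + lam*||S||_1,
    alternating exact proximal steps in L and S."""
    L = np.zeros_like(M)
    S = np.zeros_like(M)
    for _ in range(iters):
        L = svt(M - S, tau)   # low-rank update
        S = soft(M - L, lam)  # sparse update
    return L, S
```

Each block update solves its subproblem exactly, so the objective is monotonically non-increasing; this is the kind of first-order scheme whose limited final accuracy motivates the paper's reformulation.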


Author(s):  
Jacob Stegenga

Medical scientists employ ‘quality assessment tools’ to assess evidence from medical research, especially from randomized trials. These tools are designed to take into account methodological details of studies, including randomization, subject allocation concealment, and other features of studies deemed relevant to minimizing bias. There are dozens of such tools available. They differ widely from each other, and empirical studies show that they have low inter-rater reliability and low inter-tool reliability. This is an instance of a more general problem called here the underdetermination of evidential significance. Disagreements about the quality of evidence can be due to different—but in principle equally good—weightings of the methodological features that constitute quality assessment tools. Thus, the malleability of empirical research in medicine is deep: in addition to the malleability of first-order empirical methods, such as randomized trials, there is malleability in the tools used to evaluate first-order methods.
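The underdetermination point can be made concrete with a toy numerical example. The feature scores and weightings below are invented purely for illustration: two equally defensible weightings of the same methodological features reverse the quality ranking of two studies.

```python
import numpy as np

# Hypothetical quality scores (0-1) for two studies on three features:
# [randomization, allocation concealment, blinding]
study_A = np.array([1.0, 0.4, 0.9])
study_B = np.array([0.6, 1.0, 0.5])

# Two equally plausible weightings of the same features
tool_1 = np.array([0.5, 0.2, 0.3])  # emphasizes randomization
tool_2 = np.array([0.2, 0.6, 0.2])  # emphasizes allocation concealment

# Tool 1 ranks study A higher; tool 2 reverses the ranking
print(study_A @ tool_1, study_B @ tool_1)  # 0.85 vs 0.65: A wins
print(study_A @ tool_2, study_B @ tool_2)  # 0.62 vs 0.82: B wins
```

Neither weighting is demonstrably wrong, which is exactly the low inter-tool reliability the passage describes.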


2020 ◽  
Vol 108 (11) ◽  
pp. 1869-1889
Author(s):  
Ran Xin ◽  
Shi Pu ◽  
Angelia Nedic ◽  
Usman A. Khan

2013 ◽  
Vol 146 (1-2) ◽  
pp. 37-75 ◽  
Author(s):  
Olivier Devolder ◽  
François Glineur ◽  
Yurii Nesterov

Author(s):  
Vasily I. Repnikov ◽  
Boris V. Faleichik ◽  
Andrew V. Moisa

In this work we present explicit Adams-type multistep methods with extended stability intervals, which are analogous to the stabilised Chebyshev Runge–Kutta methods. It is proved that for any k ≥ 1 there exists an explicit k-step Adams-type method of order one with a stability interval of length 2k. The first-order methods have remarkably simple expressions for their coefficients and error constant. A damped modification of these methods is derived. In the general case, to construct a k-step method of order p it is necessary to solve a constrained optimisation problem in which the objective function and the p constraints are second-degree polynomials in k variables. We calculate higher-order methods up to order six numerically and perform numerical experiments to confirm the accuracy and stability of the methods.
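The stability interval of an explicit linear multistep method can be estimated numerically by scanning the negative real axis and checking the root condition of the characteristic polynomial π(ζ; z) = ρ(ζ) − z·σ(ζ). The sketch below is a generic illustration, not the paper's optimisation procedure; it uses the k = 1 case (forward Euler, the one-step explicit Adams method of order one), whose stability interval [−2, 0] indeed has length 2k.

```python
import numpy as np

def is_stable(rho, sigma, z, tol=1e-9):
    """Root condition for pi(zeta) = rho(zeta) - z*sigma(zeta):
    all roots must lie in the closed unit disk (checked with a
    small numerical tolerance)."""
    pi = np.polysub(rho, z * np.asarray(sigma, dtype=float))
    return np.all(np.abs(np.roots(pi)) <= 1.0 + tol)

def real_stability_boundary(rho, sigma, z_min=-50.0, steps=5000):
    """Scan h*lambda along the negative real axis and return the
    leftmost value found to be stable."""
    left = 0.0
    for z in np.linspace(0.0, z_min, steps):
        if not is_stable(rho, sigma, z):
            break
        left = z
    return left

# Forward Euler: rho(zeta) = zeta - 1, sigma(zeta) = 1
# (coefficients in numpy's highest-degree-first convention)
rho = [1.0, -1.0]
sigma = [1.0]
print(real_stability_boundary(rho, sigma))  # close to -2, i.e. length 2k for k = 1
```

The same scan applies unchanged to any (ρ, σ) pair, so it can be used to verify the extended intervals of higher-step methods once their coefficients are known.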


2020 ◽  
Vol 34 (05) ◽  
pp. 8204-8211
Author(s):  
Jian Li ◽  
Xing Wang ◽  
Baosong Yang ◽  
Shuming Shi ◽  
Michael R. Lyu ◽  
...  

Recent NLP studies reveal that substantial linguistic information can be attributed to single neurons, i.e., individual dimensions of the representation vectors. We hypothesize that modeling strong interactions among neurons helps to better capture complex information by composing the linguistic properties embedded in individual neurons. Starting from this intuition, we propose a novel approach to compose representations learned by different components in neural machine translation (e.g., multi-layer networks or multi-head attention), based on modeling strong interactions among neurons in the representation vectors. Specifically, we leverage bilinear pooling to model pairwise multiplicative interactions among individual neurons, and a low-rank approximation to make the model computationally feasible. We further propose extended bilinear pooling to incorporate first-order representations. Experiments on WMT14 English⇒German and English⇒French translation tasks show that our model consistently improves performance over the SOTA Transformer baseline. Further analyses demonstrate that our approach indeed captures more syntactic and semantic information as expected.
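A minimal numpy sketch of the low-rank extended bilinear pooling idea follows. The dimensions, the tanh nonlinearity, and the random matrices standing in for learned parameters are illustrative assumptions, not the paper's trained model; the point is the structure: appending a 1 to each input makes the implicit outer product [x;1][y;1]ᵀ contain the first-order terms x and y alongside the pairwise interactions.

```python
import numpy as np

rng = np.random.default_rng(0)
d, r, out = 8, 16, 4  # input dim, low rank, output dim (illustrative sizes)

# Random projections standing in for learned parameters
U = rng.normal(size=(d + 1, r))  # the extra row handles the appended 1
V = rng.normal(size=(d + 1, r))
P = rng.normal(size=(r, out))

def extended_bilinear_pool(x, y):
    """Low-rank bilinear pooling with appended 1s: the elementwise
    product of the two projections approximates the full d^2-sized
    bilinear interaction with only O(d*r) parameters, while the
    appended 1s inject the first-order terms."""
    x1 = np.append(x, 1.0)
    y1 = np.append(y, 1.0)
    h = np.tanh(U.T @ x1) * np.tanh(V.T @ y1)  # rank-r interaction features
    return P.T @ h                              # project to output dim

x = rng.normal(size=d)
y = rng.normal(size=d)
print(extended_bilinear_pool(x, y).shape)  # (4,)
```

In the paper's setting x and y would be representations from different layers or attention heads, and U, V, P would be learned end to end.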


Author(s):  
Pavel Dvurechensky ◽  
Shimrit Shtern ◽  
Mathias Staudigl
