Iterative algorithm for the symmetric and nonnegative tensor completion problem

2018 ◽  
Vol 67 (8) ◽  
pp. 1579-1595
Author(s):  
Xuefeng Duan ◽  
Jianheng Chen ◽  
Chunmei Li ◽  
Qingwen Wang
2021 ◽  
Author(s):  
Vasanth S. Murali ◽  
Didem Ağaç Çobanoğlu ◽  
Michael Hsieh ◽  
Meyer Zinn ◽  
Venkat S. Malladi ◽  
...  

Abstract: The heterogeneity of cancer necessitates developing a multitude of targeted therapies. We propose the view that cancer drug discovery is a low-rank tensor completion problem. We implement this vision by using heterogeneous public data to construct a tensor of drug-target-disease associations. We show the validity of this approach computationally by simulations, and experimentally by testing drug candidates. Specifically, we show that a novel drug candidate, SU11652, controls melanoma tumor growth, including BRAF^WT melanoma. Independently, we show that another molecule, TC-E 5008, controls tumor proliferation in ex vivo ER+ human breast cancer. Most importantly, we identify these chemicals with only a few computationally selected experiments as opposed to brute-force screens. The efficiency of our approach enables the use of ex vivo human tumor assays as a primary screening tool. We provide a web server, the Cancer Vulnerability Explorer (accessible at https://cavu.biohpc.swmed.edu), to facilitate the use of our methodology.


2021 ◽  
Vol 7 (7) ◽  
pp. 110
Author(s):  
Zehan Chao ◽  
Longxiu Huang ◽  
Deanna Needell

Matrix completion, the problem of completing missing entries in a data matrix with low-dimensional structure (such as low rank), has seen many fruitful approaches and analyses. Tensor completion is the tensor analog, which attempts to impute missing tensor entries under similar low-rank assumptions. In this paper, we study the tensor completion problem when the sampling pattern is deterministic and possibly non-uniform. We first propose an efficient weighted Higher Order Singular Value Decomposition (HOSVD) algorithm for the recovery of the underlying low-rank tensor from noisy observations, and then derive error bounds under a properly weighted metric. Finally, we test both the efficiency and accuracy of our algorithm in numerical simulations on synthetic and real datasets.
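The building block of this approach can be illustrated with a minimal NumPy sketch of plain truncated HOSVD: compute a factor matrix per mode from the leading left singular vectors of each unfolding, then project onto those subspaces. This is the unweighted core operation only, not the paper's weighted algorithm, and all names are illustrative:

```python
import numpy as np

def unfold(T, mode):
    # Mode-n unfolding: move `mode` to the front, then flatten the rest.
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def fold(M, mode, shape):
    # Inverse of unfold: reshape, then move the front axis back to `mode`.
    full = [shape[mode]] + [s for i, s in enumerate(shape) if i != mode]
    return np.moveaxis(M.reshape(full), 0, mode)

def hosvd_approx(T, ranks):
    # Truncated HOSVD: leading left singular vectors of each unfolding
    # give the factor matrices; project to the core, then expand back.
    factors = []
    for n, r in enumerate(ranks):
        U, _, _ = np.linalg.svd(unfold(T, n), full_matrices=False)
        factors.append(U[:, :r])
    core = T
    for n, U in enumerate(factors):
        core = fold(U.T @ unfold(core, n), n,
                    core.shape[:n] + (U.shape[1],) + core.shape[n + 1:])
    approx = core
    for n, U in enumerate(factors):
        approx = fold(U @ unfold(approx, n), n,
                      approx.shape[:n] + (U.shape[0],) + approx.shape[n + 1:])
    return approx

# Exact recovery check on a tensor of multilinear rank (2, 2, 2):
rng = np.random.default_rng(1)
G = rng.standard_normal((2, 2, 2))
A = rng.standard_normal((5, 2))
B = rng.standard_normal((6, 2))
C = rng.standard_normal((7, 2))
T = np.einsum('abc,ia,jb,kc->ijk', G, A, B, C)
print(np.allclose(hosvd_approx(T, (2, 2, 2)), T))   # exact at the true ranks
```

When the truncation ranks match the tensor's multilinear rank, the projection is exact; roughly speaking, the weighted variant in the paper applies this kind of truncation to a reweighted version of the zero-filled observation tensor to compensate for the non-uniform sampling pattern.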


2021 ◽  
Author(s):  
Changxiao Cai ◽  
Gen Li ◽  
H. Vincent Poor ◽  
Yuxin Chen

This paper investigates a problem of broad practical interest, namely, the reconstruction of a large-dimensional low-rank tensor from highly incomplete and randomly corrupted observations of its entries. Although a number of papers have been dedicated to this tensor completion problem, prior algorithms either are computationally too expensive for large-scale applications or come with suboptimal statistical performance. Motivated by this, we propose a fast two-stage nonconvex algorithm—a gradient method following a rough initialization—that achieves the best of both worlds: optimal statistical accuracy and computational efficiency. Specifically, the proposed algorithm provably completes the tensor and retrieves all low-rank factors within nearly linear time, while at the same time enjoying near-optimal statistical guarantees (i.e., minimal sample complexity and optimal estimation accuracy). The insights conveyed through our analysis of nonconvex optimization might have implications for a broader family of tensor reconstruction problems beyond tensor completion.
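A toy NumPy sketch conveys the flavor of the gradient stage: plain gradient descent on CP factors fitting the observed entries of a rank-2 tensor. This is only a caricature of the paper's method; a random initialization stands in for its rough initialization stage, no corruption is modeled, and the step size and dimensions are arbitrary choices:

```python
import numpy as np

rng = np.random.default_rng(0)
dims, r = (6, 7, 8), 2

# Ground-truth rank-2 tensor and a random ~60%-observed sampling mask.
A_t, B_t, C_t = (rng.standard_normal((d, r)) for d in dims)
T = np.einsum('ir,jr,kr->ijk', A_t, B_t, C_t)
mask = rng.random(dims) < 0.6

# Random initialization (the paper uses a careful initialization instead).
A, B, C = (rng.standard_normal((d, r)) for d in dims)

eta, losses = 0.005, []
for _ in range(5000):
    # Residual on observed entries only.
    R = mask * (np.einsum('ir,jr,kr->ijk', A, B, C) - T)
    losses.append(0.5 * np.sum(R ** 2))
    # Gradients of the observed-entry least-squares loss w.r.t. each factor.
    gA = np.einsum('ijk,jr,kr->ir', R, B, C)
    gB = np.einsum('ijk,ir,kr->jr', R, A, C)
    gC = np.einsum('ijk,ir,jr->kr', R, A, B)
    A, B, C = A - eta * gA, B - eta * gB, C - eta * gC

print(losses[0], losses[-1])   # loss on observed entries should drop sharply
```

The appeal of such nonconvex factored methods is that each iteration touches only the observed entries and the small factor matrices, which is what makes nearly linear running time plausible at scale.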


Author(s):  
Ioanna Siaminou ◽  
Ioannis Marios Papagiannakos ◽  
Christos Kolomvakis ◽  
Athanasios P. Liavas

IEEE Access ◽  
2019 ◽  
Vol 7 ◽  
pp. 95903-95914 ◽  
Author(s):  
Bilian Chen ◽  
Ting Sun ◽  
Zhehao Zhou ◽  
Yifeng Zeng ◽  
Langcai Cao

2020 ◽  
Vol 34 (04) ◽  
pp. 4420-4427
Author(s):  
Nikos Kargas ◽  
Nicholas D. Sidiropoulos

Function approximation from input and output data pairs constitutes a fundamental problem in supervised learning. Deep neural networks are currently the most popular method for learning to mimic the input-output relationship of a general nonlinear system, as they have proven to be very effective in approximating complex highly nonlinear functions. In this work, we show that identifying a general nonlinear function y = ƒ(x1,…,xN) from input-output examples can be formulated as a tensor completion problem, and that under certain conditions provably correct nonlinear system identification is possible. Specifically, we model the interactions between the N input variables and the scalar output of a system by a single N-way tensor, and set up a weighted low-rank tensor completion problem with smoothness regularization, which we tackle using a block coordinate descent algorithm. We extend our method to the multi-output setting and the case of partially observed data, which cannot be readily handled by neural networks. Finally, we demonstrate the effectiveness of the approach using several regression tasks, including some standard benchmarks and a challenging student grade prediction task.
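The key observation, that a structurally simple multivariate function discretized on a grid yields a low-rank tensor, can be checked directly in NumPy. The function below is a hypothetical example chosen for illustration, not one from the paper: it is a sum of two separable terms, so its CP rank is at most 2, and every mode unfolding of the resulting tensor has matrix rank at most 2.

```python
import numpy as np

# Discretize each input variable on its own grid (sizes are arbitrary).
g1 = np.linspace(0, np.pi, 8)
g2 = np.linspace(-1, 1, 9)
g3 = np.linspace(0, 2, 10)

# Evaluate f on the full grid: T[i, j, k] = f(g1[i], g2[j], g3[k]).
X1, X2, X3 = np.meshgrid(g1, g2, g3, indexing='ij')
T = np.sin(X1) + X2 * X3   # CP rank <= 2: sin(x1)*1*1 + 1*x2*x3

# A CP-rank-r tensor has every mode-n unfolding of matrix rank <= r.
for mode in range(3):
    M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
    print(mode, np.linalg.matrix_rank(M, tol=1e-8))
```

In this framing, each training pair (x, y) reveals one entry of T at the grid cell containing x, and learning f amounts to completing the remaining entries under the low-rank assumption.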


An iterative criterion for the asymptotic stability of a linear descriptor system is considered. The criterion is based on an iterative algorithm for computing the generalized matrix sign function. As an example, the asymptotic stability analysis of a large descriptor system is given. Keywords: linear descriptor system; stability criterion; matrix sign function; search algorithm
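For the standard (non-descriptor) case, the matrix sign function can be computed with the classical Newton iteration X ← (X + X⁻¹)/2, which converges when A has no eigenvalues on the imaginary axis; asymptotic stability of x' = Ax then corresponds to sign(A) = −I. The generalized iteration for a pencil λE − A follows the same pattern but is not shown here. A minimal sketch:

```python
import numpy as np

def matrix_sign(A, iters=50, tol=1e-12):
    # Newton iteration X <- (X + X^{-1}) / 2 for the matrix sign function;
    # converges quadratically when A has no purely imaginary eigenvalues.
    X = A.astype(float).copy()
    for _ in range(iters):
        X_new = 0.5 * (X + np.linalg.inv(X))
        if np.linalg.norm(X_new - X, 'fro') < tol:
            return X_new
        X = X_new
    return X

# A state matrix with eigenvalues -2 and -3, both in the open left
# half-plane, so the system x' = Ax is asymptotically stable.
A = np.array([[-2.0, 1.0],
              [0.0, -3.0]])
S = matrix_sign(A)
print(np.allclose(S, -np.eye(2)))   # stable system -> sign(A) = -I
```

The stability test reduces to checking how many eigenvalues of sign(A) equal −1 (via its trace), which avoids computing the eigenvalues of A explicitly.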

