rank tensor
Recently Published Documents


TOTAL DOCUMENTS: 503 (FIVE YEARS: 251)
H-INDEX: 33 (FIVE YEARS: 9)

2022 · Vol 27 (2) · pp. 1-23
Author(s): Xiao Shi, Hao Yan, Qiancun Huang, Chengzhen Xuan, Lei He, ...

The “curse of dimensionality” has become the major challenge for existing high-sigma yield analysis methods. In this article, we develop a meta-model using Low-Rank Tensor Approximation (LRTA) to substitute for expensive SPICE simulation. The polynomial degree of our LRTA model grows only linearly with the circuit dimension, which makes it especially promising for high-dimensional circuit problems. Our LRTA meta-model is solved efficiently with a robust greedy algorithm and calibrated iteratively with a bootstrap-assisted adaptive sampling method. We also develop a novel global sensitivity analysis approach that produces a more compact, reduced LRTA meta-model, further accelerating model calibration and yield estimation. Experiments on memory and analog circuits validate that the proposed LRTA method outperforms other state-of-the-art approaches in both accuracy and efficiency.
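The LRTA surrogate described here belongs to the canonical low-rank function approximation family, where the model is a sum of rank-one terms, each a product of univariate polynomials. Below is a minimal sketch of that general idea, not the authors' implementation: a rank-R model fit by alternating least squares, which stands in for the paper's robust greedy solver. All names (fit_lrta, rank, degree, sweeps) are illustrative.

```python
# A minimal sketch (not the authors' code) of a low-rank tensor
# approximation surrogate of the form
#   f(x) ~ sum_l b_l * prod_i ( sum_k z[i][k, l] * P_k(x_i) ),
# fit by alternating least squares over dimensions.
import numpy as np
from numpy.polynomial import Legendre

def basis(x, degree):
    """Evaluate Legendre polynomials P_0..P_degree at the points x."""
    return np.stack([Legendre.basis(k)(x) for k in range(degree + 1)], axis=1)

def fit_lrta(X, y, rank=2, degree=2, sweeps=10):
    n, d = X.shape
    P = [basis(X[:, i], degree) for i in range(d)]   # n x (degree+1) per dim
    Z = [np.random.randn(degree + 1, rank) * 0.1 for _ in range(d)]
    b = np.ones(rank)
    for _ in range(sweeps):
        for i in range(d):
            # Product of every other dimension's univariate factor: n x rank.
            other = np.ones((n, rank))
            for j in range(d):
                if j != i:
                    other *= P[j] @ Z[j]
            # Solve for dimension i's coefficients by least squares.
            A = np.einsum('nk,nl->nkl', P[i], other * b).reshape(n, -1)
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            Z[i] = coef.reshape(degree + 1, rank)
        # Refresh the weights of the rank-one terms.
        T = np.ones((n, rank))
        for j in range(d):
            T *= P[j] @ Z[j]
        b, *_ = np.linalg.lstsq(T, y, rcond=None)
    return Z, b

def predict(X, Z, b, degree=2):
    T = np.ones((X.shape[0], len(b)))
    for i, Zi in enumerate(Z):
        T *= basis(X[:, i], degree) @ Zi
    return T @ b

# Toy usage: a 10-dimensional target that is exactly rank one in this basis.
rng = np.random.default_rng(0)
X = rng.uniform(-1, 1, size=(500, 10))
y = np.prod(1 + 0.3 * X, axis=1)
Z, b = fit_lrta(X, y)
print(np.abs(predict(X, Z, b) - y).max())
```

Each sweep solves d small least-squares problems with (degree + 1) × rank unknowns apiece, so the parameter count grows linearly with the input dimension, which is the scaling property the abstract highlights.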


2022
Author(s): Yujiao Zhao, Zheyuan Yi, Yilong Liu, Fei Chen, Linfang Xiao, ...

2022 · Vol 4
Author(s): Kaiqi Zhang, Cole Hawkins, Zheng Zhang

A major challenge in many machine learning tasks is that model expressive power depends on model size. Low-rank tensor methods are an efficient tool for handling the curse of dimensionality in many large-scale machine learning models. The major challenges in training a tensor learning model are how to process high-volume data, how to determine the tensor rank automatically, and how to estimate the uncertainty of the results. While existing tensor learning methods focus on a specific task, this paper proposes a generic Bayesian framework that can solve a broad class of tensor learning problems, including tensor completion, tensor regression, and tensorized neural networks. We develop a low-rank tensor prior for automatic rank determination in nonlinear problems. Our method is implemented with both stochastic gradient Hamiltonian Monte Carlo (SGHMC) and Stein Variational Gradient Descent (SVGD), and we compare the automatic rank determination and uncertainty quantification of the two solvers. We demonstrate that the proposed method determines the tensor rank automatically and quantifies the uncertainty of the obtained results, and we validate the framework on tensor completion and tensorized neural network training tasks.
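The abstract names SVGD as one of its two solvers. The sketch below implements the generic SVGD particle update of Liu and Wang (2016) with an RBF kernel and the median-heuristic bandwidth; it is not the authors' framework (the low-rank tensor prior and the rank-determination machinery are omitted), and the toy Gaussian target is an assumption made only to keep the example self-contained.

```python
# A minimal sketch of one Stein Variational Gradient Descent (SVGD) update.
import numpy as np

def svgd_step(X, grad_logp, step=1e-2):
    """One SVGD update. X: (n, d) particles; grad_logp: (n, d) score at X."""
    n = X.shape[0]
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    h = np.median(sq) / np.log(n + 1) + 1e-12   # median-heuristic bandwidth
    K = np.exp(-sq / h)                          # RBF kernel matrix
    # The attractive term K @ grad_logp pulls particles toward high density;
    # the repulsive term (gradient of the kernel) keeps them spread out.
    repulsion = (2.0 / h) * (K.sum(axis=1)[:, None] * X - K @ X)
    return X + step * (K @ grad_logp + repulsion) / n

# Toy usage: particles initialized far away drift toward N(0, I),
# whose score function is simply -x.
rng = np.random.default_rng(0)
X = rng.normal(3.0, 1.0, size=(50, 2))
for _ in range(500):
    X = svgd_step(X, grad_logp=-X)
print(X.mean(axis=0))   # close to the zero mean of the target
```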


2022 · Vol 190 · pp. 108339
Author(s): Jingfei He, Xunan Zheng, Peng Gao, Yatong Zhou

2021
Author(s): Mahsa Mozaffari, Panos P. Markopoulos

In this work, we propose a new formulation for low-rank tensor approximation with tunable outlier-robustness and present a unified algorithmic solution framework. The formulation relies on a new generalized robust loss function (the Barron loss), which encompasses several well-known loss functions with variable outlier resistance. The robustness of the proposed framework is corroborated by numerical studies on synthetic and real data.
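The generalized loss referenced here is Barron's general and adaptive robust loss (Barron, CVPR 2019). A minimal NumPy sketch follows; the parameter names are ours. The shape parameter alpha interpolates between familiar losses, which is how a single formulation obtains tunable outlier resistance.

```python
# Barron's general robust loss:
#   rho(x, alpha, c) = (|alpha-2|/alpha) * (((x/c)^2/|alpha-2| + 1)^(alpha/2) - 1)
import numpy as np

def barron_loss(x, alpha, c=1.0):
    """General robust loss; alpha tunes outlier resistance, c sets the scale."""
    z = (x / c) ** 2
    if alpha == 2.0:                   # quadratic (L2) limit
        return 0.5 * z
    if alpha == 0.0:                   # Cauchy / Lorentzian limit
        return np.log1p(0.5 * z)
    if np.isneginf(alpha):             # Welsch / Leclerc limit
        return 1.0 - np.exp(-0.5 * z)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((z / b + 1.0) ** (alpha / 2.0) - 1.0)

# Smaller alpha flattens the loss for large residuals, so outliers
# contribute less to the fit; alpha = 2 recovers ordinary least squares.
r = np.linspace(-5, 5, 11)
print(barron_loss(r, alpha=2.0), barron_loss(r, alpha=-2.0))
```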


2021 · Vol 30 (06)
Author(s): Wenjin Qin, Hailin Wang, Feng Zhang, Mingwei Dai, Jianjun Wang
Keyword(s): Low Rank
