# Tensor Approximation: Recently Published Documents

## TOTAL DOCUMENTS

140
(FIVE YEARS 71)

## H-INDEX

18
(FIVE YEARS 5)

2022 · Vol 27 (2) · pp. 1-23
Author(s): Xiao Shi, Hao Yan, Qiancun Huang, Chengzhen Xuan, Lei He, ...

The “curse of dimensionality” has become the major challenge for existing high-sigma yield analysis methods. In this article, we develop a meta-model using Low-Rank Tensor Approximation (LRTA) to substitute for expensive SPICE simulation. The polynomial degree of our LRTA model grows only linearly with the circuit dimension, which makes it especially promising for high-dimensional circuit problems. Our LRTA meta-model is solved efficiently with a robust greedy algorithm and calibrated iteratively with a bootstrap-assisted adaptive sampling method. We also develop a novel global sensitivity analysis approach that generates a more compact, reduced LRTA meta-model, further accelerating model calibration and yield estimation. Experiments on memory and analog circuits validate that the proposed LRTA method outperforms other state-of-the-art approaches in both accuracy and efficiency.
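The linear parameter growth claimed above can be illustrated with a quick counting sketch. This assumes the canonical low-rank format commonly used in LRTA surrogates (a sum of rank-one terms, each a product of univariate polynomials); the function names are illustrative, not from the paper.

```python
# Parameter count of a rank-R canonical polynomial surrogate: each of the
# R rank-one terms is a product of `dim` univariate polynomials with
# (degree + 1) coefficients each, so the count is linear in `dim`.
def lrta_param_count(dim, degree, rank):
    return rank * dim * (degree + 1)

# A full tensor-product polynomial basis, by contrast, is exponential in dim.
def full_tensor_param_count(dim, degree):
    return (degree + 1) ** dim

print(lrta_param_count(dim=50, degree=3, rank=5))   # 1000 parameters
print(full_tensor_param_count(dim=50, degree=3))    # 4**50, intractable
```

For a 50-dimensional circuit, the low-rank format needs a thousand coefficients where the full tensor-product basis would need 4^50, which is the gap that makes high-dimensional yield analysis feasible.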

2022 · Vol 11 (2) · pp. 214
Author(s): Kyuahn Kwon, Jaeyong Chung

Large-scale neural networks have attracted much attention for their surprising results in various cognitive tasks such as object detection and image classification. However, the large number of weight parameters in these complex networks can be problematic when the models are deployed to embedded systems. The problem is exacerbated in emerging neuromorphic computers, where each weight parameter is stored within a synapse, the primary computational resource of these bio-inspired computers. We describe an effective way of reducing the parameters by a recursive tensor factorization method. The singular value decomposition is applied recursively to decompose a tensor that represents the weight parameters; the tensor is then approximated by algorithms that minimize the approximation error and the number of parameters. This process factorizes a given network, yielding a deeper, less dense, weight-shared network with good initial weights, which can be fine-tuned by gradient descent.
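A single step of this idea, factorizing one dense weight matrix by truncated SVD into two thinner factors, can be sketched as follows. This is a minimal illustration of low-rank layer factorization, not the paper's full recursive algorithm, which would repeat the step on the resulting factors.

```python
import numpy as np

def factorize_layer(W, rank):
    """Split W (m x n) into A (m x rank) and B (rank x n) with W ~ A @ B."""
    U, s, Vt = np.linalg.svd(W, full_matrices=False)
    A = U[:, :rank] * s[:rank]   # absorb the singular values into the left factor
    B = Vt[:rank, :]
    return A, B                  # good initial weights for fine-tuning

rng = np.random.default_rng(0)
W = rng.standard_normal((256, 256))
A, B = factorize_layer(W, rank=32)
# parameter count drops from 256*256 = 65536 to 2*256*32 = 16384
print(A.shape, B.shape, A.size + B.size)
```

Replacing one layer with the two-factor chain `A @ B` is exactly the "deeper, less dense" transformation described above: one 256x256 layer becomes a 256x32 layer followed by a 32x256 layer.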

2022 · pp. 1-1
Author(s): Yanhong Yang, Yuan Feng, Jianhua Zhang, Shengyong Chen

2021
Author(s): Mahsa Mozaffari, Panos P. Markopoulos

In this work, we propose a new formulation for low-rank tensor approximation with tunable outlier-robustness, and present a unified algorithmic solution framework. The formulation relies on a generalized robust loss function (the Barron loss), which encompasses several well-known loss functions with variable outlier resistance. The robustness of the proposed framework is corroborated by numerical studies on synthetic and real data.
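The tunable robustness comes from the shape parameter of Barron's general loss, which interpolates between familiar losses. A minimal sketch (following Barron's published form; `alpha` is the shape parameter, `c` a scale, both chosen here for illustration):

```python
import math

def barron_loss(x, alpha, c=1.0):
    """Barron's general robust loss; alpha tunes outlier resistance."""
    z = (x / c) ** 2
    if alpha == 2.0:                 # quadratic (least-squares) limit
        return 0.5 * z
    if alpha == 0.0:                 # Cauchy / Lorentzian limit
        return math.log(0.5 * z + 1.0)
    b = abs(alpha - 2.0)
    return (b / alpha) * ((z / b + 1.0) ** (alpha / 2.0) - 1.0)

# alpha = 2 recovers least squares; lowering alpha damps large residuals,
# so outliers contribute less to the fit.
print(barron_loss(3.0, 2.0))         # 4.5
print(barron_loss(3.0, 1.0))         # grows more slowly for large residuals
```

Sweeping `alpha` downward is what trades least-squares efficiency for outlier resistance in a formulation like the one above.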

Author(s): Rima Khouja, Houssam Khalil, Bernard Mourrain

2021 · Vol 59 (1)
Author(s): Carlo Marcati, Maxim Rakhuba, Johan E. M. Ulander

Abstract: We derive rank bounds on the quantized tensor train (QTT) compressed approximation of singularly perturbed reaction-diffusion boundary value problems in one dimension. Specifically, we show that, independently of the scale of the singular perturbation parameter, a numerical solution with accuracy $0 < \varepsilon < 1$ can be represented in the QTT format with a number of parameters that depends only polylogarithmically on $\varepsilon$. In other words, QTT-compressed solutions converge exponentially fast to the exact solution with respect to a root of the number of parameters. We also verify the rank bound estimates numerically and overcome known stability issues of the QTT-based solution of partial differential equations (PDEs) by adapting a preconditioning strategy to obtain stable schemes at all scales. We find, therefore, that the QTT-based strategy is a rapidly converging algorithm for the solution of singularly perturbed PDEs that does not require prior knowledge of the scale of the singular perturbation or of the shape of the boundary layers.
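The QTT idea itself can be sketched in a few lines: reshape a vector of 2^d samples into a d-way 2x...x2 tensor and compress it by sequential truncated SVDs (TT-SVD). This is an illustrative toy, not the paper's preconditioned PDE scheme; the point is that smooth, separable functions yield tiny ranks at every cut.

```python
import numpy as np

def qtt_ranks(vec, tol=1e-10):
    """TT-SVD sweep over a length-2**d vector; returns the rank at each cut."""
    d = int(np.log2(vec.size))
    ranks = []
    mat = vec.reshape(1, -1)
    r = 1
    for _ in range(d - 1):
        mat = mat.reshape(r * 2, -1)              # expose the next binary mode
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        r = max(1, int(np.sum(s > tol * s[0])))   # numerical rank at this cut
        ranks.append(r)
        mat = s[:r, None] * Vt[:r]                # carry the remainder onward
    return ranks

x = np.linspace(0.0, 1.0, 2 ** 10)
print(qtt_ranks(np.exp(x)))   # the exponential separates across every bit split
```

An exponential sampled on a uniform grid has QTT rank 1 at every cut (since e^{a+b} = e^a e^b), so 1024 samples compress to a handful of parameters, which is the kind of structure the rank bounds above exploit.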

Author(s): Bingni Guo, Jiawang Nie, Zi Yang

Abstract: This paper studies how to learn parameters in diagonal Gaussian mixture models. The problem can be formulated as computing incomplete symmetric tensor decompositions. We use generating polynomials to compute incomplete symmetric tensor decompositions and approximations; the tensor approximation method is then used to learn diagonal Gaussian mixture models. We also carry out a stability analysis: when the first- and third-order moments are sufficiently accurate, we show that the obtained parameters for the Gaussian mixture models are also highly accurate. Numerical experiments are also provided.
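The inputs to a moment-based method like this are the empirical first- and third-order moments of the sample. A minimal sketch of computing them (toy data; the component means and mixing here are illustrative, and the decomposition step itself is omitted):

```python
import numpy as np

def empirical_moments(samples):
    """First moment (d,) and symmetric third-moment tensor (d, d, d)."""
    m1 = samples.mean(axis=0)
    m3 = np.einsum('ni,nj,nk->ijk', samples, samples, samples) / len(samples)
    return m1, m3

rng = np.random.default_rng(1)
# toy sample from two diagonal-Gaussian components with distinct means
comp = rng.integers(0, 2, size=1000)
means = np.array([[2.0, -1.0, 0.0], [-2.0, 1.0, 3.0]])
samples = means[comp] + rng.standard_normal((1000, 3))
m1, m3 = empirical_moments(samples)
print(m1.shape, m3.shape)   # (3,) (3, 3, 3)
```

The tensor `m3` is fully symmetric by construction, and (after the covariance adjustment the method performs) its low-rank symmetric decomposition reveals the component means, which is why moment accuracy drives parameter accuracy in the stability analysis.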

Author(s): Rafael Ballester-Ripoll

2021 · Vol 408 · pp. 126342
Author(s): Jie Lin, Ting-Zhu Huang, Xi-Le Zhao, Tian-Hui Ma, Tai-Xiang Jiang, ...