tensor rank
Recently Published Documents

TOTAL DOCUMENTS: 131 (FIVE YEARS: 15)
H-INDEX: 16 (FIVE YEARS: 0)

2022, Vol 4
Author(s): Kaiqi Zhang, Cole Hawkins, Zheng Zhang

A major challenge in many machine learning tasks is that a model's expressive power depends on its size. Low-rank tensor methods are an efficient tool for handling the curse of dimensionality in many large-scale machine learning models. The major challenges in training a tensor learning model include how to process high-volume data, how to determine the tensor rank automatically, and how to estimate the uncertainty of the results. While existing tensor learning methods focus on a specific task, this paper proposes a generic Bayesian framework that can be employed to solve a broad class of tensor learning problems, such as tensor completion, tensor regression, and tensorized neural networks. We develop a low-rank tensor prior for automatic rank determination in nonlinear problems. Our method is implemented with both stochastic gradient Hamiltonian Monte Carlo (SGHMC) and Stein Variational Gradient Descent (SVGD), and we compare the automatic rank determination and uncertainty quantification of these two solvers. We demonstrate that the proposed method can determine the tensor rank automatically and quantify the uncertainty of the obtained results. We validate our framework on tensor completion and tensorized neural network training tasks.
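As a rough illustration of how a low-rank prior can prune redundant components (the paper itself uses SGHMC and SVGD rather than the MAP-style gradient descent below), here is a minimal sketch of CP-format tensor completion with a group-shrinkage prior over the rank-1 components; the tensor sizes, R_MAX, LAMBDA, and LR are illustrative assumptions, not values from the paper.

```python
# Minimal sketch (not the authors' SGHMC/SVGD implementation): MAP-style fitting of a
# CP-format tensor completion model with a group-shrinkage prior over rank-1
# components, so redundant components decay toward zero -- a crude stand-in for
# automatic rank determination. Tensor sizes, R_MAX, LAMBDA and LR are assumptions.
import numpy as np

def cp_reconstruct(A, B, C):
    """Rebuild a 3-way tensor from CP factor matrices A (I x R), B (J x R), C (K x R)."""
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
I = J = K = 12
true = cp_reconstruct(*(rng.normal(size=(n, 3)) for n in (I, J, K)))  # ground truth, rank 3
mask = rng.random(true.shape) < 0.5                                   # ~50% of entries observed

R_MAX, LAMBDA, LR = 8, 2.0, 2e-3       # over-parameterized rank, prior strength, step size
A, B, C = (0.3 * rng.normal(size=(n, R_MAX)) for n in (I, J, K))

for step in range(1000):
    resid = mask * (cp_reconstruct(A, B, C) - true)
    # Gradients of the squared fitting error on observed entries.
    gA = 2 * np.einsum('ijk,jr,kr->ir', resid, B, C)
    gB = 2 * np.einsum('ijk,ir,kr->jr', resid, A, C)
    gC = 2 * np.einsum('ijk,ir,jr->kr', resid, A, B)
    # Group-shrinkage prior: one penalty per rank-1 component, shared by A, B, C.
    comp_norm = np.sqrt((A**2).sum(0) + (B**2).sum(0) + (C**2).sum(0)) + 1e-12
    for F, g in ((A, gA), (B, gB), (C, gC)):
        F -= LR * (g + LAMBDA * F / comp_norm)

comp_norm = np.sqrt((A**2).sum(0) + (B**2).sum(0) + (C**2).sum(0))
print("per-component norms:", np.round(np.sort(comp_norm)[::-1], 3))
print("fit error on observed entries:",
      float(np.sum(mask * (cp_reconstruct(A, B, C) - true) ** 2)))
```

Components whose joint norm collapses toward zero are effectively pruned, which is the behavior a Bayesian low-rank prior is meant to induce.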



Author(s): Naoki Sasakura

In this paper, to understand space–time dynamics in the canonical tensor model of quantum gravity with a positive cosmological constant, we analytically and numerically study the phase profile of its exact wave function in a coordinate representation, instead of the momentum representation analyzed so far. A saddle point analysis shows that Lie group symmetric space–times are strongly favored due to the abundance of continuous families of saddle points, giving an emergent fluid picture. The phase profile suggests that spatial sizes grow in “time,” where sizes are measured by the tensor-geometry correspondence previously introduced using tensor rank decomposition. Monte Carlo simulations are also performed for a few small-N cases by applying a re-weighting procedure to the oscillatory integral that expresses the wave function. The results agree well with the saddle point analysis, but the phase profile is subject to disturbances in a large space–time region, suggesting the existence of light modes there and motivating future computations of primordial fluctuations from the perspective of the canonical tensor model.
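The re-weighting idea mentioned above can be illustrated on a toy oscillatory integral (purely illustrative; this is not the paper's wave-function integrand): sample from a positive weight and carry the oscillatory phase along as a re-weighting factor.

```python
# Illustrative sketch of re-weighting for an oscillatory integral: draw samples from
# the positive part of the integrand and average the oscillatory factor as a re-weight.
# Target: I(k) = \int exp(-x^2/2) exp(i k x) dx = sqrt(2*pi) * exp(-k^2/2).
import numpy as np

rng = np.random.default_rng(1)
k = 2.0                                    # assumed oscillation frequency
n_samples = 200_000

x = rng.normal(size=n_samples)             # samples ~ exp(-x^2/2) / sqrt(2*pi)
phase = np.exp(1j * k * x)                 # oscillatory factor carried as a re-weight
estimate = np.sqrt(2 * np.pi) * phase.mean()
exact = np.sqrt(2 * np.pi) * np.exp(-k ** 2 / 2)

print("estimate:", estimate)
print("exact   :", exact)
# The variance of the phase average grows rapidly with the oscillation frequency,
# which is why such re-weighting approaches are typically limited to modest sizes.
```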



2021, Vol 2090 (1), pp. 012041
Author(s): Reed Nessler, Tuguldur Kh. Begzjav

Abstract: The theory of nonlinear spectroscopy of randomly oriented molecules leads to the problem of averaging molecular quantities over random rotations. We solve this problem for arbitrary tensor rank by deriving a closed-form expression for the rotationally invariant tensor of averaged direction cosine products. From it, we obtain some useful new facts about this tensor. Our results serve to speed up the inherently lengthy calculations of nonlinear optics.
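For the lowest nontrivial case, the rotational average of a product of two direction cosines has the well-known isotropic form $\langle R_{ij} R_{kl} \rangle = \delta_{ik}\delta_{jl}/3$. The sketch below is a brute-force Monte Carlo check of that identity, not the authors' closed-form construction for arbitrary rank.

```python
# Numerical check (illustrative only): Monte Carlo average of products of direction
# cosines R_ij over random rotations. For a single pair the isotropic result is
# <R_ij R_kl> = delta_ik * delta_jl / 3.
import numpy as np

def random_rotation(rng):
    """Haar-distributed rotation in SO(3) via QR decomposition of a Gaussian matrix."""
    q, r = np.linalg.qr(rng.normal(size=(3, 3)))
    q *= np.sign(np.diag(r))          # sign fix so the distribution is Haar on O(3)
    if np.linalg.det(q) < 0:
        q[:, 0] *= -1                 # flip one column to land in SO(3)
    return q

rng = np.random.default_rng(0)
n_samples = 50_000
avg = np.zeros((3, 3, 3, 3))
for _ in range(n_samples):
    R = random_rotation(rng)
    avg += np.einsum('ij,kl->ijkl', R, R)
avg /= n_samples

expected = np.einsum('ik,jl->ijkl', np.eye(3), np.eye(3)) / 3.0
print("max deviation from delta_ik delta_jl / 3:", np.abs(avg - expected).max())
```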



2021, Vol 13 (19), pp. 3829
Author(s): Wenfeng Kong, Yangyang Song, Jing Liu

During the acquisition process, hyperspectral images (HSIs) are inevitably contaminated by mixed noise, which seriously degrades image quality, so HSI denoising is a critical preprocessing step. Among HSI denoising methods, those based on a low-rank prior have achieved satisfactory results. In particular, the tensor nuclear norm (TNN), based on the tensor singular value decomposition (t-SVD), is widely employed to approximate the low-rank prior, and its computation can be accelerated by the fast Fourier transform (FFT). However, the TNN is defined through the Fourier transform, which lacks frequency localization, and it only captures low-rankness along the spectral mode while ignoring information in the spatial dimensions. In this paper, to overcome these deficiencies, we exploit the basis redundancy of the framelet and the low-rank structure of the HSI along all three modes. We propose the framelet-based tensor fibered rank as a new representation of the tensor rank, and the framelet-based three-modal tensor nuclear norm (F-3MTNN) as its convex relaxation. The F-3MTNN serves as the regularizer of our denoising model and explores the low-rank characteristics of the HSI along all three modes in a more flexible and comprehensive way. Moreover, we design an efficient algorithm based on the alternating direction method of multipliers (ADMM). Finally, numerical results from several experiments demonstrate the superior denoising performance of the proposed F-3MTNN model.
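For reference, the baseline t-SVD-based TNN that F-3MTNN generalizes can be computed by taking an FFT along the spectral mode and summing the singular values of each frontal slice in the Fourier domain. The sketch below follows that standard definition (the 1/n3 normalization is one common convention); it is not the paper's F-3MTNN.

```python
# Minimal sketch of the standard t-SVD-based tensor nuclear norm: FFT along the third
# mode, then sum the singular values of every frontal slice in the Fourier domain.
# The 1/n3 normalization follows one common convention; definitions differ by constants.
import numpy as np

def tensor_nuclear_norm(X):
    """TNN of an n1 x n2 x n3 tensor via the FFT along mode 3."""
    n3 = X.shape[2]
    Xf = np.fft.fft(X, axis=2)      # frontal slices in the Fourier domain
    sv_sum = sum(np.linalg.svd(Xf[:, :, k], compute_uv=False).sum() for k in range(n3))
    return sv_sum / n3

rng = np.random.default_rng(0)
# A low-tubal-rank tensor (t-product of two thin tensors) has a small TNN compared with
# a Gaussian tensor of the same Frobenius norm, which is what low-rank priors exploit.
A = rng.normal(size=(30, 2, 20))
B = rng.normal(size=(2, 30, 20))
low_rank = np.real(np.fft.ifft(np.einsum('irk,rjk->ijk',
                                          np.fft.fft(A, axis=2),
                                          np.fft.fft(B, axis=2)), axis=2))
noise = rng.normal(size=low_rank.shape)
noise *= np.linalg.norm(low_rank) / np.linalg.norm(noise)
print("TNN(low tubal rank):", tensor_nuclear_norm(low_rank))
print("TNN(Gaussian noise):", tensor_nuclear_norm(noise))
```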



Universe, 2021, Vol 7 (8), pp. 302
Author(s): Dennis Obster, Naoki Sasakura

Tensor rank decomposition is a useful tool for the geometric interpretation of the tensors in the canonical tensor model (CTM) of quantum gravity. In order to understand the stability of this interpretation, it is important to be able to estimate how many tensor rank decompositions can approximate a given tensor. More precisely, finding an approximate symmetric tensor rank decomposition of a symmetric tensor $Q$ with an error allowance $\Delta$ means finding vectors $\phi_i$ satisfying $\|Q - \sum_{i=1}^{R} \phi_i \otimes \phi_i \otimes \cdots \otimes \phi_i\|^2 \leq \Delta$. The volume of all such possible $\phi_i$ is an interesting quantity which measures the number of possible decompositions of a tensor $Q$ within the allowance. While it would be difficult to evaluate this quantity for each $Q$, we find an explicit formula for a similar quantity obtained by integrating over all $Q$ of unit norm. The expression, as a function of $\Delta$, is given by the product of a hypergeometric function and a power function. By combining new numerical analysis and previous results, we conjecture a formula for the critical rank, yielding an estimate for the spacetime degrees of freedom of the CTM. We also extend the formula to generic decompositions of non-symmetric tensors in order to make our results more broadly applicable. Interestingly, the derivation depends on the existence (convergence) of the partition function of a matrix model which previously appeared in the context of the CTM.
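As a concrete reading of the error criterion (specialized to order-3 tensors; the sizes, rank $R$, and allowance $\Delta$ below are assumptions), the following sketch builds a unit-norm symmetric tensor with an exact rank-$R$ decomposition, perturbs the vectors $\phi_i$, and tests the squared-norm condition.

```python
# Illustrative check of the error criterion in the abstract (sizes, rank R and
# allowance DELTA are assumptions): build a unit-norm symmetric order-3 tensor from an
# exact rank-R decomposition, perturb the vectors phi_i, and test the condition.
import numpy as np

def symmetric_rank_decomposition(phis):
    """Sum of phi_i (x) phi_i (x) phi_i over the rows of phis (R x N)."""
    return np.einsum('ri,rj,rk->ijk', phis, phis, phis)

rng = np.random.default_rng(0)
N, R, DELTA = 6, 4, 1e-2
phis = rng.normal(size=(R, N)) / np.sqrt(N)
Q = symmetric_rank_decomposition(phis)     # symmetric tensor with an exact rank-R form
Q /= np.linalg.norm(Q)                     # unit norm, as in the paper's setup
# Rescale the vectors so that their decomposition reproduces the unit-norm Q exactly.
phis /= np.linalg.norm(symmetric_rank_decomposition(phis)) ** (1.0 / 3.0)

perturbed = phis + 1e-3 * rng.normal(size=phis.shape)
error = np.linalg.norm(Q - symmetric_rank_decomposition(perturbed)) ** 2
print(f"squared error {error:.2e} <= DELTA {DELTA:.0e}: {error <= DELTA}")
```

The volume studied in the paper is, informally, the size of the set of all such perturbed $\phi_i$ that still satisfy the inequality.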



2021, Vol 620, pp. 37-60
Author(s): Y.G. Liang, Sergio Da Silva, Yang Zhang
Keyword(s):


2021
Author(s): Hao Kong, Canyi Lu, Zhouchen Lin
Keyword(s):


Author(s): Michel Chipot, Wolfgang Hackbusch, Stefan Sauter, Alexander Veit

Abstract: In this paper, we consider the Poisson equation on a “long” domain which is the Cartesian product of a long one-dimensional interval with a (d − 1)-dimensional domain. The right-hand side is assumed to have a rank-1 tensor structure. We present and compare methods to construct approximations of the solution that have tensor structure and whose computational effort is governed by solving only elliptic problems on lower-dimensional domains. A zero-th order tensor approximation is derived using tools from asymptotic analysis (method 1). The resulting approximation is an elementary tensor and hence has a fixed error, which turns out to be very close to that of the best possible zero-th order approximation. This approximation can be used as a starting guess for the derivation of higher-order tensor approximations by a greedy-type method (method 2). Numerical experiments show that this method converges towards the exact solution. Method 3 is based on the derivation of a tensor approximation via exponential sums applied to discretized differential operators and their inverses. It can be proved that this method converges exponentially with respect to the tensor rank. We present numerical experiments which compare the performance and sensitivity of these three methods.
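The core mechanism of method 3 can be illustrated with a scalar toy example (the step size H and truncation K below are assumptions, not the paper's discretization): an exponential-sum quadrature of $1/t = \int_0^\infty e^{-ts}\,ds$ turns the inverse of a sum into a short sum of separable, i.e. elementary-tensor, terms.

```python
# Illustrative sketch of the exponential-sum idea behind method 3 (not the paper's
# discretization; the step size H and truncation K are assumptions). Trapezoidal
# quadrature of 1/t = \int_0^infty exp(-t s) ds after the substitution s = exp(u) gives
# 1/t ~= sum_k w_k exp(-s_k t), so 1/(x + y) ~= sum_k w_k exp(-s_k x) exp(-s_k y)
# is a short sum of separable (elementary-tensor) terms.
import numpy as np

H, K = 0.4, 30
u = H * np.arange(-K, K + 1)      # trapezoidal nodes in the substituted variable
s = np.exp(u)                     # nodes of the Laplace-integral representation
w = H * s                         # weights, including the Jacobian exp(u)

t = np.linspace(1.0, 6.0, 11)
approx_1_over_t = np.exp(-np.outer(t, s)) @ w
print("max relative error for 1/t:", np.max(np.abs(approx_1_over_t - 1 / t) * t))

# Separable approximation of 1/(x + y): a sum of 2K + 1 rank-1 terms.
x = y = np.linspace(0.5, 3.0, 7)
approx = np.einsum('k,ik,jk->ij', w, np.exp(-np.outer(x, s)), np.exp(-np.outer(y, s)))
exact = 1.0 / (x[:, None] + y[None, :])
print("max relative error for 1/(x+y):", np.max(np.abs(approx - exact) / exact))
```

Replacing x and y by commuting discretized one-dimensional operators turns each term into an elementary tensor, which is why the approach yields tensor-structured approximations of inverses.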



2021 ◽  
Vol 37 ◽  
pp. 425-433
Author(s):  
Siddharth Krishna ◽  
Visu Makam

The tensor rank and border rank of the $3 \times 3$ determinant tensor are known to be $5$ if the characteristic is not two. In characteristic two, the existing proofs of both the upper and lower bounds fail. In this paper, we show that the tensor rank remains $5$ for fields of characteristic two as well.
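For context, here are the standard definitions involved (not taken from the paper): the $3 \times 3$ determinant, viewed as a trilinear form in the columns of the matrix, is a tensor with Levi-Civita coefficients, and its tensor rank is the least number of elementary tensors summing to it.

```latex
% Standard definitions (context only, not from the paper), over a field F with
% standard basis e_1, e_2, e_3 of F^3 used to identify F^3 with its dual:
\det \;=\; \sum_{i,j,k=1}^{3} \varepsilon_{ijk}\, e_i \otimes e_j \otimes e_k
      \;\in\; \mathbb{F}^3 \otimes \mathbb{F}^3 \otimes \mathbb{F}^3 ,
\qquad
\operatorname{rank}(\det) \;=\; \min\Bigl\{ R \;:\;
      \det = \sum_{r=1}^{R} a_r \otimes b_r \otimes c_r,\;\;
      a_r, b_r, c_r \in \mathbb{F}^3 \Bigr\}.
% In characteristic two, +1 = -1, so this tensor coincides with the 3x3 permanent tensor.
```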


