tensor data
Recently Published Documents


TOTAL DOCUMENTS: 183 (five years: 63)

H-INDEX: 23 (five years: 3)

2023
Author(s): Ning Wang, Xin Zhang, Bing Li

Author(s): Cheng Qian, Nikos Kargas, Cao Xiao, Lucas Glass, Nicholas Sidiropoulos, ...

Real-world spatio-temporal data is often incomplete or inaccurate because of delays in data loading. For example, a location-disease-time tensor of case counts can receive multiple delayed updates to its recent temporal slices for some locations or diseases. Recovering such missing or noisy (under-reported) elements of the input tensor can be viewed as a generalized tensor completion problem. Existing tensor completion methods usually assume that (i) missing elements are randomly distributed and (ii) the noise on each tensor element is i.i.d. and zero-mean. Both assumptions can be violated for spatio-temporal tensor data: we often observe multiple versions of the input tensor with different levels of under-reporting noise, and the amount of noise can be time- or location-dependent as more updates are progressively introduced. We model such dynamic data as a multi-version tensor with an extra tensor mode that captures the data updates, and we propose a low-rank tensor model to predict the updates over time. We demonstrate that our method accurately predicts the ground-truth values of many real-world tensors, achieving up to 27.2% lower root mean squared error than the best baseline method. Finally, we extend our method to track the tensor data over time, leading to significant computational savings.
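The abstract does not spell out the model, but the primitive it builds on, low-rank completion of a tensor carrying an extra "version" mode, can be sketched. Below is a minimal numpy illustration, not the authors' method: a plain gradient-descent CP fit on the observed entries of an assumed location x disease x time x version tensor. The rank, learning rate, and choice of optimizer are all assumptions for illustration.

```python
import numpy as np

def cp_complete(T, mask, rank=5, lr=0.02, iters=2000, seed=0):
    """Fit a rank-`rank` CP model to the observed entries (mask == 1) of a
    4-way tensor T (location x disease x time x version); the dense
    reconstruction fills in missing / under-reported entries."""
    rng = np.random.default_rng(seed)
    A, B, C, D = [rng.standard_normal((n, rank)) * 0.1 for n in T.shape]
    for _ in range(iters):
        R = np.einsum('ir,jr,kr,lr->ijkl', A, B, C, D)   # CP reconstruction
        E = mask * (R - T)                               # error on observed entries only
        # gradients of 0.5 * ||E||^2 with respect to each factor matrix
        gA = np.einsum('ijkl,jr,kr,lr->ir', E, B, C, D)
        gB = np.einsum('ijkl,ir,kr,lr->jr', E, A, C, D)
        gC = np.einsum('ijkl,ir,jr,lr->kr', E, A, B, D)
        gD = np.einsum('ijkl,ir,jr,kr->lr', E, A, B, C)
        A -= lr * gA; B -= lr * gB; C -= lr * gC; D -= lr * gD
    return np.einsum('ir,jr,kr,lr->ijkl', A, B, C, D)
```

Because the version mode gets its own factor matrix, the model can express noise levels that change from one data update to the next, which is the structure the abstract exploits.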


2021, Vol 11 (15), pp. 7003
Author(s): Safa Elsheikh, Andrew Fish, Diwei Zhou

A diffusion tensor models the covariance of the Brownian motion of water at a voxel and must be symmetric and positive semi-definite. Image processing approaches designed for linear entities are therefore not effective for manipulating diffusion tensor data, and the artefacts present in diffusion tensor imaging acquisition make segmentation of diffusion tensor data even more challenging. In this study, we develop a spatial fuzzy c-means clustering method for diffusion tensor data that effectively segments diffusion tensor images by accounting for noise, partial voluming, magnetic field inhomogeneity, and other imaging artefacts. To retain the symmetry and positive semi-definiteness of diffusion tensors, the log- and root-Euclidean metrics are used to estimate the mean diffusion tensor of each cluster. The method exploits spatial contextual information and, by calculating the membership values that assign the diffusion tensor at a voxel to the different clusters, provides uncertainty information about its segmentation decisions. We also propose a regularisation model that allows the user to integrate prior knowledge into the segmentation scheme or to highlight and segment local structures. Experiments on simulated images and on real brain datasets from healthy subjects and subjects with spinocerebellar ataxia type 2 showed the new method to be more effective than conventional segmentation methods.
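The core mechanism, fuzzy c-means whose cluster means are computed in the log-Euclidean domain so they stay symmetric positive semi-definite, can be sketched. This is a minimal numpy sketch that omits the paper's spatial regularisation and root-Euclidean variant; the cluster count, fuzzifier m, and eigendecomposition-based matrix log/exp are illustrative assumptions.

```python
import numpy as np

def mat_log(S, eps=1e-10):
    """Matrix log of a symmetric PSD matrix via eigendecomposition."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(np.maximum(w, eps))) @ V.T

def mat_exp(S):
    """Matrix exponential of a symmetric matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.exp(w)) @ V.T

def fcm_log_euclidean(tensors, c=2, m=2.0, iters=50, seed=0):
    """Fuzzy c-means on (N, 3, 3) diffusion tensors in the log domain.
    Returns memberships (N, c) and SPD cluster-mean tensors (c, 3, 3)."""
    logs = np.stack([mat_log(T) for T in tensors])       # log-Euclidean domain
    rng = np.random.default_rng(seed)
    U = rng.dirichlet(np.ones(c), size=len(tensors))     # initial memberships
    for _ in range(iters):
        W = U ** m
        # cluster means = membership-weighted means of the matrix logs
        centres = np.einsum('nk,nij->kij', W, logs) / W.sum(0)[:, None, None]
        # squared log-Euclidean distance of every tensor to every centre
        d2 = ((logs[:, None] - centres[None]) ** 2).sum(axis=(-2, -1)) + 1e-12
        # standard fuzzy c-means membership update
        U = 1.0 / ((d2[:, :, None] / d2[:, None, :]) ** (1.0 / (m - 1))).sum(-1)
    means = np.array([mat_exp(Ck) for Ck in centres])    # back to SPD space
    return U, means
```

Averaging matrix logs and exponentiating the result guarantees an SPD mean, which is exactly why the abstract works in the log-Euclidean metric rather than averaging tensors entrywise. The returned membership matrix U is the per-voxel uncertainty information the abstract refers to.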


2021, pp. 1-18
Author(s): Ting Gao, Zhengming Ma, Wenxu Gao, Shuyu Liu

This paper makes three contributions. (1) A tensor version of LLE (Locally Linear Embedding), one of the best-known manifold learning algorithms, is derived and presented. Since LLE was proposed, improvements to it have continued to emerge, but these achievements apply only to vector data, not tensor data. The proposed tensor LLE can also serve as a bridge for transferring such improvements from vector data to tensor data. (2) A framework for tensor dimensionality reduction based on the tensor mode product is proposed, in which the mode matrices can be determined according to specific criteria. (3) A novel dimensionality reduction algorithm for tensor data based on LLE and the mode product (LLEMP-TDR) is proposed, in which LLE serves as the criterion for determining the mode matrices. Benefiting from the local character of LLE and the global character of the mode product, LLEMP-TDR preserves both local and global features of high-dimensional tensor data during dimensionality reduction. Experimental results on data clustering and classification tasks demonstrate that our method outperforms five related algorithms recently published in leading academic journals.
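The mode product that the framework in contribution (2) is built on is standard, so it can be shown concretely. Below is a minimal numpy sketch of the mode-n product and of mode-by-mode dimensionality reduction; the mode matrices U1, U2, U3 here are random placeholders standing in for matrices learned by a criterion such as LLE.

```python
import numpy as np

def mode_n_product(X, U, n):
    """Compute X x_n U: multiply every mode-n fibre of tensor X by matrix U,
    via unfold -> matrix multiply -> fold."""
    Xn = np.moveaxis(X, n, 0).reshape(X.shape[n], -1)   # mode-n unfolding
    Y = U @ Xn                                          # act on mode-n fibres
    new_shape = (U.shape[0],) + tuple(np.delete(X.shape, n))
    return np.moveaxis(Y.reshape(new_shape), 0, n)      # fold back

# Mode-wise reduction of a 10 x 8 x 6 tensor down to 5 x 4 x 3, with
# hypothetical mode matrices in place of criterion-determined ones.
X = np.random.randn(10, 8, 6)
U1, U2, U3 = (np.random.randn(5, 10), np.random.randn(4, 8),
              np.random.randn(3, 6))
Y = mode_n_product(mode_n_product(mode_n_product(X, U1, 0), U2, 1), U3, 2)
assert Y.shape == (5, 4, 3)
```

Each mode is shrunk independently by its own matrix, which is what lets the framework plug in different criteria (here, LLE) to determine each mode matrix.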


Author(s): YuNing Qiu, GuoXu Zhou, XinQi Chen, DongPing Zhang, XinHai Zhao, ...

2021, pp. 1-28
Author(s): Kohei Yoshikawa, Shuichi Kawano

We consider the problem of extracting a common structure from multiple tensor data sets. For this purpose, we propose multilinear common component analysis (MCCA), based on Kronecker products of mode-wise covariance matrices. MCCA constructs a common basis, represented by linear combinations of the original variables, that loses little of the information in the multiple tensor data sets. We also develop an estimation algorithm for MCCA that guarantees mode-wise global convergence. Numerical studies demonstrate the effectiveness of MCCA.
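The building block named in the abstract, mode-wise covariance matrices whose Kronecker product models the full covariance, can be sketched directly. The eigenvector step below is a simplified stand-in for MCCA's actual common-basis estimation (which carries the convergence guarantee); the sample shapes and reduced dimensions are assumptions.

```python
import numpy as np

def mode_covariance(samples, n):
    """samples: (N, d1, ..., dK) array of N tensor observations.
    Returns the (dn, dn) mode-n covariance matrix, averaging outer
    products of mode-n unfoldings over the centred samples."""
    X = samples - samples.mean(axis=0)                 # centre the data
    Xn = np.moveaxis(X, n + 1, 1).reshape(X.shape[0], X.shape[n + 1], -1)
    return np.einsum('nia,nja->ij', Xn, Xn) / len(X)

# Simplified stand-in for a common basis: top eigenvectors per mode.
data = np.random.randn(100, 6, 5, 4)                   # 100 toy 6x5x4 samples
bases = []
for mode, d in enumerate([3, 3, 2]):                   # assumed reduced dims
    evals, evecs = np.linalg.eigh(mode_covariance(data, mode))
    bases.append(evecs[:, -d:])                        # top-d mode basis
```

Keeping one small covariance per mode, rather than one covariance over all vectorised entries, is what makes the Kronecker-structured model tractable for tensor data.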


2021, Vol 47 (3), pp. 1-37
Author(s): Srinivas Eswar, Koby Hayashi, Grey Ballard, Ramakrishnan Kannan, Michael A. Matheson, ...

We consider the problem of low-rank approximation of massive, dense, nonnegative tensor data, for example, to discover latent patterns in video and imaging applications. As data sets grow, single workstations hit bottlenecks in both computation time and available memory. We propose a distributed-memory parallel computing solution that handles massive data sets by loading the input data across the memories of multiple nodes and running efficient, scalable parallel algorithms to compute the low-rank approximation. We present a software package called Parallel Low-rank Approximation with Nonnegativity Constraints (PLANC), which implements our solution and allows for extension in terms of data (dense or sparse; matrices or tensors of any order), algorithm (e.g., from multiplicative-update techniques to the alternating direction method of multipliers), and architecture (in this work we exploit GPUs to accelerate the computation). We describe our parallel distributions and algorithms, which carefully avoid unnecessary communication and computation, show how to extend the software with new algorithms and/or constraints, and report efficiency and scalability results for both synthetic and real-world data sets.
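For orientation, here is a minimal single-node numpy sketch of the multiplicative-update rule, one of the algorithm families the abstract mentions, shown for the matrix case. The paper's actual contribution, the distributed-memory data layout and communication-avoiding parallel algorithms, is entirely omitted here; the rank and iteration count are arbitrary.

```python
import numpy as np

def nmf_mu(A, rank, iters=200, eps=1e-9, seed=0):
    """Nonnegative matrix factorization A ~ W @ H via Lee-Seung
    multiplicative updates; nonnegativity is preserved because each
    update multiplies by a ratio of nonnegative quantities."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    W = rng.random((m, rank))
    H = rng.random((rank, n))
    for _ in range(iters):
        H *= (W.T @ A) / (W.T @ W @ H + eps)   # update H
        W *= (A @ H.T) / (W @ H @ H.T + eps)   # update W
    return W, H

# toy usage on random nonnegative data
W, H = nmf_mu(np.random.rand(50, 40), rank=5)
```

In the distributed setting the paper targets, the expensive products such as W.T @ A are the parts computed collectively across nodes, which is where the communication-avoiding design pays off.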

