Tensor Algebra
Recently Published Documents

TOTAL DOCUMENTS: 204 (FIVE YEARS: 40)
H-INDEX: 15 (FIVE YEARS: 2)

Symmetry, 2022, Vol 14 (1), pp. 70
Author(s): Florio M. Ciaglia, Fabio Di Cosmo, Alberto Ibort, Giuseppe Marmo, Luca Schiavone, ...

Since the space of solutions of first-order Hamiltonian field theory carries a presymplectic structure, we describe a class of conserved charges associated with the momentum map determined by a symmetry group of transformations. Gauge theories are handled by means of a symplectic regularization based on an application of Gotay's coisotropic embedding theorem. An analysis of electrodynamics and of the Klein–Gordon theory illustrates the main results of the theory, as well as the emergence of the energy–momentum tensor algebra of conserved currents.
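
As a point of reference for the kind of conserved current involved, the canonical energy–momentum tensor of a real Klein–Gordon field and its on-shell conservation law read, in standard notation (a textbook fact, not a result quoted from the paper):

    \mathcal{L} = \tfrac{1}{2}\,\partial_\mu\phi\,\partial^\mu\phi - \tfrac{1}{2}m^2\phi^2, \qquad
    T^{\mu}{}_{\nu} = \partial^{\mu}\phi\,\partial_{\nu}\phi - \delta^{\mu}_{\nu}\,\mathcal{L}, \qquad
    \partial_\mu T^{\mu}{}_{\nu} = 0.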


2021
Author(s): Liancheng Jia, Zizhang Luo, Liqiang Lu, Yun Liang

2021, pp. 144-159
Author(s): Andrew M. Steane

Tensors and tensor algebra are presented. The concept of a tensor is defined in two ways: as something that yields a scalar from a set of vectors, and as something whose components transform in a given way. The meaning and use of these definitions is expounded carefully, along with examples. The action of the metric and its inverse (index lowering and raising) is derived. The relation between geodesic coordinates and Christoffel symbols is obtained. The difference between partial differentiation and covariant differentiation is explained at length. The tensor density and Hodge dual are briefly introduced.
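
For reference, the operations summarized above take the following standard forms: index lowering and raising with the metric, the component transformation law, and the covariant derivative of a vector (textbook definitions, not excerpts from the chapter):

    v_\mu = g_{\mu\nu}\, v^\nu, \qquad v^\mu = g^{\mu\nu}\, v_\nu, \qquad
    T'^{\mu}{}_{\nu} = \frac{\partial x'^{\mu}}{\partial x^{\alpha}}\,\frac{\partial x^{\beta}}{\partial x'^{\nu}}\, T^{\alpha}{}_{\beta}, \qquad
    \nabla_\mu v^{\nu} = \partial_\mu v^{\nu} + \Gamma^{\nu}{}_{\mu\lambda}\, v^{\lambda}.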


2021
Author(s): Ruiqin Tian, Luanzheng Guo, Jiajia Li, Bin Ren, Gokcen Kestor

2021
Author(s): Sergio Sanchez-Ramirez, Javier Conejero, Francesc Lordan, Anna Queralt, Toni Cortes, ...

2021, Vol 2070 (1), pp. 012161
Author(s): Arthesh Basak, Amirtham Rajagopal, Umesh Basappa

Analysis of tensors in oblique Cartesian coordinate systems always requires the definition of a set of orthogonal covariant basis vectors called the reciprocal basis. This increases the complexity of the analysis and hence makes the method cumbersome. In this work a novel method is presented to effectively carry out the various transformations of tensors to and between oblique coordinate systems without the need to create the covariant reciprocal basis. This simplifies transformations in problems where tensors must be defined in an oblique coordinate system. This work also demonstrates how the analysis of contravariant tensors can be applied to hyperelasticity. Continuum material and damage models can integrate this approach to model anisotropy and nonlinearity in a much simpler way. The accuracy of the models was illustrated by matching their predictions to experimental results. A finite element analysis of the material and damage models based on contravariant tensors was also carried out on a simple geometry with a re-entrant corner.
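
For context, the reciprocal (dual) basis that the authors seek to avoid, and the standard transformation rule for contravariant components under a change to oblique coordinates, are as follows (generic definitions, not the paper's construction):

    \mathbf{g}^{i}\cdot\mathbf{g}_{j} = \delta^{i}_{j}, \qquad
    \bar{T}^{ij} = \frac{\partial \bar{x}^{i}}{\partial x^{k}}\,\frac{\partial \bar{x}^{j}}{\partial x^{l}}\, T^{kl}.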


2021, Vol 5 (OOPSLA), pp. 1-29
Author(s): Rawn Henry, Olivia Hsu, Rohan Yadav, Stephen Chou, Kunle Olukotun, ...

This paper shows how to compile sparse array programming languages. A sparse array programming language is an array programming language that supports element-wise application, reduction, and broadcasting of arbitrary functions over dense and sparse arrays with any fill value. Such a language has great expressive power and can express sparse and dense linear and tensor algebra, functions over images, exclusion and inclusion filters, and even graph algorithms. Our compiler strategy generalizes prior work in the literature on sparse tensor algebra compilation to support any function applied to sparse arrays, instead of only addition and multiplication. To achieve this, we generalize the notion of sparse iteration spaces beyond intersections and unions. These iteration spaces are automatically derived by considering how the algebraic properties annotated onto functions interact with the fill values of the arrays. We then show how to compile these iteration spaces to efficient code. When compared with two widely used Python sparse array packages, our evaluation shows that we match built-in sparse array library features with performance between 1.4× and 53.7× that of PyData/Sparse for user-defined functions, and between 0.98× and 5.53× that of SciPy/Sparse for sparse array slicing. On end-to-end sparse array applications, our technique outperforms PyData/Sparse by 6.58× to 70.3×, and (where applicable) performs between 0.96× and 28.9× that of a dense NumPy implementation. We also implement graph linear algebra kernels in our system, with performance between 0.56× and 3.50× that of the hand-optimized SuiteSparse:GraphBLAS library.
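
A minimal illustration of the core idea, iteration spaces driven by fill values, sketched here with Python dictionaries rather than the paper's compiler or the libraries it is benchmarked against; the function names and the dict-of-coordinates representation are assumptions for illustration only:

# Sketch (illustration only, not the paper's compiler): sparse 1-D arrays
# stored as {coordinate: value} dicts plus an explicit fill value.

def elementwise(f, a, fill_a, b, fill_b):
    """Apply a user-defined binary function f over two sparse arrays."""
    fill_out = f(fill_a, fill_b)          # fill value of the result
    out = {}
    # Coordinates stored in neither input evaluate to f(fill_a, fill_b),
    # i.e. the output fill, so only the union of stored coordinates
    # needs to be visited.
    for i in set(a) | set(b):
        v = f(a.get(i, fill_a), b.get(i, fill_b))
        if v != fill_out:                 # keep the result sparse
            out[i] = v
    return out, fill_out

x = {0: 2.0, 3: 5.0}
y = {3: 4.0, 7: 1.0}

# Multiplication with fill 0: values at coordinates stored in only one input
# come out as 0 and are dropped; a compiler that knows 0 annihilates * would
# narrow the loop to the intersection up front, as the paper describes.
print(elementwise(lambda p, q: p * q, x, 0.0, y, 0.0))  # ({3: 20.0}, 0.0)

# Addition with fill 0 genuinely needs the union (key order may vary).
print(elementwise(lambda p, q: p + q, x, 0.0, y, 0.0))  # ({0: 2.0, 3: 9.0, 7: 1.0}, 0.0)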


Sensors, 2021, Vol 21 (15), pp. 5076
Author(s): Javier Martinez-Roman, Ruben Puche-Panadero, Angel Sapena-Bano, Carla Terron-Santiago, Jordi Burriel-Valencia, ...

Induction machines (IMs) are one of the main sources of mechanical power in many industrial processes, especially squirrel cage IMs (SCIMs), due to their robustness and reliability. Their sudden stoppage due to undetected faults may cause costly production breakdowns. One of the most frequent types of faults is cage faults (bar and end ring segment breakages), especially in motors that directly drive high-inertia loads (such as fans), in motors with frequent starts and stops, and in the case of poorly manufactured cage windings. Continuous monitoring of IMs, integrated into plant-wide condition-based maintenance (CBM) systems, is needed to reduce this risk. Diverse diagnostic techniques have been proposed in the technical literature, either data-based, detecting fault-characteristic perturbations in the data collected from the IM, or model-based, observing the differences between the data collected from the actual IM and from its digital twin model. In both cases, fast and accurate IM models are needed to develop and optimize the fault diagnosis techniques. On the one hand, the finite element approach can provide highly accurate models, but its computational cost and processing requirements are too high for use in on-line fault diagnostic systems. On the other hand, analytical models can be much faster, but they can be very complex in the case of highly asymmetrical machines, such as IMs with multiple cage faults. In this work, a new method is proposed for the analytical modelling of IMs with asymmetrical cage windings using a tensor-based approach, which greatly reduces this complexity by applying routine tensor algebra to obtain the parameters of the faulty IM model from the healthy one. This winding tensor approach is explained theoretically and validated with the diagnosis of a commercial IM with multiple cage faults.
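
In Kron-style machine analysis, the flavor of such a "routine tensor algebra" step is a congruence transformation of the healthy machine's parameter matrices by a connection tensor C mapping the faulty-cage currents onto the healthy-cage currents (a generic illustration of the technique, not the paper's exact formulation):

    i_{healthy} = C\, i_{faulty}, \qquad
    L_{faulty} = C^{\mathsf{T}} L_{healthy}\, C, \qquad
    R_{faulty} = C^{\mathsf{T}} R_{healthy}\, C.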


2021, Vol 118 (28), pp. e2015851118
Author(s): Misha E. Kilmer, Lior Horesh, Haim Avron, Elizabeth Newman

With the advent of machine learning and its overarching pervasiveness, it is imperative to devise ways to represent large datasets efficiently while distilling the intrinsic features necessary for subsequent analysis. The primary workhorse used in data dimensionality reduction and feature extraction has been the matrix singular value decomposition (SVD), which presupposes that data have been arranged in matrix format. A primary goal in this study is to show that high-dimensional datasets are more compressible when treated as tensors (i.e., multiway arrays) and compressed via tensor-SVDs under the tensor-tensor product construct and its generalizations. We begin by proving Eckart–Young optimality results for families of tensor-SVDs under two different truncation strategies. Since such optimality properties can be proven in both matrix and tensor-based algebras, a fundamental question arises: does the tensor construct subsume the matrix construct in terms of representation efficiency? The answer is positive, as proven by showing that a tensor-tensor representation of an equal-dimensional spanning space can be superior to its matrix counterpart. We then use these optimality results to investigate how the compressed representation provided by the truncated tensor-SVD is related, both theoretically and empirically, to its two closest tensor-based analogs, the truncated higher-order SVD and the truncated tensor-train SVD.
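
A minimal numerical sketch of the truncated t-SVD under the standard tensor-tensor product (the Fourier-domain construction), assuming only numpy; this illustrates the construct itself and is not the authors' implementation:

import numpy as np

def truncated_tsvd(A, k):
    """Truncated t-SVD of an n1 x n2 x n3 tensor; requires k <= min(n1, n2)."""
    n1, n2, n3 = A.shape
    Ah = np.fft.fft(A, axis=2)                      # DFT along the third mode
    U = np.zeros((n1, k, n3), dtype=complex)
    S = np.zeros((k, k, n3), dtype=complex)
    V = np.zeros((n2, k, n3), dtype=complex)
    for i in range(n3):                             # matrix SVD per frontal slice
        u, s, vh = np.linalg.svd(Ah[:, :, i], full_matrices=False)
        U[:, :, i] = u[:, :k]
        S[:, :, i] = np.diag(s[:k])
        V[:, :, i] = vh[:k, :].conj().T
    # Back to the spatial domain; real input yields (numerically) real factors.
    to_real = lambda T: np.fft.ifft(T, axis=2).real
    return to_real(U), to_real(S), to_real(V)

A = np.random.rand(6, 5, 4)
U, S, V = truncated_tsvd(A, k=3)
print(U.shape, S.shape, V.shape)   # (6, 3, 4) (3, 3, 4) (5, 3, 4)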

