Identity Matrix
Recently Published Documents





2022 ◽  
Jingni Xiao

Abstract We consider corner scattering for the operator ∇ · γ(x)∇ + k²ρ(x) in ℝ², with γ a positive definite symmetric matrix and ρ a positive scalar function. A corner here refers to one that lies on the boundary of the (compact) support of γ(x) − I or ρ(x) − 1, where I stands for the identity matrix. We assume that γ is a scalar function in a small neighborhood of the corner. We show that any admissible incident field will be scattered by such corners, which are allowed to be concave. Moreover, we provide a brief discussion of the existence of non-scattering waves when γ − I has a jump across the corner. In order to prove the results, we construct a new type of complex geometric optics (CGO) solutions.

2021 ◽  
Vol 37 ◽  
pp. 734-746
Wai Leong Chooi ◽  
Yean Nee Tan

Let $n\geq 2$ and $1<k\leq n$ be integers. Let $S_n(\mathbb{F})$ be the linear space of $n\times n$ symmetric matrices over a field $\mathbb{F}$ of characteristic not two. In this note, we prove that an additive map $\psi:S_n(\mathbb{F})\rightarrow S_n(\mathbb{F})$ satisfies $\psi(A)A=A\psi(A)$ for all rank $k$ matrices $A\in S_n(\mathbb{F})$ if and only if there exist a scalar $\lambda\in \mathbb{F}$ and an additive map $\mu:S_n(\mathbb{F})\rightarrow \mathbb{F}$ such that \[\psi(A)=\lambda A+\mu(A)I_n\] for all $A\in S_n(\mathbb{F})$, where $I_n$ is the identity matrix. Examples showing the indispensability of the assumptions that $k>1$ and that the underlying field $\mathbb{F}$ is of characteristic not two are included.
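The sufficiency direction of this characterization is easy to check numerically: any map of the form $\psi(A)=\lambda A+\mu(A)I_n$ commutes with every $A$. A minimal sketch, assuming $2\times 2$ rational symmetric matrices and taking the trace as one concrete additive map $\mu$ (both illustrative choices, not from the note):

```python
# Sketch of the sufficiency direction: psi(A) = lam*A + mu(A)*I commutes
# with A. Assumptions: 2x2 rational symmetric matrices, mu = trace.
from fractions import Fraction as F

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def psi(A, lam, mu):
    m = mu(A)
    return [[lam * A[i][j] + (m if i == j else 0) for j in range(2)]
            for i in range(2)]

trace = lambda A: A[0][0] + A[1][1]   # one concrete additive map mu

A = [[F(1), F(2)], [F(2), F(5)]]      # a rank-2 symmetric matrix
P = psi(A, F(3), trace)
assert matmul(P, A) == matmul(A, P)   # psi(A)A = A psi(A)
```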

Tobias Boege

Abstract The gaussoid axioms are conditional independence (CI) inference rules which characterize regular Gaussian CI structures over a three-element ground set. It is known that no finite set of inference rules completely describes regular Gaussian CI as the ground set grows. In this article we show that the gaussoid axioms logically imply every inference rule of at most two antecedents which is valid for regular Gaussians over any ground set. The proof is accomplished by exhibiting, for each inclusion-minimal gaussoid extension of at most two CI statements, a regular Gaussian realization. Moreover, we prove that all those gaussoids have rational positive-definite realizations inside every ε-ball around the identity matrix. For the proof we introduce the concept of algebraic Gaussians over arbitrary fields and of positive Gaussians over ordered fields, and obtain the same two-antecedental completeness of the gaussoid axioms for algebraic and positive Gaussians over all fields of characteristic zero as a byproduct.
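The realizability claim near the identity can be illustrated in the smallest interesting case. For a regular Gaussian with covariance Σ, the CI statement (1,2|3) holds iff the almost-principal minor σ₁₂σ₃₃ − σ₁₃σ₂₃ vanishes; the sketch below is a hypothetical 3×3 example (not from the paper) of a positive-definite matrix within an O(ε)-ball of the identity realizing it:

```python
# Hypothetical 3x3 illustration: a positive-definite covariance close to
# the identity realizing the CI statement (1,2|3), which for regular
# Gaussians holds iff the minor s12*s33 - s13*s23 vanishes.
eps = 0.01
S = [[1.0, eps * eps, eps],
     [eps * eps, 1.0, eps],
     [eps, eps, 1.0]]

minor = S[0][1] * S[2][2] - S[0][2] * S[1][2]   # (1,2|3) criterion
assert minor == 0.0

# positive definiteness via Sylvester's criterion (leading minors > 0)
m1 = S[0][0]
m2 = S[0][0] * S[1][1] - S[0][1] * S[1][0]
m3 = (S[0][0] * (S[1][1] * S[2][2] - S[1][2] * S[2][1])
      - S[0][1] * (S[1][0] * S[2][2] - S[1][2] * S[2][0])
      + S[0][2] * (S[1][0] * S[2][1] - S[1][1] * S[2][0]))
assert m1 > 0 and m2 > 0 and m3 > 0
```

Shrinking eps moves the realization into any prescribed ε-ball around the identity while preserving both properties.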

2021 ◽  
Vol 2021 ◽  
pp. 1-13
Jorge Luis Arroyo Neri ◽  
Armando Sánchez-Nungaray ◽  
Mauricio Hernández Marroquin ◽  
Raquiel R. López-Martínez

We introduce the so-called extended Lagrangian symbols, and we prove that the C*-algebra generated by Toeplitz operators with this kind of symbols acting on the homogeneously poly-Fock space of the complex space ℂⁿ is isometrically isomorphic to the C*-algebra of matrix-valued functions on a certain compactification of ℝⁿ obtained by adding a sphere at infinity; moreover, the matrix values at the points at infinity are equal to scalar multiples of the identity matrix.

Gioconda Moscariello ◽  
Giulio Pascale

Abstract We consider linear elliptic systems whose prototype is $$\begin{aligned} \mathrm{div}\, \Lambda \left[\,\exp (-|x|) - \log |x|\,\right] I \, Du = \mathrm{div}\, F + g \quad \text{in } B. \end{aligned}$$ Here $B$ denotes the unit ball of $\mathbb {R}^n$, for $n > 2$, centered at the origin, $I$ is the identity matrix, $F$ is a matrix in $W^{1, 2}(B, \mathbb {R}^{n \times n})$, $g$ is a vector in $L^2(B, \mathbb {R}^n)$ and $\Lambda$ is a positive constant. Our result reads that the gradient of the solution $u \in W_0^{1, 2}(B, \mathbb {R}^n)$ to the Dirichlet problem for system (0.1) is weakly differentiable provided the constant $\Lambda$ is not too large.

2021 ◽  
pp. 183-186
Timothy E. Essington

The chapter “Mathematics Refresher” provides a brief reminder of operations with logarithms, matrices, and calculus, for student reference. It starts off by reviewing the differences between common (base-10) logarithms and natural logarithms and provides some examples of common operations with logarithms. It then introduces derivatives and integrals (although it is never necessary to compute an integral in this book, it is still useful to know what an integral is) and explains the sum rule, the product rule, the quotient rule, and the chain rule. Next, it provides a brief overview of matrices and matrix operations, including matrix dimensions, and addition and multiplication of matrices. It concludes with a discussion of the identity matrix.
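The rules the chapter reviews are easy to spot-check; a small sketch (illustrative values only) of the logarithm identities and of the defining property of the identity matrix:

```python
import math

# Logarithm rules: log(ab) = log a + log b, and change of base
a, b = 3.0, 7.0
assert math.isclose(math.log(a * b), math.log(a) + math.log(b))
assert math.isclose(math.log10(a), math.log(a) / math.log(10))

# The identity matrix I leaves any matrix unchanged under multiplication
I = [[1, 0], [0, 1]]
A = [[2, 5], [1, 3]]
matmul = lambda X, Y: [[sum(X[i][k] * Y[k][j] for k in range(2))
                        for j in range(2)] for i in range(2)]
assert matmul(A, I) == A and matmul(I, A) == A
```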

Mathematics ◽  
2021 ◽  
Vol 9 (18) ◽  
pp. 2226
Arif Mandangan ◽  
Hailiza Kamarulhaili ◽  
Muhammad Asyraf Asbullah

Matrix inversion is one of the most significant operations on a matrix. For any non-singular matrix A∈Zn×n, the inverse of this matrix may contain many non-integer entries, which in general cannot be represented exactly as floating-point numbers. Storing, transmitting, or operating on such an inverse could be cumbersome, especially when the size n is large. The only square integer matrix that is guaranteed to have an integer matrix as its inverse is a unimodular matrix U∈Zn×n. Since det(U)=±1, the inverse U−1∈Zn×n is guaranteed to exist and satisfies UU−1=I, where I∈Zn×n is an identity matrix. In this paper, we propose a new integer matrix G˜∈Zn×n, which is referred to as an almost-unimodular matrix. With det(G˜)≠±1, the inverse of this matrix, G˜−1∈Rn×n, is proven to contain only a single non-integer entry. The almost-unimodular matrix could be useful in various areas, such as lattice-based cryptography, computer graphics, lattice-based computational problems, or any area where the inversion of a large integer matrix is necessary, especially when the determinant of the matrix is required not to equal ±1. Therefore, the almost-unimodular matrix could be an alternative to the unimodular matrix.
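The unimodularity property the paper builds on can be illustrated with a 2×2 example (generic matrices, not the paper's construction of G˜): by the adjugate formula U⁻¹ = adj(U)/det(U), a determinant of ±1 forces every entry of the inverse to be an integer, and any other determinant lets non-integer entries appear.

```python
from fractions import Fraction as F

def inv2(A):
    # Exact 2x2 inverse via the adjugate: A^{-1} = adj(A) / det(A)
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[F(A[1][1], det), F(-A[0][1], det)],
            [F(-A[1][0], det), F(A[0][0], det)]]

U = [[2, 3], [1, 2]]          # det = 1: unimodular, so U^{-1} is integral
Uinv = inv2(U)
assert all(x.denominator == 1 for row in Uinv for x in row)

G = [[2, 3], [1, 3]]          # det = 3: not unimodular
Ginv = inv2(G)
# non-integer entries now appear in the inverse
assert any(x.denominator != 1 for row in Ginv for x in row)
```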

2021 ◽  
Vol 2021 ◽  
pp. 1-22
Sourav Shil ◽  
Hemant Kumar Nashine

In this work, the following system of nonlinear matrix equations is considered: $X_1 + A^*X_1^{-1}A + B^*X_2^{-1}B = I$ and $X_2 + C^*X_2^{-1}C + D^*X_1^{-1}D = I$, where $A$, $B$, $C$, and $D$ are arbitrary $n\times n$ matrices and $I$ is the identity matrix of order $n$. Some conditions for the existence of a positive-definite solution, as well as the convergence analysis of the newly developed algorithm for finding the maximal positive-definite solution and its convergence rate, are discussed. Four examples are also provided herein to support our results.
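A fixed-point iteration for such a system can be sketched in the scalar case $n=1$, where the equations reduce to $x_1 + a^2/x_1 + b^2/x_2 = 1$ and $x_2 + c^2/x_2 + d^2/x_1 = 1$. The coefficients below are hypothetical and this is only the natural Jacobi-style iteration, not the paper's algorithm:

```python
# Scalar (n = 1) sketch of a fixed-point iteration for the system,
# started from the identity; hypothetical coefficients.
a, b, c, d = 0.1, 0.2, 0.15, 0.1
x1, x2 = 1.0, 1.0
for _ in range(100):
    x1, x2 = 1 - a*a/x1 - b*b/x2, 1 - c*c/x2 - d*d/x1

# residuals of both equations at the computed positive solution
r1 = abs(x1 + a*a/x1 + b*b/x2 - 1)
r2 = abs(x2 + c*c/x2 + d*d/x1 - 1)
assert x1 > 0 and x2 > 0 and r1 < 1e-9 and r2 < 1e-9
```

For small coefficients the map is a contraction near the maximal solution, so the iterates converge geometrically; this mirrors the role of the convergence-rate analysis in the paper.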

Entropy ◽  
2021 ◽  
Vol 23 (9) ◽  
pp. 1117
Wenxu Gao ◽  
Zhengming Ma ◽  
Weichao Gan ◽  
Shuyu Liu

Symmetric positive definite (SPD) data have become a hot topic in machine learning. Instead of a linear Euclidean space, SPD data generally lie on a nonlinear Riemannian manifold. To overcome the problems caused by high data dimensionality, dimensionality reduction (DR) is a key subject for SPD data, where bilinear transformation plays a vital role. Because linear operations are not supported in nonlinear spaces such as Riemannian manifolds, directly performing Euclidean DR methods on SPD matrices is inadequate and leads to difficulties in complex models and optimization. An SPD data DR method based on Riemannian manifold tangent spaces and global isometry (RMTSISOM-SPDDR) is proposed in this research. The main contributions are as follows: (1) Any tangent space of a Riemannian manifold is a Hilbert space isomorphic to a Euclidean space. For SPD manifolds in particular, tangent spaces consist of symmetric matrices, which largely preserve the form and attributes of the original SPD data. For this reason, RMTSISOM-SPDDR transfers the bilinear transformation from manifolds to tangent spaces. (2) By the log transformation, original SPD data are mapped to the tangent space at the identity matrix under the affine-invariant Riemannian metric (AIRM). In this way, the geodesic distance between an original datum and the identity matrix equals the Euclidean distance between the corresponding tangent vector and the origin. (3) The bilinear transformation is further determined by an isometric criterion that keeps the geodesic distance on the high-dimensional SPD manifold as close as possible to the Euclidean distance in the tangent space of the low-dimensional SPD manifold. We then use it for the DR of original SPD data. Experiments on five commonly used datasets show that RMTSISOM-SPDDR is superior to five advanced SPD data DR algorithms.
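Contribution (2) can be checked concretely: under AIRM, the geodesic distance from the identity to an SPD matrix X is the Frobenius norm of logm(X), which for a diagonal X reduces to the Euclidean norm of the elementwise logs of its eigenvalues. A sketch with illustrative eigenvalues (not data from the paper):

```python
import math

# Diagonal SPD matrix X = diag(lmbda): the log map at the identity gives
# the tangent vector log(lmbda), and the AIRM geodesic distance
# d(I, X) = ||logm(X)||_F is its Euclidean norm.
lmbda = [0.5, 2.0, 4.0]                  # eigenvalues of a diagonal SPD matrix
tangent = [math.log(v) for v in lmbda]   # log map at the identity
d_airm = math.sqrt(sum(t * t for t in tangent))

# closed form for these eigenvalues: sqrt(ln2^2 + ln2^2 + (2 ln2)^2)
assert math.isclose(d_airm, math.sqrt(6) * math.log(2))
```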

2021 ◽  
Vol 29 (3) ◽  
pp. 163-171
Leonídia Alfredo Guimarães

The purpose of this article is to present a theoretical reading of the nucleus of the ego, aligned with J. L. Moreno's theory of the identity matrix, and to develop in depth Rojas-Bermúdez's concept of the incipient ego. I associate this egoic experience with the relationships established between the baby and its family nucleus, soon after a minimally stable ego can be recognized, in neuropsychological conditions to link itself to the maternal, paternal and fraternal images. According to this reading, and following the Rojas-Bermúdez School, the nucleus of the ego completes its structuring at the age of four, after carrying out the process of synthesis of this nuclear matrix, from which the natural ego emerges and, later, the social ego, as a product of the triangulation and circularization of roles. The image construction technique is the method that supports the theory in question.
