Strictly oblique projectors and their properties

2020
Vol 24 (5)
pp. 122-127
Author(s):
A.M. Vetoshkin
A.A. Shum

In this paper, strictly oblique projectors are defined as projectors that cannot be represented as the sum of two projectors, one of which is a nonzero orthoprojector. A theorem is proved that every projector can be represented in a unique way as the sum of a strictly oblique projector and an orthoprojector. Properties of such projectors are given, for example: if a projector is strictly oblique, then its Hermitian adjoint is also strictly oblique; the rank of a strictly oblique projector is at most n/2, where n is the order of the projector matrix; and the property of being strictly oblique is preserved under unitary similarity. The work continues the authors' previous work, whose main result is a matrix expression for an arbitrary projector in terms of two full-rank matrices A and B whose columns define the range and the null space of this projector. Based on this result, the article shows that the strictly oblique part of any projector P is given by the expression P(P − P⁺P)⁺P, where P⁺ denotes the Moore–Penrose pseudoinverse. The equality P = P(P − P⁺P)⁺P is a criterion for the projector P to be strictly oblique. The decomposition obtained in the work is applied to the practical problem of oblique projection onto a plane.
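
A quick numerical check of this decomposition (a minimal sketch in numpy; it assumes, as the superscripts indicate, that P⁺ is the Moore–Penrose pseudoinverse):

```python
import numpy as np

def strictly_oblique_part(P):
    """Strictly oblique part S = P (P - P^+ P)^+ P of a projector P,
    with ^+ the Moore-Penrose pseudoinverse."""
    Pp = np.linalg.pinv(P)
    return P @ np.linalg.pinv(P - Pp @ P) @ P

# Illustrative projector: a strictly oblique 2x2 block plus a 1x1 orthoprojector.
P = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0]])
S = strictly_oblique_part(P)
Q = P - S                                             # orthoprojector part
assert np.allclose(S @ S, S)                          # S is a projector
assert np.allclose(Q @ Q, Q) and np.allclose(Q, Q.T)  # Q is an orthoprojector
assert np.allclose(S, strictly_oblique_part(S))       # criterion: S is strictly oblique
```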

2013
Vol 392
pp. 660-664
Author(s):
Zhen Yi Ji
Wen Yuan Wu
Yi Li
Yong Feng

The purpose of this paper is to compute singular solutions of the nonlinear equations arising in power flow systems. Based on the approximate null space of the Jacobian matrix, additional equations are introduced into the original system. The Jacobian matrix of the augmented system is of full rank at the initial value, so the algorithm recovers the quadratic convergence of Newton's iteration. The algorithm in this paper yields higher accuracy of the singular solution and fewer iteration steps. In addition, two power flow systems are studied in this paper, and the results show that the new method has high accuracy and efficiency compared with the traditional Newton iteration.
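
The abstract does not spell out which equations are appended; one standard augmentation with exactly this full-rank property is deflation, which adjoins J(x)λ = 0 together with a normalization of the approximate null vector λ. The sketch below applies it to a toy system with a singular root; the system F and the normalization are illustrative, not taken from the paper.

```python
import numpy as np

def gauss_newton(G, JG, z, tol=1e-12, maxit=50):
    """Gauss-Newton for an overdetermined zero-residual system G(z) = 0;
    converges quadratically once the Jacobian has full column rank."""
    for _ in range(maxit):
        step, *_ = np.linalg.lstsq(JG(z), -G(z), rcond=None)
        z = z + step
        if np.linalg.norm(step) < tol:
            break
    return z

# Toy system with root (0, 0) where J = [[1, 2y], [1, 0]] drops rank.
F = lambda x: np.array([x[0] + x[1]**2, x[0]])
J = lambda x: np.array([[1.0, 2 * x[1]], [1.0, 0.0]])

# Deflated system in z = (x, lam): F(x) = 0, J(x) lam = 0, lam_2 = 1.
def G(z):
    x, lam = z[:2], z[2:]
    return np.concatenate([F(x), J(x) @ lam, [lam[1] - 1.0]])

def JG(z):
    x, lam = z[:2], z[2:]
    M = np.zeros((5, 4))
    M[:2, :2] = J(x)
    M[2, 1] = 2 * lam[1]       # d/dy of (lam_1 + 2 y lam_2)
    M[2:4, 2:] = J(x)
    M[4, 3] = 1.0
    return M

z0 = np.array([0.1, 0.1, 0.0, 1.0])   # crude initial guess near the root
print(gauss_newton(G, JG, z0)[:2])    # -> approximately (0, 0)
```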


Author(s):  
RYUICHI ASHINO
RÉMI VAILLANCOURT

In correcting a real linear code y = Bx + w by ℓ1 linear programming, where the encoding matrix B ∈ ℝ^(m×n) has full rank with m ≥ n and the noise w ∈ ℝ^m is a sparse random vector, it is numerically observed that the breakdown points of 50% successes in recovering the input vector x ∈ ℝ^n from the corrupted oversampled measurement y lie on the Donoho–Tanner curves when reflected in their midpoint. The curves of 50% successes in solving underdetermined systems, z = Aw, by ℓ1 linear programming with uniformly distributed compressed sensing matrices A ∈ ℝ^(d×m), where d < m and w is a sparse vector, have been numerically observed and recently shown to coincide with the Donoho–Tanner curves for normally distributed compressed sensing matrices A derived from geometric combinatorics. When n ≤ m/2, correcting a linear code is faster if done directly by ℓ1 linear programming. However, when n > m/2, to save computing time, this problem can be transformed into an underdetermined compressed sensing problem, Aw = z := Ay, for the syndrome z, by a full-rank matrix A ∈ ℝ^(d×m), d = m − n, such that AB = 0. For this purpose, to have equivalently high mean breakdown points by ℓ1 linear programming, one can use uniformly distributed random matrices A ∈ ℝ^((m−n)×m) and matrices B ∈ ℝ^(m×n) with orthonormal columns spanning the null space of A. Two exceptional cases have been found. Numerical results are collected in figures and tables.
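
A minimal sketch of this syndrome reformulation and of ℓ1 decoding as a linear program (dimensions and sparsity are illustrative; for simplicity A is built here with orthonormal rows via scipy's null_space rather than drawn uniformly as in the paper):

```python
import numpy as np
from scipy.linalg import null_space
from scipy.optimize import linprog

rng = np.random.default_rng(0)
m, n, k = 60, 40, 5                     # oversampled code with k-sparse noise
B = rng.standard_normal((m, n))         # full-rank encoding matrix (w.h.p.)
x = rng.standard_normal(n)
w = np.zeros(m)
w[rng.choice(m, k, replace=False)] = rng.standard_normal(k)
y = B @ x + w                           # corrupted measurement

A = null_space(B.T).T                   # (m-n) x m with A B = 0
z = A @ y                               # syndrome: A y = A w

# min ||w||_1 s.t. A w = z, as an LP in (w, t) with -t <= w <= t
c = np.concatenate([np.zeros(m), np.ones(m)])
I = np.eye(m)
A_ub = np.block([[I, -I], [-I, -I]])    # w - t <= 0 and -w - t <= 0
A_eq = np.hstack([A, np.zeros((m - n, m))])
res = linprog(c, A_ub=A_ub, b_ub=np.zeros(2 * m),
              A_eq=A_eq, b_eq=z, bounds=(None, None))
w_hat = res.x[:m]
x_hat, *_ = np.linalg.lstsq(B, y - w_hat, rcond=None)
print(np.allclose(x_hat, x, atol=1e-6)) # exact recovery when w is sparse enough
```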


10.37236/3873
2014
Vol 21 (2)
Author(s):
M.H. Ahmadi
N. Akhlaghinia
G.B. Khosrovshahi
Ch. Maysoori

For integers $0\leq t\leq k\leq v-t$, let $X$ be a $v$-set, and let $W_{tk}(v)$ be a ${v \choose t}\times{v \choose k}$ inclusion matrix where rows and columns are indexed by $t$-subsets and $k$-subsets of $X$, respectively, and for row $T$ and column $K$, $W_{tk}(v)(T,K)=1$ if $T\subseteq K$ and zero otherwise. Since $W_{tk}(v)$ is a full-rank matrix, by reordering the columns of $W_{tk}(v)$ we can write $W_{tk}(v) = (S|N)$, where $N$ denotes a set of independent columns of $W_{tk}(v)$. In this paper, first by classifying $t$-subsets and $k$-subsets, we present a new decomposition of $W_{tk}(v)$. Then by employing this decomposition, the Leibniz Triangle, and a known right inverse of $W_{tk}(v)$, we construct the inverse of $N$ and, consequently, a special basis for the null space (known as the standard basis) of $W_{tk}(v)$.
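
A direct construction of $W_{tk}(v)$ that confirms the full-rank claim numerically (a small sketch; the lexicographic ordering of subsets is this snippet's choice, not the paper's):

```python
import numpy as np
from itertools import combinations

def inclusion_matrix(v, t, k):
    """W_tk(v): rows indexed by t-subsets, columns by k-subsets of a v-set;
    entry (T, K) is 1 iff T is a subset of K."""
    ts = list(combinations(range(v), t))
    ks = list(combinations(range(v), k))
    return np.array([[int(set(T) <= set(K)) for K in ks] for T in ts])

W = inclusion_matrix(7, 2, 3)        # 21 x 35, with 0 <= t <= k <= v - t
print(np.linalg.matrix_rank(W))      # 21 = C(7, 2): full row rank
```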


2020
Vol 2020 (10)
pp. 310-1-310-7
Author(s):
Khalid Omer
Luca Caucci
Meredith Kupinski

This work reports on convolutional neural network (CNN) performance on an image texture classification task as a function of linear image processing and the number of training images. The detection performance of single- and multi-layer CNNs (sCNN/mCNN) is compared to that of optimal observers. Performance is quantified by the area under the receiver operating characteristic (ROC) curve, also known as the AUC: AUC = 1.0 for perfect detection and AUC = 0.5 for guessing. The Ideal Observer (IO) maximizes the AUC but is prohibitive in practice because it depends on high-dimensional image likelihoods. IO performance is invariant to any full-rank, invertible linear image processing. This work demonstrates the existence of full-rank, invertible linear transforms that can degrade both sCNN and mCNN performance even in the limit of large quantities of training data. A subsequent invertible linear transform changes the images' correlation structure again and can improve the AUC. Stationary textures sampled from zero-mean, unequal-covariance Gaussian distributions allow closed-form analytic expressions for the IO and for optimal linear compression. Linear compression is a mitigation technique for high-dimension, low-sample-size (HDLSS) applications. By definition, compression strictly decreases or maintains IO detection performance. For small quantities of training data, linear image compression prior to the sCNN architecture can increase the AUC from 0.56 to 0.93. The results indicate an optimal compression ratio for CNNs based on task difficulty, compression method, and the number of training images.
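
A minimal sketch of the IO in this Gaussian setting and of its invariance to invertible linear processing: for zero-mean classes the log-likelihood ratio reduces (up to constants) to the quadratic form g^T (K0^-1 - K1^-1) g, which is unchanged when g is replaced by Lg and each covariance K by L K L^T. The dimensions and covariances below are illustrative, not from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
p, N = 16, 2000                         # flattened texture size, samples per class

# Two zero-mean Gaussian texture classes with unequal covariance
A0 = rng.standard_normal((p, p))
A1 = rng.standard_normal((p, p))
K0 = A0 @ A0.T + p * np.eye(p)
K1 = A1 @ A1.T + p * np.eye(p)
g0 = rng.multivariate_normal(np.zeros(p), K0, N)
g1 = rng.multivariate_normal(np.zeros(p), K1, N)

def io_statistic(g, K0, K1):
    """IO test statistic: quadratic form g^T (K0^-1 - K1^-1) g, row-wise."""
    M = np.linalg.inv(K0) - np.linalg.inv(K1)
    return np.einsum('ij,jk,ik->i', g, M, g)

def auc(s0, s1):
    """AUC as the Mann-Whitney U statistic: P(score_1 > score_0)."""
    return (s0[:, None] < s1[None, :]).mean()

t0 = io_statistic(g0, K0, K1)
t1 = io_statistic(g1, K0, K1)
print(auc(t0, t1))                      # IO AUC on the raw textures

# Invertible linear processing leaves the IO statistic (hence the AUC) unchanged
L = rng.standard_normal((p, p))         # invertible with probability 1
t0L = io_statistic(g0 @ L.T, L @ K0 @ L.T, L @ K1 @ L.T)
print(np.allclose(t0L, t0))             # identical statistic => identical AUC
```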


1983
Author(s):
P. E. Gill
W. Murray
M. A. Saunders
M. H. Wright
