Rank of a Matrix
Recently Published Documents


TOTAL DOCUMENTS: 73 (five years: 14)
H-INDEX: 12 (five years: 1)

2022, Vol. 23 (1), pp. 1-35
Author(s): Anuj Dawar, Gregory Wilsenach

Fixed-point logic with rank (FPR) is an extension of fixed-point logic with counting (FPC) with operators for computing the rank of a matrix over a finite field. The expressive power of FPR properly extends that of FPC and is contained in P, but it is not known whether that containment is proper. We give a circuit characterization of FPR in terms of families of symmetric circuits with rank gates, along the lines of the characterization of FPC given by Anderson and Dawar in 2017. This requires the development of a broad framework of circuits in which the individual gates compute functions that are not symmetric (i.e., not invariant under all permutations of their inputs). This framework also necessitates the development of novel techniques to prove the equivalence of circuits and logic. Both the framework and the techniques are of greater generality than the main result requires.
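The rank operator at the heart of FPR computes matrix rank over a finite field, which can differ from rank over the rationals. The sketch below is our own illustration, not from the paper (the function name is ours): Gaussian elimination over GF(p), with a matrix whose rank drops when viewed modulo 2.

```python
# Rank of an integer matrix over GF(p), p prime, by Gaussian elimination.
def rank_mod_p(rows, p):
    m = [[x % p for x in row] for row in rows]
    rank, n_cols = 0, len(m[0]) if m else 0
    for col in range(n_cols):
        # Find a pivot row with a nonzero entry in this column.
        pivot = next((r for r in range(rank, len(m)) if m[r][col]), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        inv = pow(m[rank][col], p - 2, p)        # Fermat inverse, p prime
        m[rank] = [(v * inv) % p for v in m[rank]]
        for r in range(len(m)):
            if r != rank and m[r][col]:
                f = m[r][col]
                m[r] = [(a - f * b) % p for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# Over GF(2) the two rows coincide, so the rank drops to 1, even though
# the same matrix has rank 2 over the rationals (and over GF(5)).
print(rank_mod_p([[1, 1], [3, 1]], 2))  # 1
print(rank_mod_p([[1, 1], [3, 1]], 5))  # 2
```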


Author(s): Iskander Aliev, Gennadiy Averkov, Jesús A. De Loera, Timm Oertel

We study the sparsity of the solutions to systems of linear Diophantine equations with and without non-negativity constraints. The sparsity of a solution vector is the number of its nonzero entries, referred to as the $\ell_0$-norm of the vector. Our main results are new improved bounds on the minimal $\ell_0$-norm of solutions to systems $A\boldsymbol{x}=\boldsymbol{b}$, where $A\in \mathbb{Z}^{m\times n}$, $\boldsymbol{b}\in \mathbb{Z}^m$ and $\boldsymbol{x}$ is either a general integer vector (lattice case) or a non-negative integer vector (semigroup case). In certain cases, we give polynomial-time algorithms for computing solutions with $\ell_0$-norm satisfying the obtained bounds. We show that our bounds are tight. Our bounds can be seen as functions naturally generalizing the rank of a matrix over $\mathbb{R}$ to other subdomains such as $\mathbb{Z}$. We show that these new rank-like functions are all NP-hard to compute in general, but polynomial-time computable for a fixed number of variables.
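As a toy illustration of the semigroup case (our own sketch, not the authors' algorithm; the function name and the box bound are our choices), the brute force below finds a sparsest non-negative integer solution of $A\boldsymbol{x}=\boldsymbol{b}$ inside a bounded box by enumerating supports in order of increasing $\ell_0$-norm. The paper's bounds and polynomial-time algorithms are far more refined.

```python
from itertools import combinations, product
import numpy as np

def sparsest_nonneg_solution(A, b, box=10):
    """Non-negative integer x with Ax = b minimizing the number of
    nonzero entries, found by searching supports of increasing size."""
    A, b = np.asarray(A), np.asarray(b)
    n = A.shape[1]
    for k in range(n + 1):                       # candidate l0-norms 0..n
        for support in combinations(range(n), k):
            for vals in product(range(1, box + 1), repeat=k):
                x = np.zeros(n, dtype=int)
                x[list(support)] = vals
                if np.array_equal(A @ x, b):
                    return x                     # first hit is sparsest
    return None

A = [[1, 2, 3], [0, 1, 1]]
b = [3, 1]
print(sparsest_nonneg_solution(A, b))  # [0 0 1]: a solution of l0-norm 1
```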


Resonance, 2021, Vol. 26 (4), pp. 575-578
Author(s): S. Kesavan

Author(s): Maria-Laura Torrente, Pierpaolo Uberti

In the financial setting, the concepts of connectedness and diversification have been introduced and developed in the contexts of systemic risk and portfolio theory, respectively. In this paper we propose a theoretical approach that brings to light the relation between connectedness and diversification. Starting from the respective axiomatic definitions, we prove that a class of proper measures of connectedness satisfies, after a suitable functional transformation, the axiomatic requirements for a measure of diversification. The core idea of the paper is that connectedness and diversification are so deeply related that it is possible to pass from one concept to the other. To exploit this correspondence, we introduce a function, depending on the classical notion of the rank of a matrix, that transforms a suitable proper measure of connectedness into a measure of diversification. We point out general properties of the proposed transformation function and apply it to a selection of measures of connectedness, such as the well-known Variance Inflation Factor.
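As a concrete example of one connectedness measure named above, the sketch below computes the Variance Inflation Factor of each column from an ordinary least-squares regression. The data are synthetic and the function name is ours; the paper's transformation into a diversification measure is not reproduced here.

```python
import numpy as np

def vif(X, j):
    """VIF of column j of X: 1 / (1 - R^2), where R^2 comes from
    regressing X[:, j] on the remaining columns plus an intercept."""
    y = X[:, j]
    Z = np.column_stack([np.ones(len(X)), np.delete(X, j, axis=1)])
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    resid = y - Z @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))
    return 1.0 / (1.0 - r2)

rng = np.random.default_rng(0)
x1 = rng.normal(size=200)
x2 = x1 + 0.1 * rng.normal(size=200)   # nearly collinear with x1
x3 = rng.normal(size=200)
X = np.column_stack([x1, x2, x3])
# Large VIFs flag the connected pair (x1, x2); x3 stays near 1.
print([round(vif(X, j), 1) for j in range(3)])
```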


2021, Vol. 65 (1), pp. 11-14
Author(s): András Recski, Áron Vékássy

The genericity assumption, which supposes that the nonzero parameters of a system are algebraically independent transcendentals over the field of rationals, often helps in the mathematical modelling of linear systems. Without this condition, nonzero terms in the expansion of a determinant can cancel each other, decreasing the rank of a matrix. In this note we show that under some circumstances an increase is also possible. This counterintuitive phenomenon is explained using some tools from matroid theory and is illustrated by a classical network of Carlin and Youla.
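A toy example of the familiar direction, the rank decrease (our illustration using sympy; the note's counterintuitive rank increase rests on matroid-theoretic arguments not reproduced here): with algebraically independent symbolic entries the determinant cannot vanish, but a non-generic numerical substitution makes its expansion terms cancel and the rank drop.

```python
import sympy as sp

a, b, c, d = sp.symbols('a b c d')
M = sp.Matrix([[a, b], [c, d]])
print(M.det())    # a*d - b*c: nonzero when the symbols are independent
print(M.rank())   # 2, the generic rank

# Non-generic choice with a*d = b*c: the two expansion terms cancel.
print(M.subs({a: 2, b: 4, c: 1, d: 2}).rank())  # 1
```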


2021, Vol. 5 (3), pp. 526-551
Author(s): António Pedro Goucha, João Gouveia

Geophysics, 2020, pp. 1-60
Author(s): Ouyang Shao, Lingling Wang, Xiangyun Hu, Zhidan Long

Because there are many similar geological structures underground, seismic profiles contain an abundance of self-repeating patterns. We can therefore divide a seismic profile into groups of blocks with similar seismic structure; the matrix formed by stacking the similar blocks in each group should be of low rank. Hence, we can recast the seismic denoising problem as a series of low-rank matrix approximation (LRMA) problems. LRMA-based models commonly adopt the nuclear norm as a convex surrogate for the rank of a matrix. However, nuclear norm minimization (NNM) shrinks the different rank components equally and may introduce biases in practice. The recently introduced truncated nuclear norm (TNN), given by the sum of the smallest singular values, has been shown to approximate the rank of a matrix more accurately. Based on this, we propose a novel denoising method using truncated nuclear norm minimization (TNNM). The objective function of this method consists of two terms: a Frobenius-norm data-fidelity term and a truncated nuclear norm regularization term. We present an efficient two-step iterative algorithm to minimize this objective. We then apply the proposed TNNM algorithm to groups of blocks with similar seismic structure and aggregate all resulting denoised blocks to obtain the denoised seismic data, updating the denoised results during each iteration to gradually attenuate heavy noise. Numerical experiments demonstrate that, compared with FX-Decon, curvelet-based, and NNM-based methods, TNNM not only attenuates noise more effectively, even when the SNR is as low as -10 dB and the seismic data have complex structures, but also accurately preserves the seismic structures without inducing Gibbs artifacts.
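A simplified stand-in for the truncated-nuclear-norm idea (our sketch; the paper's two-step iterative solver and its block-matching stage are not reproduced): soft-threshold only the singular values beyond the first r, leaving the r largest, assumed to carry the signal, untouched.

```python
import numpy as np

def truncated_svt(M, r, tau):
    """Soft-threshold the singular values of M past index r by tau;
    the r leading singular values are kept exactly."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s[r:] = np.maximum(s[r:] - tau, 0.0)   # truncated shrinkage
    return (U * s) @ Vt

# Synthetic 'group of similar blocks': a rank-2 matrix plus heavy noise.
rng = np.random.default_rng(1)
L = rng.normal(size=(60, 2)) @ rng.normal(size=(2, 40))   # low rank
noisy = L + rng.normal(size=L.shape)
denoised = truncated_svt(noisy, r=2, tau=5.0)
# Error before and after: the truncated-SVT estimate should be closer to L.
print(np.linalg.norm(noisy - L), np.linalg.norm(denoised - L))
```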


Biometrika, 2020
Author(s): Wei Luo, Bing Li

In many dimension-reduction problems in statistics and machine learning, such as principal component analysis, canonical correlation analysis, independent component analysis, and sufficient dimension reduction, it is important to determine the dimension of the reduced predictor, which often amounts to estimating the rank of a matrix. This problem is called order determination. In this article, we propose a novel and highly effective order-determination method based on the idea of predictor augmentation. We show that if the predictor is augmented by an artificially generated random vector, then the parts of the eigenvectors of the matrix induced by the augmentation display a pattern that reveals information about the order to be determined. This information, when combined with the information provided by the eigenvalues of the matrix, greatly enhances the accuracy of order determination.
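A rough sketch of the augmentation idea in the PCA setting (our reading of the abstract, not the authors' estimator; all names and sizes below are our choices): append independent random coordinates to the predictor, then inspect how much mass each covariance eigenvector places on the augmented part. The leading signal directions place almost none.

```python
import numpy as np

rng = np.random.default_rng(2)
n, d, r, d_aug = 500, 6, 2, 3
signal = rng.normal(size=(n, r)) @ rng.normal(size=(r, d))  # rank-r signal
X = signal + 0.1 * rng.normal(size=(n, d))
X_aug = np.column_stack([X, rng.normal(size=(n, d_aug))])   # augmentation

cov = np.cov(X_aug, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)
order = np.argsort(eigvals)[::-1]                           # descending

# Mass of each eigenvector on the augmented coordinates: typically near
# zero for the first r (signal) directions, then it jumps, hinting at r.
aug_mass = np.linalg.norm(eigvecs[d:, :], axis=0)[order]
print(np.round(aug_mass, 3))
```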

