inverse covariance matrix
Recently Published Documents

TOTAL DOCUMENTS: 55 (FIVE YEARS: 13)
H-INDEX: 11 (FIVE YEARS: 1)

Author(s):
Nikita K. Zvonarev

The problem of weighted finite-rank time-series approximation is considered for signal estimation in the “signal plus noise” model, where the inverse covariance matrix of the noise is (2p+1)-diagonal. The choice of weights that improves the estimation accuracy is examined. An effective method for the numerical search of the weights is constructed and its correctness is proved. Numerical simulations are performed to study the improvement in estimation accuracy for several noise models.
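As a hedged illustration of the unweighted baseline that this kind of work builds on (not the paper's weighted method), the sketch below performs finite-rank approximation of a noisy series by alternating rank-r truncation of the trajectory (Hankel) matrix with anti-diagonal averaging, i.e. a basic Cadzow iteration. The weighted variant studied in the paper would replace the plain SVD step with a weighted projection; all names and parameters here are illustrative.

```python
import numpy as np

def hankel(x, L):
    """Trajectory matrix H[i, j] = x[i + j] of window length L."""
    K = len(x) - L + 1
    return np.array([x[i:i + K] for i in range(L)])

def diag_avg(H):
    """Hankelization: average each anti-diagonal back into a series."""
    L, K = H.shape
    N = L + K - 1
    x, cnt = np.zeros(N), np.zeros(N)
    for i in range(L):
        for j in range(K):
            x[i + j] += H[i, j]
            cnt[i + j] += 1
    return x / cnt

def cadzow(x, L, r, iters=50):
    """Alternate rank-r SVD truncation and Hankel projection (unweighted)."""
    y = x.copy()
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(hankel(y, L), full_matrices=False)
        y = diag_avg(U[:, :r] * s[:r] @ Vt[:r])
    return y

t = np.arange(100)
signal = np.cos(2 * np.pi * t / 10)        # a sinusoid has rank 2
noisy = signal + 0.3 * np.random.default_rng(0).standard_normal(100)
est = cadzow(noisy, L=50, r=2)
```

The estimate `est` should be markedly closer to `signal` than the raw `noisy` series; weighting the projection by the noise precision, as the paper investigates, further improves accuracy for correlated noise.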


Author(s):
Dzung T. Phan
Matt Menickelly

The sparse inverse covariance matrix is used to model conditional dependencies between variables in a graphical model when fitting a multivariate Gaussian distribution. Estimating the matrix from data is well known to be computationally expensive for large-scale problems. Sparsity is employed to handle noise in the data and to promote interpretability of the learned model. Although the use of a convex ℓ1 regularizer to encourage sparsity is common practice, the combinatorial ℓ0 penalty often has more favorable statistical properties. In this paper, we constrain sparsity directly by specifying a maximum allowable number of nonzeros, in other words, by imposing an ℓ0 constraint. We introduce an efficient approximate Newton algorithm using warm starts for solving the nonconvex ℓ0-constrained inverse covariance learning problem. Numerical experiments on standard data sets show that the performance of the proposed algorithm is competitive with state-of-the-art methods. Summary of Contribution: The inverse covariance estimation problem underpins many domains, including statistics, operations research, and machine learning. We propose a scalable optimization algorithm for solving the nonconvex ℓ0-constrained problem.
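The ℓ0 constraint can be made concrete with a small sketch (this is only the projection primitive, not the paper's approximate Newton algorithm): projecting a symmetric matrix onto the constraint set keeps the diagonal plus the k largest-magnitude off-diagonal pairs. Applying that projection to a naively inverted sample covariance from a chain-structured ground truth already recovers mostly nearest-neighbour edges; the setup and names below are illustrative.

```python
import numpy as np

def project_l0(theta, k):
    """Keep the diagonal plus the k largest-magnitude off-diagonal pairs."""
    iu = np.triu_indices(theta.shape[0], 1)
    keep = np.argsort(np.abs(theta[iu]))[::-1][:k]
    out = np.diag(np.diag(theta))
    for idx in keep:
        i, j = iu[0][idx], iu[1][idx]
        out[i, j] = out[j, i] = theta[i, j]
    return out

rng = np.random.default_rng(1)
p = 8
# Ground-truth precision: tridiagonal, i.e. a chain graphical model.
true_theta = np.eye(p) + 0.4 * (np.eye(p, k=1) + np.eye(p, k=-1))
X = rng.multivariate_normal(np.zeros(p), np.linalg.inv(true_theta), size=2000)
theta_hat = project_l0(np.linalg.inv(np.cov(X, rowvar=False)), k=p - 1)
```

With enough samples, the k = p − 1 retained entries concentrate on the true chain edges; the paper's contribution is solving the full constrained likelihood problem efficiently rather than applying this projection once.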


2020
Vol 39 (13)
pp. 1473–1502
Author(s):
Timothy D Barfoot
James R Forbes
David J Yoon

We present a Gaussian variational inference (GVI) technique that can be applied to large-scale nonlinear batch state estimation problems. The main contribution is to show how to fit both the mean and (inverse) covariance of a Gaussian to the posterior efficiently, by exploiting factorization of the joint likelihood of the state and data, as is common in practical problems. This is different from maximum a posteriori (MAP) estimation, which seeks the point estimate for the state that maximizes the posterior (i.e., the mode). The proposed exactly sparse Gaussian variational inference (ESGVI) technique stores the inverse covariance matrix, which is typically very sparse (e.g., block-tridiagonal for classic state estimation). We show that the only blocks of the (dense) covariance matrix that are required during the calculations correspond to the non-zero blocks of the inverse covariance matrix, and further show how to calculate these blocks efficiently in the general GVI problem. ESGVI operates iteratively, and while we can use analytical derivatives at each iteration, Gaussian cubature can be substituted, thereby producing an efficient derivative-free batch formulation. ESGVI simplifies to precisely the Rauch–Tung–Striebel (RTS) smoother in the batch linear estimation case, but goes beyond the ‘extended’ RTS smoother in the nonlinear case because it finds the best-fit Gaussian (mean and covariance), not the MAP point estimate. We demonstrate the technique on controlled simulation problems and a batch nonlinear simultaneous localization and mapping problem with an experimental dataset.
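A toy illustration of the sparsity ESGVI exploits (a hypothetical 1-D random-walk smoothing problem, not the paper's experiments): the batch posterior precision assembled from motion and measurement terms is tridiagonal, while its inverse, the full covariance, is dense. The paper's observation is that only the covariance blocks matching the nonzero precision blocks are ever needed.

```python
import numpy as np

# Hypothetical smoothing problem: x_k = x_{k-1} + w_k, y_k = x_k + v_k,
# with motion noise variance q and measurement noise variance r.
N, q, r = 6, 0.5, 1.0
prec = np.zeros((N, N))
for k in range(N):
    prec[k, k] += 1.0 / r              # each measurement touches one state
for k in range(1, N):
    prec[k - 1, k - 1] += 1.0 / q      # each motion factor couples
    prec[k, k] += 1.0 / q              # neighbouring states only,
    prec[k - 1, k] -= 1.0 / q          # so the batch precision is
    prec[k, k - 1] -= 1.0 / q          # (block-)tridiagonal
cov = np.linalg.inv(prec)              # the covariance, by contrast, is dense
```

In large problems one never forms `cov` explicitly; the diagonal and first off-diagonal blocks, the only ones ESGVI's updates require, can be extracted from a sparse Cholesky factorization of `prec`.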


2020
Vol 18 (04)
pp. 2050023
Author(s):
O. Chatrabgoun
A. Hosseinian-Far
A. Daneshkhah

Many biological and biomedical research areas, such as drug design, require analyzing Gene Regulatory Networks (GRNs) to provide clear insight into and understanding of cellular processes in live cells. Under a normality assumption for the genes, GRNs can be constructed by assessing the nonzero elements of the inverse covariance matrix. Nevertheless, such techniques cannot deal with the non-normality, multi-modality, and heavy-tailedness commonly seen in current massive genetic data. To relax this restrictive assumption, one can apply a copula function, which is a multivariate cumulative distribution function with uniform marginal distributions. However, since the dependency structures of different pairs of genes in a multivariate problem are very different, a regular multivariate copula does not allow the construction of an appropriate model. The solution to this problem is Pair-Copula Constructions (PCCs), which decompose a multivariate density into a cascade of bivariate copulas and therefore assign a different bivariate copula function to each local term. In this paper, we construct the inverse covariance matrix based on PCCs when the normality assumption is moderately or severely violated, capturing a wide range of distributional features and complex dependency structures. To learn the non-Gaussian model for the considered GRN from non-Gaussian genomic data, we apply a modified version of the copula-based PC algorithm in which the normality assumption on the marginal densities is dropped. This paper also uses the Dynamic Time Warping (DTW) algorithm to determine whether a time-delay relation exists between two genes. Breast cancer is one of the most common diseases in the world, and GRN analysis of its subtypes is considerably important, since revealing the differences in the GRNs of these subtypes can lead to new therapies and drugs. The findings of our research are used to construct GRNs with high performance for various subtypes of breast cancer, rather than simply using previous models.
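A much simpler copula-based relaxation of normality than the paper's vine/PCC construction, shown here only as a hedged baseline sketch: transform each gene's margin to Gaussian scores via ranks (the Gaussian-copula or "nonparanormal" idea), then read edges off the inverse covariance of the scores. The data-generating setup and the threshold are illustrative assumptions.

```python
import numpy as np
from statistics import NormalDist

def normal_scores(X):
    """Rank-transform each column, then map ranks through the normal inverse CDF."""
    n = X.shape[0]
    ranks = np.argsort(np.argsort(X, axis=0), axis=0) + 1
    inv_cdf = np.vectorize(NormalDist().inv_cdf)
    return inv_cdf(ranks / (n + 1))

rng = np.random.default_rng(0)
p, n = 4, 3000
theta = np.eye(p) + 0.45 * (np.eye(p, k=1) + np.eye(p, k=-1))  # chain graph
Z = rng.multivariate_normal(np.zeros(p), np.linalg.inv(theta), size=n)
X = np.exp(Z)                      # heavy-tailed, non-Gaussian margins
prec = np.linalg.inv(np.cov(normal_scores(X), rowvar=False))
edges = np.abs(prec) > 0.2         # crude edge threshold (hypothetical)
```

Because the rank transform is invariant to the monotone distortion `np.exp`, the chain structure survives despite the non-Gaussian margins; PCCs go further by also allowing non-Gaussian pairwise dependence, which this sketch cannot capture.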


2019
Vol 55 (8)
pp. 2700–2731
Author(s):
Fangquan Shi
Lianjie Shu
Aijun Yang
Fangyi He

In portfolio risk minimization, the inverse covariance matrix of returns is often unknown and must be estimated in practice. Yet the eigenvalues of the sample covariance matrix are often overdispersed, leading to severe estimation errors in the inverse covariance matrix. To deal with this problem, we propose a general framework that shrinks the sample eigenvalues based on the Schatten norm. The proposed framework has the advantage of being computationally efficient as well as structure-free. Comparative studies show that our approach behaves reasonably well in terms of reducing out-of-sample portfolio risk and turnover.
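The overdispersion problem and the benefit of eigenvalue shrinkage can be seen in a few lines. The sketch below uses simple linear shrinkage of the sample eigenvalues toward their mean before inverting, a cruder rule than the paper's Schatten-norm framework, under an assumed identity-covariance ground truth; the shrinkage weight `alpha` is an illustrative choice.

```python
import numpy as np

def shrunk_precision(S, alpha):
    """Invert after linearly shrinking sample eigenvalues toward their mean."""
    w, V = np.linalg.eigh(S)
    w = (1 - alpha) * w + alpha * w.mean()
    return V @ np.diag(1.0 / w) @ V.T

rng = np.random.default_rng(0)
p, n = 30, 60                           # p/n = 0.5: strong overdispersion
X = rng.standard_normal((n, p))         # true covariance is the identity
S = np.cov(X, rowvar=False)
plain = np.linalg.inv(S)                # small sample eigenvalues blow up
shrunk = shrunk_precision(S, alpha=0.5)
```

Inverting amplifies the undersized sample eigenvalues, so the plain inverse is far from the true precision (the identity here), while the shrunk estimator is substantially closer in Frobenius norm.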

