matrix normal distribution
Recently Published Documents


TOTAL DOCUMENTS: 8 (FIVE YEARS: 1)

H-INDEX: 3 (FIVE YEARS: 1)

2018 ◽  
Author(s):  
Jiadong Ji ◽  
Yong He ◽  
Lei Xie

Abstract

Motivation: Brain connectivity analysis has attracted tremendous attention and is at the forefront of neuroscience research. Brain functional connectivity reveals the synchronization of brain systems through correlations in neurophysiological measures of brain activity. Growing evidence suggests that the brain connectivity network is altered in the presence of numerous neurological disorders, so differential brain network analysis may provide new insights into disease pathologies. For the matrix-valued data in brain connectivity analysis, existing graphical model estimation methods assume a vector normal distribution, which in essence requires the columns of the matrix data to be independent. Since this is generally not true, these methods have limited applicability. Among the few approaches to graphical model estimation under a matrix normal distribution, none tackles the estimation of differential graphs across different populations. This motivates us to consider a differential network for matrix-variate data to detect alterations in brain connectivity.

Results: The primary interest is to detect spatial locations where the connectivity, in terms of the spatial partial correlation, differs across the two groups. To detect such alterations, we propose a Matrix-Variate Differential Network (MVDN) model. MVDN assumes that the matrix-variate data follow a matrix normal distribution. We exploit the D-trace loss function with a Lasso-type penalty to directly estimate the spatial differential partial correlation matrix, with the temporal information fully exploited. We propose an ADMM algorithm for the Lasso-penalized D-trace loss optimization problem and investigate the theoretical properties of the estimator. We show that, under mild and regular conditions, the proposed method identifies all differential edges accurately with probability tending to 1 in the high-dimensional setting where the dimensions p and q of the matrix-valued data and the sample size n are all allowed to go to infinity. Simulation studies demonstrate that MVDN provides more accurate differential network estimation than other state-of-the-art methods. We apply MVDN to an electroencephalography (EEG) dataset consisting of 77 alcoholic individuals and 45 controls. The hub genes and differential interaction patterns identified are consistent with existing experimental findings.

Contact: [email protected]

Supplementary information: Supplementary data are available online.
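The MVDN model rests on the matrix normal distribution MN(M, U, V), where U captures the row (spatial) covariance and V the column (temporal) covariance. The following minimal sketch is not the authors' MVDN implementation; it only illustrates the assumed data model by drawing a matrix-variate sample via the standard factorization X = M + A Z Bᵀ with U = AAᵀ and V = BBᵀ. The helper name sample_matrix_normal and all numbers are hypothetical.

```python
import numpy as np

def sample_matrix_normal(M, U, V, rng=None):
    """Draw one p x q matrix X ~ MN(M, U, V).

    M : p x q mean matrix
    U : p x p row (spatial) covariance
    V : q x q column (temporal) covariance
    Uses X = M + A @ Z @ B.T with U = A A', V = B B' and Z having i.i.d. N(0, 1) entries.
    """
    rng = np.random.default_rng() if rng is None else rng
    p, q = M.shape
    A = np.linalg.cholesky(U)          # U = A A'
    B = np.linalg.cholesky(V)          # V = B B'
    Z = rng.standard_normal((p, q))
    return M + A @ Z @ B.T

# Toy example: p spatial locations observed over q time points
p, q = 5, 20
U = 0.5 * np.eye(p) + 0.5              # compound-symmetry spatial covariance
V = np.eye(q)                          # white temporal covariance
X = sample_matrix_normal(np.zeros((p, q)), U, V)
print(X.shape)                         # (5, 20)
```

In this model the column independence assumed by vector-normal graphical methods corresponds to the special case V = I, which is exactly the restriction MVDN avoids.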


Author(s):  
Osval A. Montesinos-López ◽  
Abelardo Montesinos-López ◽  
José Cricelio Montesinos-López ◽  
José Crossa ◽  
Francisco Javier Luna-Vázquez ◽  
...  

2018 ◽  
Vol 33 ◽  
pp. 24-40 ◽  
Author(s):  
Jolanta Pielaszkiewicz ◽  
Dietrich Von Rosen ◽  
Martin Singull

The joint distribution of standardized traces of $\frac{1}{n}XX'$ and of $\left(\frac{1}{n}XX'\right)^2$, where the matrix $X:p\times n$ follows a matrix normal distribution, is proved to be asymptotically multivariate normal under the condition $\frac{n}{p}\overset{n,p\rightarrow\infty}{\longrightarrow}c>0$. The proof relies on calculations of asymptotic moments and cumulants obtained using a recursive formula derived in Pielaszkiewicz et al. (2015). The covariance matrix of the underlying vector is explicitly given as a function of $n$ and $p$.
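A quick Monte Carlo sketch can make the statement concrete. The code below is not taken from the paper and uses the simplifying assumption that X has i.i.d. N(0, 1) entries, i.e. MN(0, I_p, I_n); it computes the two traces over many replicates with p/n held fixed and empirically standardizes them, so their joint empirical distribution can be inspected for approximate Gaussianity.

```python
import numpy as np

rng = np.random.default_rng(0)
p, n, reps = 100, 200, 2000            # n/p = c = 2 held fixed
t1 = np.empty(reps)
t2 = np.empty(reps)
for r in range(reps):
    X = rng.standard_normal((p, n))    # special case of a matrix normal sample
    S = (X @ X.T) / n                  # (1/n) X X'
    t1[r] = np.trace(S)
    t2[r] = np.trace(S @ S)            # trace of ((1/n) X X')^2

T = np.column_stack([t1, t2])
T_std = (T - T.mean(axis=0)) / T.std(axis=0)   # empirical standardization
print("empirical correlation:\n", np.corrcoef(T_std.T))
print("empirical skewness (near 0 for a Gaussian):", (T_std**3).mean(axis=0))
```

The exact centering, scaling, and covariance used in the paper are functions of n and p; the sketch only standardizes empirically to visualize the limiting behaviour.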


Author(s):  
Akihiro Yabe ◽  
Shinji Ito ◽  
Ryohei Fujimaki

The goal of price optimization is to maximize total revenue by adjusting the prices of products, on the basis of predicted sales numbers that are functions of the pricing strategy. Recent advances in demand modeling using machine learning raise a new challenge in price optimization, namely how to manage statistical errors in estimation. In this paper, we show that uncertainty in recently proposed prescriptive price optimization frameworks can be represented by a matrix normal distribution. For this particular form of uncertainty, we propose novel robust quadratic programming algorithms for conservative lower-bound maximization. We offer an asymptotic probabilistic guarantee of the conservativeness of our formulation. Our experiments on both artificial and actual price data show that our robust price optimization allows users to determine the best risk-return trade-offs and to explore safe, profitable pricing strategies.
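The sketch below is not the authors' robust quadratic programming algorithm; it only illustrates, under assumed toy numbers, how matrix-normal uncertainty over the coefficients of a linear demand model q(price) = a + B·price can be propagated to a conservative (lower-quantile) revenue score for comparing candidate price vectors. The helper conservative_revenue and every parameter value are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(1)
d = 3                                   # number of products
a_hat = np.array([50.0, 40.0, 30.0])    # estimated demand intercepts (made up)
B_hat = -2.0 * np.eye(d)                # estimated price sensitivities (made up)
M = np.column_stack([a_hat, B_hat])     # d x (d+1) coefficient mean
U = 0.5 * np.eye(d)                     # row covariance (across products)
V = 0.1 * np.eye(d + 1)                 # column covariance (across regressors)
A_chol, B_chol = np.linalg.cholesky(U), np.linalg.cholesky(V)

def conservative_revenue(price, n_samples=5000, alpha=0.05):
    """Empirical alpha-quantile of revenue under matrix-normal coefficient draws."""
    revenues = np.empty(n_samples)
    for s in range(n_samples):
        Z = rng.standard_normal((d, d + 1))
        theta = M + A_chol @ Z @ B_chol.T        # one draw of [a, B] ~ MN(M, U, V)
        a, B = theta[:, 0], theta[:, 1:]
        q = a + B @ price                        # predicted sales at this price vector
        revenues[s] = price @ q                  # revenue = price' * sales
    return np.quantile(revenues, alpha)

for price in (np.array([10.0, 10.0, 10.0]), np.array([12.0, 9.0, 8.0])):
    print(price, "->", round(conservative_revenue(price), 1))
```

The paper's contribution is a robust quadratic program with an asymptotic guarantee rather than this sampling heuristic; the sketch is only meant to show why a matrix normal is a natural carrier of the coefficient uncertainty.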

