A general method for extending a lossy network to a lossless network using the matrix decomposition

Author(s):  
Hao Chi Zhang ◽  
Pei Hang He ◽  
Yu Luo ◽  
Tie Jun Cui


1991 ◽  
Vol 124 (1) ◽  
pp. K11-K14 ◽  
Author(s):  
C. Dos Santos Lourenço ◽  
M. Cilense ◽  
W. Garlipp

Author(s):  
David Barber

Finding clusters of well-connected nodes in a graph is a problem common to many domains, including social networks, the Internet and bioinformatics. From a computational viewpoint, finding these clusters or graph communities is a difficult problem. We use a clique matrix decomposition based on a statistical description that encourages clusters to be well connected and few in number. The formal intractability of inferring the clusters is addressed using a variational approximation inspired by mean-field theories in statistical mechanics. Clique matrices also play a natural role in parametrizing positive definite matrices under zero constraints on elements of the matrix. We show that clique matrices can parametrize all positive definite matrices restricted according to a decomposable graph and form a structured factor analysis approximation in the non-decomposable case. Extensions to conjugate Bayesian covariance priors and more general non-Gaussian independence models are briefly discussed.
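The parametrization idea can be illustrated with a small example: if Z is a clique (incidence) matrix of a graph, then ZZ^T is positive semidefinite and vanishes exactly at the non-edges, so positive clique weights plus a diagonal term give positive definite matrices that respect the zero pattern. The NumPy sketch below only illustrates that property; the graph, cliques, and weights are made up, and it is not the paper's variational clustering algorithm.

```python
import numpy as np

# Toy graph on 4 nodes with edges {0-1, 0-2, 1-2, 2-3}:
# its maximal cliques are {0, 1, 2} and {2, 3}.
# Clique matrix Z: rows = nodes, columns = cliques.
Z = np.array([[1, 0],
              [1, 0],
              [1, 1],
              [0, 1]], dtype=float)

# Random positive clique weights plus a small diagonal "ridge" give a
# positive definite matrix whose zero pattern matches the graph.
rng = np.random.default_rng(0)
W = np.diag(rng.uniform(0.5, 2.0, size=Z.shape[1]))
S = Z @ W @ Z.T + 0.1 * np.eye(Z.shape[0])

print(np.round(S, 2))
# S[0, 3] and S[1, 3] are zero: nodes 0 and 3 (and 1 and 3) share no clique,
# i.e. they are not adjacent in the graph.
print(np.all(np.linalg.eigvalsh(S) > 0))  # positive definite
```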


2017 ◽  
Vol 2017 ◽  
pp. 1-8 ◽  
Author(s):  
Xiao-min Chen ◽  
Jun-xu Su ◽  
Qiu-ming Zhu ◽  
Xu-jun Hu ◽  
Zhu Fang

The aim of this paper is to investigate the design of a linear precoding scheme for a multiple-input multiple-output two-way relay system with imperfect channel state information. The design is cast as an optimization problem over the precoding matrices, derived under a maximum power constraint at the relay station and the minimum mean square error criterion. Taking into account channel feedback delay at both ends of the channel and the channel estimation errors, we propose a matrix decomposition scheme and a joint iterative scheme to minimize the average sum mean square error. The matrix decomposition method yields a closed form for the relay matrix, while the joint iterative algorithm optimizes the precoding matrix and the processing matrix. Numerical simulation results show that the matrix decomposition scheme effectively reduces the system bit error rate (BER) and that the joint iterative scheme achieves the best BER performance against existing methods.
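The abstract does not give the closed forms, so purely as a generic illustration of the minimum mean square error criterion the design is built on, here is a sketch of the standard linear MMSE receive filter for a MIMO channel y = Hx + n; the antenna counts, noise level, and variable names are assumptions and this is not the paper's relay/precoder solution.

```python
import numpy as np

rng = np.random.default_rng(1)
nt, nr = 4, 4            # assumed transmit/receive antenna counts
sigma2 = 0.1             # assumed noise variance

H = (rng.standard_normal((nr, nt)) + 1j * rng.standard_normal((nr, nt))) / np.sqrt(2)
x = rng.choice([-1.0, 1.0], size=nt)                  # toy BPSK symbols, unit power
n = np.sqrt(sigma2 / 2) * (rng.standard_normal(nr) + 1j * rng.standard_normal(nr))
y = H @ x + n

# Linear MMSE filter for E[xx^H] = I:  W = H^H (H H^H + sigma^2 I)^{-1}
W = H.conj().T @ np.linalg.inv(H @ H.conj().T + sigma2 * np.eye(nr))
x_hat = W @ y
print(np.sign(x_hat.real), x)  # detected vs. transmitted symbols
```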


2019 ◽  
Vol 35 (22) ◽  
pp. 4748-4753 ◽  
Author(s):  
Ahmad Borzou ◽  
Razie Yousefi ◽  
Rovshan G Sadygov

Abstract

Motivation: High throughput technologies are widely employed in modern biomedical research. They yield measurements of a large number of biomolecules in a single experiment. The number of experiments is usually much smaller than the number of measurements in each experiment. The simultaneous measurements of biomolecules provide a basis for a comprehensive, systems view for describing relevant biological processes. Often it is necessary to determine correlations between the data matrices under different conditions or pathways. However, the techniques for analyzing data with a low number of samples for possible correlations within or between conditions are still in development. Earlier developed correlative measures, such as the RV coefficient, use the trace of the product of data matrices as the most relevant characteristic. However, a recent study has shown that the RV coefficient consistently overestimates the correlations in the case of low sample numbers. To correct for this bias, it was suggested to discard the diagonal elements of the outer products of each data matrix. In this work, a principled approach based on matrix decomposition generates three trace-independent parts for every matrix. These components are unique, and they are used to determine different aspects of correlations between the original datasets.

Results: Simulations show that the decomposition results in the removal of the high correlation bias and the dependence on the sample number intrinsic to the RV coefficient. We then use the correlations to analyze a real proteomics dataset.

Availability and implementation: The Python code can be downloaded from http://dynamic-proteome.utmb.edu/MatrixCorrelations.aspx.

Supplementary information: Supplementary data are available at Bioinformatics online.
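For readers unfamiliar with the RV coefficient and the diagonal-discarding correction mentioned above, the NumPy sketch below computes both on toy data. It is only a reference implementation of those two standard definitions, not the trace-independent decomposition introduced in the paper (see the linked code for that); the sample and feature counts are illustrative.

```python
import numpy as np

def rv_coefficient(X, Y):
    """Classical RV coefficient between two samples-by-features matrices."""
    Sx, Sy = X @ X.T, Y @ Y.T
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

def rv2_coefficient(X, Y):
    """Modified RV: discard the diagonals of the outer products, which
    reduces the upward bias at small sample sizes."""
    Sx, Sy = X @ X.T, Y @ Y.T
    np.fill_diagonal(Sx, 0.0)
    np.fill_diagonal(Sy, 0.0)
    return np.trace(Sx @ Sy) / np.sqrt(np.trace(Sx @ Sx) * np.trace(Sy @ Sy))

rng = np.random.default_rng(0)
X = rng.standard_normal((10, 500))   # few samples, many features
Y = rng.standard_normal((10, 500))   # independent of X
# Inflated RV (close to 1) vs. RV2 (close to 0) on independent data:
print(rv_coefficient(X, Y), rv2_coefficient(X, Y))
```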


2014 ◽  
Vol 889-890 ◽  
pp. 1730-1736 ◽  
Author(s):  
Nong Zheng

Matrix compression algorithms offer an effective way to secure user-information authentication on a network education platform. Sensitive user information is transferred over an open channel, and it can be authenticated by means of matrix compression/decompression and matrix decomposition/reconstruction algorithms. The client divides the user information into a number of rectangular blocks and performs random capture and compression on them, generating a corresponding key and ciphertext. The server takes the corresponding data from its receive queue, extracts it according to the key rules, and then reconstructs the information at the corresponding positions. In this way the authenticity and integrity of the user identity information can be verified.
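The abstract gives only an outline, so the toy sketch below merely illustrates the general idea of splitting data into blocks, permuting them under a shared key, and checking integrity on the receiving side. The block size, keyed permutation, and hash check are assumptions chosen for illustration; this is not the paper's compression/decomposition algorithm.

```python
import hashlib
import numpy as np

def encode(message: bytes, block_size: int, seed: int):
    """Split the message into fixed-size blocks and shuffle them with a keyed
    permutation; the permutation order plays the role of the 'key rules'."""
    padded = message + b"\x00" * (-len(message) % block_size)
    blocks = [padded[i:i + block_size] for i in range(0, len(padded), block_size)]
    order = np.random.default_rng(seed).permutation(len(blocks))
    cipher = [blocks[i] for i in order]
    return cipher, order, hashlib.sha256(message).hexdigest()

def decode(cipher, order, digest, original_len):
    """Reassemble blocks in their original positions and verify integrity."""
    blocks = [None] * len(order)
    for pos, block in zip(order, cipher):
        blocks[pos] = block
    message = b"".join(blocks)[:original_len]
    return message, hashlib.sha256(message).hexdigest() == digest

msg = b"user-id:42;role:student"
cipher, key, digest = encode(msg, block_size=4, seed=7)
recovered, ok = decode(cipher, key, digest, len(msg))
print(recovered, ok)  # original message, True
```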


2013 ◽  
Vol 2013 ◽  
pp. 1-8 ◽  
Author(s):  
Jengnan Tzeng

The singular value decomposition (SVD) is a fundamental matrix decomposition in linear algebra. It is widely applied in many modern techniques, for example, high-dimensional data visualization, dimension reduction, data mining, latent semantic analysis, and so forth. Although the SVD plays an essential role in these fields, its apparent weakness is its cubic computational cost, which makes many modern applications infeasible, especially when the data is huge and still growing. It is therefore imperative to develop a fast SVD method. When the rank of a matrix is much smaller than its size, some fast SVD approaches already exist. In this paper, we focus on this case, with the additional condition that the data is too large to be stored in matrix form. We demonstrate that the resulting fast SVD is sufficiently accurate and, most importantly, can be computed immediately. Using this fast method, many previously infeasible SVD-based techniques become viable.
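The paper's construction for data too large to hold as a matrix is not reproduced here, but as a point of reference for the low-rank setting it targets, the sketch below shows a standard randomized SVD (range finding followed by a small exact SVD) in NumPy; the target rank and oversampling amount are illustrative choices, not the paper's method.

```python
import numpy as np

def randomized_svd(A, k, oversample=10, seed=0):
    """Approximate rank-k SVD of A via random projection: O(mnk) instead of O(mn^2)."""
    rng = np.random.default_rng(seed)
    m, n = A.shape
    Q, _ = np.linalg.qr(A @ rng.standard_normal((n, k + oversample)))  # range finder
    U_small, s, Vt = np.linalg.svd(Q.T @ A, full_matrices=False)       # small SVD
    return (Q @ U_small)[:, :k], s[:k], Vt[:k, :]

rng = np.random.default_rng(1)
A = rng.standard_normal((2000, 50)) @ rng.standard_normal((50, 1000))  # rank <= 50
U, s, Vt = randomized_svd(A, k=50)
print(np.allclose(U * s @ Vt, A, atol=1e-6))  # accurate low-rank reconstruction
```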


2009 ◽  
Vol 19 (03) ◽  
pp. 231-246 ◽  
Author(s):  
XIAODONG WU ◽  
XIN DOU ◽  
JOHN E. BAYOUTH ◽  
JOHN M. BUATTI

In this paper, we study an interesting matrix decomposition problem that seeks to decompose a "complicated" matrix into two "simpler" matrices while minimizing the sum of the horizontal complexity of the first sub-matrix and the vertical complexity of the second sub-matrix. The matrix decomposition problem is crucial for improving the "step-and-shoot" delivery efficiency in Intensity-Modulated Radiation Therapy, which aims to deliver a highly conformal radiation dose to a target tumor while sparing the surrounding normal tissues. Our algorithm is based on a non-trivial graph construction scheme, which enables us to formulate the decomposition problem as computing a minimum s-t cut in a 3-D geometric multi-pillar graph. Experiments on randomly generated intensity map matrices and on clinical data demonstrated the efficacy of our algorithm.
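The paper's multi-pillar graph construction is specific to intensity-map matrices and is not reproduced here. Purely as a reminder of the computational primitive the decomposition reduces to, the snippet below computes a minimum s-t cut on a toy capacitated graph with networkx (an assumed dependency; the graph and capacities are made up).

```python
import networkx as nx

# Toy directed graph with edge capacities; "s" and "t" stand in for the
# source and sink of the (much larger) multi-pillar construction.
G = nx.DiGraph()
G.add_edge("s", "a", capacity=3.0)
G.add_edge("s", "b", capacity=2.0)
G.add_edge("a", "b", capacity=1.0)
G.add_edge("a", "t", capacity=2.0)
G.add_edge("b", "t", capacity=3.0)

cut_value, (reachable, non_reachable) = nx.minimum_cut(G, "s", "t")
print(cut_value)                 # 5.0: total capacity crossing the cut
print(reachable, non_reachable)  # node partition induced by the minimum cut
```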

