graph norm
Recently Published Documents


TOTAL DOCUMENTS: 12 (FIVE YEARS: 6)
H-INDEX: 2 (FIVE YEARS: 1)

2021
Author(s): Chen Mo, Zhenyao Ye, Kathryn Hatch, Yuan Zhang, Qiong Wu, ...

Abstract
Background: Fine-mapping is an analytical step for the causal prioritization of polymorphic variants in a trait-associated genomic region identified by genome-wide association studies (GWAS). Prioritizing causal variants can be challenging because of the linkage disequilibrium (LD) patterns among the hundreds to thousands of polymorphisms associated with a trait. We therefore propose an ℓ0 graph norm shrinkage algorithm that disentangles LD patterns into dense LD blocks of highly correlated single nucleotide polymorphisms (SNPs), and we incorporate this dense LD structure into fine-mapping. In graph-theoretic terms, a block is "dense" when it is composed mainly of highly correlated SNPs. We demonstrate the new fine-mapping method on a large UK Biobank (UKBB) sample related to nicotine addiction and use simulations to evaluate and compare its performance with existing fine-mapping algorithms.
Results: Our results suggest that polymorphic variance in both neighboring and distant variants can be consolidated into dense blocks of highly correlated loci. Dense-LD outperformed comparable fine-mapping methods, with increased sensitivity and a reduced false-positive error rate for causal variant selection. Applied to the UKBB sample, the method replicated loci reported in previous studies and suggested a strong association with nicotine addiction.
Conclusion: The dense LD block structure can guide fine-mapping and accurately determine a parsimonious set of potential causal variants. Our approach is computationally efficient and allows fine-mapping of thousands of polymorphisms.
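The abstract leaves the algorithmic details to the paper itself. As a rough illustrative sketch of the dense-block idea only (plain correlation thresholding plus a density check; the actual Dense-LD method uses ℓ0 graph norm shrinkage rather than thresholding, and the parameters r_thresh and density_thresh here are hypothetical):

import numpy as np

def dense_ld_blocks(R, r_thresh=0.8, density_thresh=0.6):
    """Toy stand-in for dense LD block extraction.

    R : (p, p) SNP correlation (LD) matrix.
    Returns index arrays, each a candidate block of highly
    correlated SNPs (not required to be contiguous).
    """
    p = R.shape[0]
    adj = np.abs(R) >= r_thresh            # thresholded LD graph
    np.fill_diagonal(adj, False)
    blocks, unvisited = [], set(range(p))
    while unvisited:
        seed = unvisited.pop()
        stack, comp = [seed], {seed}
        while stack:                       # depth-first component search
            i = stack.pop()
            for j in map(int, np.flatnonzero(adj[i])):
                if j in unvisited:
                    unvisited.remove(j)
                    comp.add(j)
                    stack.append(j)
        comp = np.array(sorted(comp))
        if comp.size > 1:
            sub = adj[np.ix_(comp, comp)]
            density = sub.sum() / (comp.size * (comp.size - 1))
            if density >= density_thresh:  # keep only "dense" blocks
                blocks.append(comp)
    return blocks

# Toy LD matrix: two tightly correlated pairs with weak cross-correlation.
R = np.array([[1.0, 0.9, 0.1, 0.0],
              [0.9, 1.0, 0.0, 0.1],
              [0.1, 0.0, 1.0, 0.85],
              [0.0, 0.1, 0.85, 1.0]])
print(dense_ld_blocks(R))  # [array([0, 1]), array([2, 3])]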


Author(s): Qiong Wu, Tianzhou Ma, Qingzhi Liu, Donald K. Milton, Yuan Zhang, ...

Abstract
Motivation: The analysis of gene co-expression networks (GCNs) is critical for examining gene-gene interactions and learning the underlying complex yet highly organized gene regulatory mechanisms. Numerous clustering methods have been developed to detect communities of co-expressed genes in large networks. The commonly assumed independent community structure, however, can be oversimplified and may not adequately characterize complex biological processes.
Results: We develop a new computational package to extract interconnected communities from a gene co-expression network. We consider a pair of communities to be interconnected if a subset of genes from one community is correlated with a subset of genes from the other. The interconnected community structure is more flexible and provides a better fit to the empirical co-expression matrix. To overcome the computational challenges, we develop efficient algorithms that leverage an advanced graph norm shrinkage approach. We validate the method and demonstrate its advantages in extensive simulation studies. We then apply our interconnected community detection method to RNA-seq data from The Cancer Genome Atlas (TCGA) Acute Myeloid Leukemia (AML) study and identify essential interacting biological pathways related to the immune evasion mechanism of tumor cells.
Availability: The software is available on GitHub: https://github.com/qwu1221/ICN and Figshare: https://figshare.com/articles/software/ICN-package/13229093
Supplementary information: Supplementary data are available at Bioinformatics online.
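The ICN package linked above implements the full extraction procedure. The "interconnected" notion in the abstract can be illustrated with a crude heuristic (the function and thresholds below are hypothetical illustrations, not ICN's API):

import numpy as np

def interconnected(R, comm_a, comm_b, r_thresh=0.5, frac=0.2):
    """Crude check of the 'interconnected communities' definition:
    A and B are interconnected if a sufficiently large subset of
    genes in each is strongly correlated with genes in the other.

    R              : (p, p) gene co-expression (correlation) matrix
    comm_a, comm_b : index arrays for two disjoint communities
    """
    cross = np.abs(R[np.ix_(comm_a, comm_b)]) >= r_thresh
    frac_a = cross.any(axis=1).mean()   # fraction of A linked to B
    frac_b = cross.any(axis=0).mean()   # fraction of B linked to A
    return frac_a >= frac and frac_b >= frac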


2020
Author(s): Chen Mo, Zhenyao Ye, Kathryn Hatch, Yuan Zhang, Qiong Wu, ...

Abstract: Fine-mapping is an analytical step for the causal prioritization of polymorphic variants in a trait-associated genomic region identified by genome-wide association studies (GWAS). Prioritizing causal variants can be challenging because of the linkage disequilibrium (LD) patterns among the hundreds to thousands of polymorphisms associated with a trait. We propose a novel ℓ0 graph norm shrinkage algorithm to select causal variants from dense LD blocks consisting of highly correlated SNPs that need not be proximal or contiguous. We extract dense LD blocks and perform regression shrinkage to calculate a prioritization score that selects a parsimonious set of causal variants. Our approach is computationally efficient and allows fine-mapping of thousands of polymorphisms. We demonstrate its application on a large UK Biobank (UKBB) sample related to nicotine addiction. Our results suggest that polymorphic variance in both neighboring and distant variants can be consolidated into dense blocks of highly correlated loci. Simulations were used to evaluate and compare the performance of our method and existing fine-mapping algorithms; our method outperformed comparable methods, with increased sensitivity and a reduced false-positive error rate for causal variant selection. Applied to the smoking severity trait in the UKBB sample, the method replicated previously reported loci and suggested a causal prioritization of genetic effects on nicotine dependence.
Author summary: Disentangling the complex linkage disequilibrium (LD) pattern and selecting the underlying causal variants has been a long-standing challenge for genetic fine-mapping. We find that the LD pattern within GWAS loci is intrinsically organized in delicate graph topological structures, which can be effectively learned by our novel ℓ0 graph norm shrinkage algorithm. The extracted LD graph structure is critical for causal variant selection. Moreover, our method is less constrained by the width of GWAS loci and can therefore fine-map a massive number of correlated SNPs.
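The abstract does not specify the shrinkage form used for the prioritization score; one illustrative way to turn shrunken effect sizes within a dense block into a score (ridge shrinkage here is an assumption for the sketch, not necessarily the authors' estimator):

import numpy as np

def prioritization_scores(G, y, lam=1.0):
    """Hypothetical per-SNP prioritization score for one dense LD block:
    ridge-shrunken effect sizes, normalized to sum to one.

    G : (n, k) genotype matrix for the k SNPs in the block
    y : (n,) phenotype vector
    """
    Gc = G - G.mean(axis=0)                 # center genotypes
    yc = y - y.mean()                       # center phenotype
    k = G.shape[1]
    # Closed-form ridge estimate: (G'G + lam*I)^{-1} G'y
    beta = np.linalg.solve(Gc.T @ Gc + lam * np.eye(k), Gc.T @ yc)
    score = np.abs(beta)
    return score / score.sum()

SNPs with the largest normalized scores within a block would then be retained as the parsimonious candidate set.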


Author(s): Alexander Rieder, Francisco-Javier Sayas, Jens Markus Melenk

Abstract: We consider the approximation of an abstract evolution problem with an inhomogeneous side constraint using A-stable Runge–Kutta methods. We derive a priori estimates in norms other than that of the underlying Banach space; most notably, we derive estimates in the graph norm of the generator. These results are used to study convolution quadrature based discretizations of a wave scattering problem and a heat conduction problem.
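For reference, the graph norm in question is the standard one: for the generator $A$ with domain $D(A)$ in a Banach space $X$, $\|u\|_{D(A)} := \|u\|_X + \|Au\|_X$ for $u \in D(A)$. Estimates in this norm control not only the solution but also its image under the generator, which is what makes them useful for the convolution quadrature analysis mentioned above.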


2020
Vol 12 (5), pp. 67
Author(s): Guoguang Lin, Shuangyan Li

The existence of inertial manifolds for higher-order Kirchhoff-type equations with strong damping terms is studied. The Hadamard graph norm transform method is used to establish the existence of inertial manifolds for this class of equations under suitable spectral gap conditions.
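For context, a sketch of the standard definition (not specific to this paper): an inertial manifold for a semiflow $S(t)$ on a phase space $H$ is a finite-dimensional Lipschitz manifold $\mathcal{M} \subset H$ that is positively invariant, $S(t)\mathcal{M} \subset \mathcal{M}$ for $t \ge 0$, and attracts every trajectory exponentially, $\mathrm{dist}(S(t)u_0, \mathcal{M}) \le C e^{-\kappa t}$. Graph-transform existence proofs of the Hadamard type typically require a spectral gap condition: consecutive eigenvalues $\lambda_N < \lambda_{N+1}$ of the linear part must be separated by a gap that is large relative to the Lipschitz constant of the nonlinearity.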


2018
Vol 36 (4), pp. 1073-1087
Author(s): Rachid El Ayadi, Mohamed Ouzahra

Abstract: In this paper, we deal with the distributed bilinear system $ \frac{d z(t)}{d t}= A z(t) + v(t)Bz(t), $ where A is the infinitesimal generator of a semigroup of contractions on a real Hilbert space H. The linear operator B is assumed to be bounded with respect to the graph norm of A. We then give sufficient conditions for weak and strong stabilization. Illustrative examples are provided.
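A standard energy argument in this setting (a sketch of the common approach in this literature; the paper's precise feedback law may differ) uses the quadratic feedback $v(t) = -\langle Bz(t), z(t)\rangle$. Since A is dissipative ($\langle Az, z\rangle \le 0$ for a contraction semigroup), one obtains $\frac{d}{dt}\|z(t)\|^2 = 2\langle Az, z\rangle + 2v(t)\langle Bz, z\rangle \le -2\langle Bz(t), z(t)\rangle^2 \le 0$, so the state energy is nonincreasing; weak or strong convergence of $z(t)$ to zero then follows under additional observability-type assumptions on B.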


2004
Author(s): Marcelo R.A.C. Tredinnick, Marcelo Lopes de Oliveira e Souza

2004
Vol 77 (1), pp. 73-90
Author(s): Khalid Latrach, J. Martin Paoli

Abstract: The purpose of this paper is to provide a detailed treatment of the behaviour of essential spectra of closed densely defined linear operators subjected to additive perturbations not necessarily belonging to any ideal of the algebra of bounded linear operators. If A denotes a closed densely defined linear operator on a Banach space X, our approach consists principally in considering the class of A-closable operators which, regarded as operators in ℒ(X_A, X) (where X_A denotes the domain of A equipped with the graph norm), are contained in the set of A-Fredholm perturbations (see Definition 1.2). Our results are used to describe the essential spectra of singular neutron transport equations in bounded geometries.
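For orientation (a standard notion, hedged because the paper's Definition 1.2 of A-Fredholm perturbations may differ in detail): an operator $F \in \mathcal{L}(X)$ is called a Fredholm perturbation if $U + F$ is Fredholm for every Fredholm operator $U \in \mathcal{L}(X)$. Here the graph norm on $X_A$ is $\|x\|_A = \|x\| + \|Ax\|$, so ℒ(X_A, X) consists precisely of the operators that are bounded relative to A.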


Author(s): Richard G. Hills, Ian H. Leslie

Our increased dependence on mathematical models for engineering design, coupled with our decreased dependence on experimental observation, leads to the obvious question: how do we know that our models are valid representations of physical processes? We test models by comparing model predictions with experimental observations. As our models become more complex (i.e., multiphysics models), our ability to test them over the range of possible applications becomes more difficult. This difficulty is compounded by the uncertainty that is invariably present in the experimental data used to test the model, the uncertainties in the parameters incorporated into the model, and the uncertainties in the model structure itself. When significant uncertainties of these types are present, evaluating model validity through graphical comparisons of model predictions to experimental observations becomes very subjective.

Here we consider the impact of uncertainty and the role of uncertainty analysis in model validation. We focus on uncertainty in the model predictions due to parameter uncertainty, and on experimental uncertainty due to measurement noise. We show that characterizing these uncertainties allows us to use a meaningful metric for model testing that is less subjective than the traditional "view graph norm" or the evaluation of correlation coefficients. We demonstrate this methodology by applying it to a model, and experimental observations, of thermally induced foam decomposition.
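A minimal sketch of the kind of uncertainty-weighted metric the abstract alludes to (the Mahalanobis-style form and all names below are illustrative assumptions, not necessarily the authors' exact metric):

import numpy as np

def validation_metric(y_model, y_exp, cov_model, cov_exp):
    """Discrepancy between predictions and observations, weighted
    by the combined characterized uncertainty.

    y_model, y_exp : (n,) arrays of predictions and measurements
    cov_model      : (n, n) covariance of predictions (parameter uncertainty)
    cov_exp        : (n, n) covariance of measurements (noise)
    """
    r = y_exp - y_model                  # residual vector
    cov = cov_model + cov_exp            # combined uncertainty
    # Solve cov @ x = r rather than forming an explicit inverse.
    return r @ np.linalg.solve(cov, r)

# Toy usage: 3 measurement locations with independent uncertainties.
y_model = np.array([1.00, 2.10, 2.95])
y_exp = np.array([1.05, 2.00, 3.10])
cov_model = np.diag([0.02, 0.02, 0.02])
cov_exp = np.diag([0.01, 0.01, 0.01])
print(validation_metric(y_model, y_exp, cov_model, cov_exp))

Comparing the statistic with chi-square quantiles for n degrees of freedom gives a pass/fail criterion grounded in the characterized uncertainties, rather than a visual "view graph" judgment.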

