Towards Degree Distribution of a Duplication-Divergence Graph Model

10.37236/9251 ◽  
2021 ◽  
Vol 28 (1) ◽  
Author(s):  
Krzysztof Turowski ◽  
Wojciech Szpankowski

We present a rigorous and precise analysis of the degree distribution in a dynamic graph model, introduced by Solé, Pastor-Satorras et al., in which nodes are added according to a duplication-divergence mechanism. The model has been discussed in numerous publications, yet rigorous results, especially for the degree distribution, remain very scarce. In this paper we focus on two related problems: the expected value and variance of the degree of a given node over the evolution of the graph, and the expected value and variance of the average degree over all nodes. We present exact and precise asymptotic results showing that both quantities may decrease or increase over time, depending on the model parameters. Our findings are a step towards a better understanding of the behavior of such graphs, including their degree distributions, symmetry, power laws, and structural compression.
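The duplication-divergence growth process analyzed above can be sketched in a few lines. This is a generic member of the model family, not the paper's exact parameterization: the edge-retention probability `p` and the optional parent-link probability `r` below are illustrative parameters.

```python
import random

def duplication_divergence(n, p, r=0.0, seed=0):
    """Grow a graph by duplication-divergence (a sketch, not the
    paper's exact model): each new node duplicates a uniformly random
    existing node, keeps each copied edge independently with
    probability p, and links back to its parent with probability r."""
    rng = random.Random(seed)
    adj = {0: {1}, 1: {0}}          # seed graph: a single edge
    for v in range(2, n):
        parent = rng.randrange(v)   # node to duplicate
        adj[v] = set()
        for u in list(adj[parent]):
            if rng.random() < p:    # divergence: copied edge survives w.p. p
                adj[v].add(u)
                adj[u].add(v)
        if rng.random() < r:        # optional direct link to the parent
            adj[v].add(parent)
            adj[parent].add(v)
    return adj

g = duplication_divergence(200, p=0.4, r=0.1)
avg_deg = sum(len(nb) for nb in g.values()) / len(g)
```

Tracking `avg_deg` as `n` grows, for different `(p, r)`, reproduces the qualitative phenomenon the paper quantifies: the average degree can drift up or down over time depending on the parameters.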

2020 ◽  
Author(s):  
Murad Megjhani ◽  
Kalijah Terilli ◽  
Ayham Alkhachroum ◽  
David J. Roh ◽  
Sachin Agarwal ◽  
...  

Abstract
Objective: To develop a machine learning based tool, using routine vital signs, to assess delayed cerebral ischemia (DCI) risk over time.
Methods: In this retrospective analysis, physiologic data for 540 consecutive acute subarachnoid hemorrhage patients were collected and annotated as part of a prospective observational cohort study between May 2006 and December 2014. Patients were excluded if (i) no physiologic data were available, (ii) they expired prior to the DCI onset window (< post-bleed day 3), or (iii) early angiographic vasospasm was detected on the admitting angiogram. DCI was prospectively labeled by consensus of the treating physicians. Occurrence of DCI was classified using various machine learning approaches, including logistic regression, random forest, support vector machines (linear and kernel), and an ensemble classifier, trained on vital-sign and subject-characteristic features. Hourly risk scores were generated as the posterior probability at time t. We performed five-fold nested cross-validation to tune the model parameters and to report the accuracy. All classifiers were evaluated for discrimination using the area under the receiver operating characteristic curve (AU-ROC) and confusion matrices.
Results: Of the 310 patients included in our final analysis, 101 (32.6%) developed DCI. We achieved a maximal classification performance of 0.81 [0.75-0.82] AU-ROC. We also predicted 74.7% of all DCI events 12 hours before typical clinical detection, with a ratio of 3 true alerts for every 2 false alerts.
Conclusion: A data-driven machine learning based detection tool offered hourly assessments of DCI risk and incorporated new physiologic information over time.
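The core of such a tool is a classifier whose posterior probability at each hour serves as the risk score. A minimal sketch with a logistic model follows; the feature names, coefficients, and bias are entirely hypothetical, not the study's fitted model.

```python
import math

def hourly_dci_risk(vitals, weights, bias):
    """Posterior probability of DCI at one hour from a logistic model.

    `vitals` maps feature names to the current hourly readings;
    `weights`/`bias` stand in for a trained classifier's parameters.
    All names and values here are illustrative."""
    z = bias + sum(w * vitals[name] for name, w in weights.items())
    return 1.0 / (1.0 + math.exp(-z))

# hypothetical coefficients and one hypothetical hourly reading
weights = {"heart_rate": 0.03, "sbp": -0.01, "temperature": 0.5}
risk = hourly_dci_risk(
    {"heart_rate": 95, "sbp": 130, "temperature": 37.8},
    weights, bias=-19.0,
)
```

In the study's setup, such scores would be recomputed every hour as new physiologic data arrive, and an alert raised when the score crosses a chosen threshold.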


Author(s):  
Yinan Zhang ◽  
Yong Liu ◽  
Peng Han ◽  
Chunyan Miao ◽  
Lizhen Cui ◽  
...  

Cross-domain recommendation methods usually transfer knowledge across different domains implicitly, by sharing model parameters or learning parameter mappings in the latent space. Differing from previous studies, this paper focuses on learning an explicit mapping between a user's behaviors (i.e., interaction itemsets) in different domains during the same temporal period. We propose a novel deep cross-domain recommendation model, called Cycle Generation Networks (CGN). Specifically, CGN employs two generators to construct the dual-direction personalized itemset mapping between a user's behaviors in two different domains over time. The generators are learned by optimizing the distance between the generated itemset and the real interacted itemset, as well as a cycle-consistent loss defined on the dual-direction generation procedure. We have performed extensive experiments on real datasets to demonstrate the effectiveness of the proposed model, compared with existing single-domain and cross-domain recommendation methods.
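The two training objectives described above can be sketched on multi-hot itemset vectors. This shows only the loss structure; in CGN the generators `g_ab` and `g_ba` are neural networks, and the distance used here (L1) is an assumption for illustration.

```python
def l1(x, y):
    """Elementwise L1 distance between two itemset vectors."""
    return sum(abs(a - b) for a, b in zip(x, y))

def cgn_losses(x_a, x_b, g_ab, g_ba):
    """Generation loss + cycle-consistency loss for one user.

    x_a, x_b: the user's interaction itemsets in domains A and B,
    encoded as multi-hot vectors; g_ab, g_ba: the two generators.
    A sketch of the objective only, not the CGN architecture."""
    # generated itemset should match the real itemset in the other domain
    gen = l1(g_ab(x_a), x_b) + l1(g_ba(x_b), x_a)
    # mapping across domains and back should recover the original itemset
    cycle = l1(g_ba(g_ab(x_a)), x_a) + l1(g_ab(g_ba(x_b)), x_b)
    return gen, cycle

ident = lambda v: v  # trivial generators, for demonstration
gen_loss, cycle_loss = cgn_losses([1, 0, 1, 0], [0, 1, 1, 0], ident, ident)
```

With identity generators the cycle term vanishes by construction while the generation term still penalizes the mismatch between the two domains, which is why both losses are needed.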


2013 ◽  
Vol 29 (4) ◽  
pp. 435-442 ◽  
Author(s):  
Seamus Kent ◽  
Andrew Briggs ◽  
Simon Eckermann ◽  
Colin Berry

Objectives: The use of value of information methods to inform trial design has been widely advocated, but there have been few empirical applications of these methods, and there is little evidence they are widely used in decision making. This study considers the usefulness of value of information models in the context of a real clinical decision problem relating to alternative diagnostic strategies for patients with a recent non-ST elevated myocardial infarction.
Methods: A pretrial economic model is constructed to consider the cost-effectiveness of two competing strategies: coronary angiography alone, or in conjunction with fractional flow reserve measurement. A closed-form solution for the expected benefits of information is used, with the optimal sample size estimated for a range of models reflecting increasingly realistic assumptions and alternative decision contexts.
Results: Fractional flow reserve measurement is expected to be cost-effective, with an incremental cost-effectiveness ratio of GBP 1,621; however, there is considerable uncertainty in this estimate and consequently a large expected value to reducing this uncertainty via a trial. The recommended sample size is strongly affected by how realistic the assumptions of the expected value of information (EVI) model are, and by the decision context.
Conclusions: Value of information models can provide a simple and flexible approach to clinical trial design and are more consistent with the constraints and objectives of the healthcare system than traditional frequentist approaches. However, the variation in sample size estimates demonstrates that it is essential that appropriate model parameters and decision contexts be used in their application.
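The simplest value-of-information quantity underlying such analyses is the per-patient expected value of perfect information (EVPI), which can be computed by Monte Carlo from a probabilistic sensitivity analysis. The normal incremental-net-benefit distribution below is a placeholder, not the study's economic model.

```python
import random

def evpi(sample_inb, n=50000, seed=1):
    """Per-patient expected value of perfect information by Monte Carlo.

    sample_inb(rng) draws one incremental net benefit (INB) of the new
    strategy from the decision model's parameter uncertainty.
    EVPI = E[max(INB, 0)] - max(E[INB], 0): the expected gain from
    always choosing the truly better strategy, over committing to the
    strategy that is better on average."""
    rng = random.Random(seed)
    draws = [sample_inb(rng) for _ in range(n)]
    mean = sum(draws) / n
    return sum(max(d, 0.0) for d in draws) / n - max(mean, 0.0)

# hypothetical INB: positive on average but highly uncertain
value = evpi(lambda rng: rng.gauss(500.0, 2000.0))
```

When the INB is certain, the EVPI is zero; the more the uncertainty straddles zero, the larger the EVPI, and hence the larger the justified trial.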


2012 ◽  
Vol 94 (2) ◽  
pp. 85-95 ◽  
Author(s):  
JUN XING ◽  
JIAHAN LI ◽  
RUNQING YANG ◽  
XIAOJING ZHOU ◽  
SHIZHONG XU

Summary
Owing to their ability and flexibility to describe individual gene expression at different time points, random regression (RR) analyses have become a popular procedure for the genetic analysis of dynamic traits whose phenotypes are collected over time. Specifically, when modelling the dynamic patterns of gene expression in the RR framework, B-splines have proven successful as an alternative to orthogonal polynomials. In the so-called Bayesian B-spline quantitative trait locus (QTL) mapping, B-splines are used to characterize the patterns of QTL effects and individual-specific time-dependent environmental errors over time, and the Bayesian shrinkage estimation method is employed to estimate the model parameters. Extensive simulations demonstrate that (1) in terms of statistical power, Bayesian B-spline mapping outperforms interval mapping based on maximum likelihood; (2) for a simulated dataset with a complicated growth curve generated by B-splines, Legendre polynomial-based Bayesian mapping cannot identify the designed QTLs accurately, even when higher-order Legendre polynomials are considered; and (3) for a simulated dataset generated using Legendre polynomials, Bayesian B-spline mapping finds the same QTLs as those identified by the Legendre polynomial analysis. All simulation results support the necessity and flexibility of B-splines in the Bayesian mapping of dynamic traits. The proposed method is also applied to a real dataset, in which QTLs controlling the growth trajectory of stem diameters in Populus are located.
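The B-spline basis at the heart of this approach is defined by the Cox-de Boor recursion. A minimal sketch follows; the degree and knot vector are modelling choices (the clamped quadratic knots below are illustrative, not the paper's settings).

```python
def bspline_basis(i, k, t, knots):
    """Value at t of the i-th B-spline basis function of degree k
    (Cox-de Boor recursion). Zero-measure knot spans are skipped, so
    repeated (clamped) knots are handled correctly."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    out = 0.0
    left = knots[i + k] - knots[i]
    if left > 0:
        out += (t - knots[i]) / left * bspline_basis(i, k - 1, t, knots)
    right = knots[i + k + 1] - knots[i + 1]
    if right > 0:
        out += (knots[i + k + 1] - t) / right * bspline_basis(i + 1, k - 1, t, knots)
    return out

# clamped quadratic basis on [0, 1]: reduces to the Bernstein polynomials
knots = [0.0, 0.0, 0.0, 1.0, 1.0, 1.0]
values = [bspline_basis(i, 2, 0.5, knots) for i in range(3)]
```

In the RR framework, a time-varying QTL effect is then modelled as a linear combination of such basis functions, with the combination coefficients estimated by Bayesian shrinkage.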


Proceedings ◽  
2020 ◽  
Vol 46 (1) ◽  
pp. 9 ◽
Author(s):  
Abu Mohamed Alhasan

A graph model is presented to describe multilevel atomic structure. As an example, we take the double Λ configuration in alkali-metal atoms with hyperfine structure and nuclear spin I = 3/2, represented as a graph with four vertices. Links are treated as coherences. We introduce the transition matrix, which in the static graph model reduces to the connectivity matrix. In general, the transition matrix describes the spatiotemporal behavior of the dynamic graph model; furthermore, it describes multiple connections and self-loops of vertices. The atomic excitation is produced by short pulses, so that the hyperfine structure is well resolved. The entropy associated with the proposed dynamic graph model is used to identify transitions, as well as local stabilization in the system, without invoking the energy of the propagated pulses.
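The static side of this picture is just a connectivity matrix on four vertices, whose powers count multi-step connections. The sketch below assumes a bipartite double-Λ linking (each of two lower levels coupled to each of two upper levels); which pairs carry coherence in the actual scheme is the paper's choice, not reproduced here.

```python
def matmul(a, b):
    """Multiply two square matrices given as lists of lists."""
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Connectivity matrix of a four-vertex double-Lambda graph: vertices
# 0, 1 are the two lower hyperfine levels, vertices 2, 3 the two upper
# levels, and links mark coherence-carrying transitions (illustrative).
A = [[0, 0, 1, 1],
     [0, 0, 1, 1],
     [1, 1, 0, 0],
     [1, 1, 0, 0]]

A2 = matmul(A, A)  # (A^2)[i][j] counts two-step connections i -> j
```

In the dynamic graph model the constant entries of `A` are replaced by time- and space-dependent transition amplitudes, and nonzero diagonal entries represent the self-looping of vertices.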


2002 ◽  
Vol 14 (1-2) ◽  
pp. 113-132 ◽  
Author(s):  
Nils Lid Hjort ◽  
Alexander Koning

2015 ◽  
Vol 26 (09) ◽  
pp. 1550107 ◽  
Author(s):  
Zhenxiang Gao ◽  
Yan Shi ◽  
Shanzhi Chen

Mobile social networks exploit human mobility and the consequent device-to-device contacts to opportunistically create data paths over time. Because links in mobile social networks are time-varying and strongly shaped by human mobility, discovering influential nodes is an important issue for efficient information propagation. Although traditional centrality definitions provide metrics for identifying nodes with central positions in static binary networks, they cannot effectively identify the influential nodes for information propagation in mobile social networks. In this paper, we address the problem of discovering influential nodes in mobile social networks. We first adopt a temporal evolution graph model that more accurately captures the topology dynamics of a mobile social network over time. Based on this model, we exploit human social relations and mobility patterns to redefine three common centrality metrics: degree centrality, closeness centrality, and betweenness centrality. We then use empirical traces to evaluate the benefits of the proposed centrality metrics, and discuss the predictability of nodes' global centrality rankings from their local centrality rankings. Results demonstrate the efficiency of the proposed centrality metrics.
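The simplest of the three redefined metrics, temporal degree centrality, can be sketched over a sequence of contact snapshots. This is a plain time-average stand-in for illustration, not the paper's exact redefinition.

```python
def temporal_degree(snapshots, nodes):
    """Average degree centrality over a sequence of contact snapshots.

    snapshots: list of edge sets, one per time window, obtained by
    aggregating device-to-device contacts; a node's temporal degree is
    its mean degree across windows. A minimal sketch, assuming the
    snapshot representation of the temporal evolution graph."""
    deg = {v: 0 for v in nodes}
    for edges in snapshots:
        for u, v in edges:
            deg[u] += 1
            deg[v] += 1
    t = len(snapshots)
    return {v: d / t for v, d in deg.items()}

# three time windows of contacts among four nodes (illustrative)
snapshots = [{(0, 1), (1, 2)}, {(1, 2)}, {(1, 3)}]
centrality = temporal_degree(snapshots, nodes=range(4))
```

A node that is well connected in every window scores higher than one with the same total contacts concentrated in a single window appended to otherwise idle periods, which is exactly the distinction static centrality misses.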


10.3982/qe989 ◽  
2019 ◽  
Vol 10 (3) ◽  
pp. 1019-1068 ◽  
Author(s):  
Sukjin Han ◽  
Adam McCloskey

This paper develops extremum estimation and inference results for nonlinear models with very general forms of potential identification failure when the source of this identification failure is known. We examine models that may have a general deficient-rank Jacobian in certain parts of the parameter space. When identification fails in one of these models, it becomes underidentified and the identification status of individual parameters is not generally straightforward to characterize. We provide a systematic reparameterization procedure that leads to a reparameterized model with straightforward identification status. Using this reparameterization, we determine the asymptotic behavior of standard extremum estimators and Wald statistics under a comprehensive class of parameter sequences characterizing the strength of identification of the model parameters, ranging from nonidentification to strong identification. Using the asymptotic results, we propose hypothesis testing methods that make use of a standard Wald statistic and data-dependent critical values, leading to tests with correct asymptotic size regardless of identification strength and good power properties. Importantly, this allows one to directly conduct uniform inference on low-dimensional functions of the model parameters, including one-dimensional subvectors. The paper illustrates these results in three examples: a sample selection model, a triangular threshold crossing model, and a collective model for household expenditures.
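For orientation, the scalar Wald statistic the paper builds on is elementary. The sketch below shows the standard strongly-identified case; the paper's point is precisely that the fixed chi-square critical value used here becomes invalid under weak identification and must be replaced by data-dependent critical values.

```python
def wald_stat(theta_hat, theta0, var_hat):
    """Scalar Wald statistic W = (theta_hat - theta0)^2 / var_hat for
    H0: theta = theta0. Under strong identification, W is
    asymptotically chi-square with 1 degree of freedom, so the 5%
    critical value is 3.84; under weak identification that fixed
    cutoff no longer controls size."""
    return (theta_hat - theta0) ** 2 / var_hat

# illustrative numbers: estimate 1.2, standard error 0.6
w = wald_stat(1.2, 0.0, 0.36)  # ~4.0, rejects at the 5% level
```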


2021 ◽  
Author(s):  
Oliver Lüdtke ◽  
Alexander Robitzsch ◽  
Esther Ulitzsch

The bivariate Stable Trait, AutoRegressive Trait, and State (STARTS) model provides a general approach for estimating reciprocal effects between constructs over time. However, previous research has shown that this model is difficult to estimate using the maximum likelihood (ML) method (e.g., nonconvergence). In this article, we introduce a Bayesian approach for estimating the bivariate STARTS model and implement it in the software Stan. We discuss issues of model parameterization and show how appropriate prior distributions for model parameters can be selected. Specifically, we propose the four-parameter beta distribution as a flexible prior distribution for the autoregressive and cross-lagged effects. Using a simulation study, we show that the proposed Bayesian approach provides more accurate estimates than ML estimation in challenging data constellations. An example is presented to illustrate how the Bayesian approach can be used to stabilize the parameter estimates of the bivariate STARTS model.
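A four-parameter beta is just a beta distribution linearly rescaled from [0, 1] to an arbitrary interval [lower, upper], which is why it suits bounded effects such as autoregressive and cross-lagged coefficients. A minimal sampling sketch follows; the shape parameters and bounds below are illustrative, not the paper's recommended prior settings.

```python
import random

def four_param_beta(rng, shape1, shape2, lower, upper):
    """Draw from a beta(shape1, shape2) linearly rescaled to
    [lower, upper] -- a four-parameter beta. Supports priors that are
    bounded and, unlike a plain beta, not tied to [0, 1]."""
    return lower + (upper - lower) * rng.betavariate(shape1, shape2)

# hypothetical prior for an autoregressive effect restricted to (-0.5, 1)
rng = random.Random(42)
draws = [four_param_beta(rng, 2.0, 2.0, -0.5, 1.0) for _ in range(1000)]
```

In a Bayesian fit (e.g., in Stan) the same construction appears as a beta prior on the affinely transformed parameter, keeping the sampler inside the stationarity-motivated bounds.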

