Correction: Assessing the Goodness of Fit of Phylogenetic Comparative Methods: A Meta-Analysis and Simulation Study

PLoS ONE ◽  
2014 ◽  
Vol 9 (1) ◽  
Author(s):  
Dwueng-Chwuan Jhwueng


2020 ◽
Vol 17 (170) ◽  
pp. 20200471 ◽  
Author(s):  
Hamish C. Craig ◽  
Dakota Piorkowski ◽  
Shinichi Nakagawa ◽  
Michael M. Kasumovic ◽  
Sean J. Blamires

Spider major ampullate (MA) silk, with its combination of strength and extensibility, outperforms any synthetic equivalent. There is thus much interest in understanding its underlying materiome. While the expression of the different silk proteins (spidroins) appears to be an integral component of silk performance, the nature of the relationship between the spidroins, their constituent amino acids and MA silk mechanics remains ambiguous. To clarify these relationships across spider species, we performed a meta-analysis using phylogenetic comparative methods. These showed that glycine and proline, both of which are indicators of differential spidroin expression, had effects on MA silk mechanics across the phylogeny. We also found serine to correlate with silk mechanics, probably via its presence within the carboxyl- and amino-terminal domains of the spidroins. From our analyses, we concluded that spidroin expression shifts across the phylogeny from predominantly MaSp1 in the MA silks of ancestral spiders to predominantly MaSp2 in the silks of more derived spiders. This trend was accompanied by an enhanced ultimate strain and a decreased Young's modulus in the silks. Our meta-analysis enabled us to distinguish between real and apparent influences on MA silk properties, providing significant insights into spider silk and web coevolution and enhancing our capacity to create spider-silk-like materials.
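
In practice, a phylogenetic comparative regression of this kind amounts to generalised least squares in which the error covariance reflects shared ancestry. The Python sketch below illustrates that idea only: the covariance matrix, trait values and amino-acid proportions are placeholders, and this is not the authors' actual pipeline.

```python
# Minimal phylogenetic GLS sketch (not the authors' exact pipeline):
# regress a silk mechanical property on amino-acid composition while
# accounting for phylogenetic non-independence via a covariance matrix.
# All data below are illustrative placeholders.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n_species = 8

# Hypothetical predictors: proportions of glycine, proline and serine.
X = rng.uniform(0.05, 0.45, size=(n_species, 3))
X = sm.add_constant(X)

# Hypothetical response: ultimate strain of MA silk.
y = 0.1 + 0.5 * X[:, 1] - 0.3 * X[:, 2] + rng.normal(0, 0.05, n_species)

# Placeholder phylogenetic covariance (in practice, derived from shared
# branch lengths of a time-calibrated tree under Brownian motion).
A = rng.uniform(0, 1, size=(n_species, n_species))
V = A @ A.T + n_species * np.eye(n_species)   # symmetric positive definite

# GLS with the phylogenetic covariance as the error structure.
pgls = sm.GLS(y, X, sigma=V).fit()
print(pgls.summary())
```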


Modelling ◽  
2021 ◽  
Vol 2 (1) ◽  
pp. 78-104
Author(s):  
Vasili B. V. Nagarjuna ◽  
R. Vishnu Vardhan ◽  
Christophe Chesneau

Every day, new data must be analysed as well as possible in all areas of applied science, which requires the development of attractive statistical models, that is to say models adapted to the context, easy to use and efficient. In this article, we innovate in this direction by proposing a new statistical model based on the functionalities of the sinusoidal transformation and the power Lomax distribution. We thus introduce a new three-parameter survival distribution called the sine power Lomax distribution. We first present it theoretically and provide some of its significant properties. The practicality, utility and flexibility of the sine power Lomax model are then demonstrated through a comprehensive simulation study and the analysis of nine real datasets, mainly from medicine and engineering. Based on relevant goodness-of-fit criteria, it is shown that the sine power Lomax model provides a better fit than some of the existing Lomax-like distributions.
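
For illustration, the sketch below assumes the usual sine-G construction F(x) = sin(πG(x)/2) applied to a power Lomax baseline with CDF G(x) = 1 − λ^α (λ + x^β)^(−α), and fits the resulting three-parameter density by maximum likelihood. The parameterisation, data and starting values are assumptions for demonstration, not details taken from the paper.

```python
# Sketch of a sine power Lomax density and maximum-likelihood fit,
# assuming the common sine-G construction F(x) = sin(pi/2 * G(x)) with a
# power Lomax baseline G(x) = 1 - lam**a * (lam + x**b)**(-a), x > 0.
# Parameterisation, data and starting values are illustrative assumptions.
import numpy as np
from scipy.optimize import minimize

def power_lomax_cdf(x, a, b, lam):
    return 1.0 - lam**a * (lam + x**b) ** (-a)

def power_lomax_pdf(x, a, b, lam):
    return a * b * lam**a * x ** (b - 1) * (lam + x**b) ** (-(a + 1))

def sine_power_lomax_pdf(x, a, b, lam):
    # chain rule: f(x) = (pi/2) * g(x) * cos(pi/2 * G(x))
    G = power_lomax_cdf(x, a, b, lam)
    return 0.5 * np.pi * power_lomax_pdf(x, a, b, lam) * np.cos(0.5 * np.pi * G)

def neg_log_lik(theta, data):
    a, b, lam = np.exp(theta)          # keep parameters positive
    pdf = sine_power_lomax_pdf(data, a, b, lam)
    return -np.sum(np.log(pdf + 1e-300))

# Illustrative positive-valued data (e.g. failure times).
data = np.array([0.8, 1.2, 1.9, 2.3, 3.1, 4.0, 5.6, 7.2, 9.5, 12.1])
fit = minimize(neg_log_lik, x0=np.log([1.0, 1.0, 1.0]), args=(data,),
               method="Nelder-Mead")
a_hat, b_hat, lam_hat = np.exp(fit.x)
print("MLEs:", a_hat, b_hat, lam_hat, "  max log-likelihood:", -fit.fun)
```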


2021 ◽  
pp. 263208432199622
Author(s):  
Tim Mathes ◽  
Oliver Kuss

Background: Meta-analysis of systematically reviewed studies on interventions is the cornerstone of evidence-based medicine. In the following, we introduce the common-beta beta-binomial (BB) model for meta-analysis with binary outcomes and elucidate its equivalence to panel count data models. Methods: We present a variation of the standard "common-rho" BB model (BBST), namely a "common-beta" BB model. This model has an interesting connection to fixed-effect negative binomial regression models (FE-NegBin) for panel count data. Using this equivalence, it is possible to estimate an extension of the FE-NegBin with an additional multiplicative overdispersion term (RE-NegBin) while preserving a closed-form likelihood. An advantage of the connection to econometric models is that the models can be implemented easily, because "standard" statistical software for panel count data can be used. We illustrate the methods with two real-world example datasets. Furthermore, we show the results of a small-scale simulation study that compares the new models to the BBST; the input parameters of the simulation were informed by meta-analyses that have actually been performed. Results: In both example datasets, the NegBin models, in particular the RE-NegBin, showed a smaller effect and had narrower 95% confidence intervals. In our simulation study, median bias was negligible for all methods, but the upper quartile of median bias suggested that BBST is most affected by positive bias. Regarding coverage probability, BBST and the RE-NegBin model outperformed the FE-NegBin model. Conclusion: For meta-analyses with binary outcomes, the considered common-beta BB models may be valuable extensions to the family of BB models.
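
The closed-form likelihood referred to above is that of the beta-binomial distribution. The sketch below fits a generic beta-binomial log-likelihood to event counts from several study arms; it illustrates the building block only and is not the specific common-beta parameterisation developed in the paper.

```python
# Generic beta-binomial log-likelihood for binary meta-analysis data
# (one row per study arm: events k out of n). This is only a sketch of
# the closed-form beta-binomial likelihood mentioned above, not the
# exact common-beta parameterisation developed in the paper.
import numpy as np
from scipy.special import betaln, gammaln
from scipy.optimize import minimize

def bb_loglik(alpha, beta, k, n):
    # log P(K = k | n, alpha, beta), summed over arms
    log_binom = gammaln(n + 1) - gammaln(k + 1) - gammaln(n - k + 1)
    return np.sum(log_binom + betaln(k + alpha, n - k + beta) - betaln(alpha, beta))

def neg_loglik(theta, k, n):
    alpha, beta = np.exp(theta)        # enforce positivity
    return -bb_loglik(alpha, beta, k, n)

# Illustrative event counts and sample sizes from several study arms.
k = np.array([3, 7, 12, 5, 9])
n = np.array([50, 60, 120, 45, 80])

fit = minimize(neg_loglik, x0=np.log([1.0, 1.0]), args=(k, n), method="Nelder-Mead")
alpha_hat, beta_hat = np.exp(fit.x)
print("alpha:", alpha_hat, "beta:", beta_hat,
      "mean risk:", alpha_hat / (alpha_hat + beta_hat))
```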


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Steve Kanters ◽  
Mohammad Ehsanul Karim ◽  
Kristian Thorlund ◽  
Aslam Anis ◽  
Nick Bansback

Background: The use of individual patient data (IPD) in network meta-analyses (NMA) is rapidly growing. This study aimed to determine, through simulations, the impact of select factors on the validity and precision of NMA estimates when combining IPD and aggregate data (AgD), relative to using AgD only. Methods: Three analysis strategies were compared via simulations: 1) AgD NMA without adjustments (AgD-NMA); 2) AgD NMA with meta-regression (AgD-NMA-MR); and 3) IPD-AgD NMA with meta-regression (IPD-NMA). We compared 108 parameter permutations: number of network nodes (3, 5 or 10); proportion of treatment comparisons informed by IPD (low, medium or high); equally sized trials (2-armed with 200 patients per arm) or larger IPD trials (500 patients per arm); sparse or well-populated networks; and type of effect modification (none, constant across treatment comparisons, or exchangeable). Data were generated over 200 simulations for each combination of parameters, each using linear regression with Normal distributions. To assess model performance and estimate validity, the mean squared error (MSE) and bias of treatment-effect and covariate estimates were collected. Standard errors (SE) and percentiles were used to compare estimate precision. Results: Overall, IPD-NMA performed best in terms of validity and precision. The median MSE was lower with IPD-NMA in 88 of 108 scenarios (results were similar otherwise). On average, the IPD-NMA median MSE was 0.54 times the median obtained with AgD-NMA-MR. Similarly, the SEs of the IPD-NMA treatment-effect estimates were one fifth the size of the AgD-NMA-MR SEs. The magnitude of the gains in validity and precision from IPD-NMA varied across scenarios and was associated with the amount of IPD. Using IPD in small or sparse networks consistently led to improved validity and precision; in large or dense networks, however, IPD tended to have negligible impact if too few IPD were included. Similar results apply to the meta-regression coefficient estimates. Conclusions: Our simulation study suggests that the use of IPD in NMA will considerably improve the validity and precision of estimates of treatment effects and regression coefficients in most IPD-NMA data scenarios. However, IPD may not add meaningful validity and precision to NMAs of large and dense treatment networks when only negligible amounts of IPD are used.
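
The contrast between IPD and AgD meta-regression can be illustrated with a toy simulation in the same spirit as, but far simpler than, the authors' design: patient-level covariates recover an effect-modifier coefficient more precisely than a trial-level regression of aggregate effects on trial-mean covariates. All effect sizes, trial counts and sample sizes below are assumptions.

```python
# Toy simulation (not the authors' design) contrasting patient-level
# (IPD) and aggregate-data (AgD) meta-regression when a covariate x
# modifies the treatment effect. Effect sizes and trial sizes are
# illustrative assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
n_trials, n_per_arm = 6, 200
delta, gamma = 0.5, 0.3            # main effect and effect modification

ipd_g, agd_g = [], []
for _ in range(200):               # simulation replicates
    rows, agg = [], []
    for i in range(n_trials):
        x = rng.normal(rng.uniform(-1, 1), 1, size=2 * n_per_arm)
        t = np.repeat([0, 1], n_per_arm)
        y = 0.2 * i + delta * t + gamma * t * x + rng.normal(0, 1, x.size)
        rows.append(np.column_stack([np.full(x.size, i), t, x, y]))
        d_i = y[t == 1].mean() - y[t == 0].mean()      # trial-level effect
        agg.append([d_i, x.mean()])
    dat = np.vstack(rows)
    trial, t, x, y = dat[:, 0], dat[:, 1], dat[:, 2], dat[:, 3]

    # IPD: pooled regression with trial dummies and a treatment-by-x term.
    dummies = (trial[:, None] == np.arange(n_trials)[None, :]).astype(float)
    X_ipd = np.column_stack([dummies, t, t * x])
    ipd_g.append(sm.OLS(y, X_ipd).fit().params[-1])

    # AgD: meta-regression of trial effects on the trial-mean covariate.
    agg = np.array(agg)
    X_agd = sm.add_constant(agg[:, 1])
    agd_g.append(sm.OLS(agg[:, 0], X_agd).fit().params[-1])

print("true effect-modifier coefficient:", gamma)
print("IPD  mean (sd):", np.mean(ipd_g), np.std(ipd_g))
print("AgD  mean (sd):", np.mean(agd_g), np.std(agd_g))
```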


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
Janharpreet Singh ◽  
Keith R. Abrams ◽  
Sylwia Bujkiewicz

Background: The use of real-world data (RWD) from non-randomised studies (e.g. single-arm studies) is increasingly being explored to overcome issues associated with data from randomised controlled trials (RCTs). We aimed to compare methods for pairwise meta-analysis of RCTs and single-arm studies using aggregate data, via a simulation study and application to an illustrative example. Methods: We considered contrast-based methods proposed by Begg & Pilote (1991) and arm-based methods by Zhang et al. (2019). We performed a simulation study with scenarios varying (i) the proportion of RCTs and single-arm studies in the synthesis, (ii) the magnitude of bias, and (iii) between-study heterogeneity. We also applied the methods to data from a published health technology assessment (HTA), including three RCTs and 11 single-arm studies. Results: Our simulation study showed that the hierarchical power and commensurate prior methods by Zhang et al. provided a consistent reduction in uncertainty, whilst maintaining over-coverage and small error in scenarios where there was limited RCT data, bias and differences in between-study heterogeneity between the two sets of data. The contrast-based methods provided a reduction in uncertainty, but performed worse in terms of coverage and error unless there was no marked difference in heterogeneity between the two sets of data. Conclusions: The hierarchical power and commensurate prior methods provide the most robust approach to synthesising aggregate data from RCTs and single-arm studies, balancing the need to account for bias and differences in between-study heterogeneity whilst reducing uncertainty in estimates. This work was restricted to a pairwise meta-analysis using aggregate data.
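
One way to see how single-arm studies can contribute to such a synthesis is an arm-level model with exchangeable study baselines, so that single-arm studies inform the baseline distribution while RCTs also inform the treatment effect. The sketch below illustrates that general idea only; it is not the Begg & Pilote or Zhang et al. model, and all inputs are placeholders.

```python
# Rough sketch of an arm-level synthesis in which study baselines are
# exchangeable, so single-arm studies inform the baseline distribution
# while RCTs also inform the treatment effect. This illustrates the
# general idea only (not the Begg & Pilote or Zhang et al. models),
# and all inputs are placeholders.
import numpy as np
from scipy.optimize import minimize
from scipy.stats import multivariate_normal, norm

# RCTs: (control mean, control SE, treated mean, treated SE)
rcts = np.array([[0.10, 0.05, 0.35, 0.06],
                 [0.15, 0.04, 0.42, 0.05],
                 [0.08, 0.06, 0.30, 0.07]])
# Single-arm studies of the treated arm: (mean, SE)
single_arm = np.array([[0.40, 0.05], [0.33, 0.04], [0.38, 0.06]])

def neg_loglik(theta):
    b, delta, log_tau = theta
    tau2 = np.exp(2 * log_tau)         # between-study baseline variance
    ll = 0.0
    for yc, sc, yt, st in rcts:
        # both arms share the study baseline, inducing covariance tau^2
        mean = [b, b + delta]
        cov = [[tau2 + sc**2, tau2], [tau2, tau2 + st**2]]
        ll += multivariate_normal.logpdf([yc, yt], mean, cov)
    for yt, st in single_arm:
        ll += norm.logpdf(yt, b + delta, np.sqrt(tau2 + st**2))
    return -ll

fit = minimize(neg_loglik, x0=[0.1, 0.2, np.log(0.05)], method="Nelder-Mead")
b_hat, delta_hat, log_tau_hat = fit.x
print("baseline:", b_hat, "treatment effect:", delta_hat,
      "tau:", np.exp(log_tau_hat))
```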


2015 ◽  
Vol 11 (7) ◽  
pp. 20150506 ◽  
Author(s):  
John J. Wiens

The major clades of vertebrates differ dramatically in their current species richness, from 2 to more than 32 000 species each, but the causes of this variation remain poorly understood. For example, a previous study noted that vertebrate clades differ in their diversification rates, but did not explain why they differ. Using a time-calibrated phylogeny and phylogenetic comparative methods, I show that most variation in diversification rates among 12 major vertebrate clades has a simple ecological explanation: predominantly terrestrial clades (i.e. birds, mammals, and lizards and snakes) have higher net diversification rates than predominantly aquatic clades (i.e. amphibians, crocodilians, turtles and all fish clades). These differences in diversification rates are then strongly related to patterns of species richness. Habitat may be more important than other potential explanations for richness patterns in vertebrates (such as climate and metabolic rates) and may also help explain patterns of species richness in many other groups of organisms.
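
As background on how such rates are commonly quantified, a method-of-moments estimate of a clade's net diversification rate from its stem age and extant richness is r = ln[n(1 − ε) + ε]/t (Magallón & Sanderson 2001), where ε is the relative extinction fraction. The sketch below applies this estimator to placeholder values; it is not necessarily the estimator used in the study.

```python
# Method-of-moments estimate of net diversification rate from clade age
# and extant species richness (stem-age estimator of Magallon & Sanderson
# 2001): r = ln(n * (1 - eps) + eps) / t, with relative extinction eps.
# Clade richness and age values below are placeholders, and this is not
# necessarily the exact estimator used in the study.
import math

def net_diversification_rate(n_species, stem_age_myr, eps=0.0):
    return math.log(n_species * (1.0 - eps) + eps) / stem_age_myr

clades = {                  # hypothetical (richness, stem age in Myr)
    "birds":        (10000, 165.0),
    "mammals":      (5500, 190.0),
    "amphibians":   (7000, 300.0),
    "crocodilians": (25, 245.0),
}
for name, (n, t) in clades.items():
    r = net_diversification_rate(n, t, eps=0.5)
    print(f"{name:12s} r = {r:.4f} per Myr")
```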


2020 ◽  
Author(s):  
Frank Weber ◽  
Guido Knapp ◽  
Anne Glass ◽  
Günther Kundt ◽  
Katja Ickstadt

A variety of interval estimators exists for the overall treatment effect in a random-effects meta-analysis. A recent literature review summarising existing methods suggested that in most situations the Hartung-Knapp/Sidik-Jonkman (HKSJ) method is preferable. However, a quantitative comparison of those methods within a common simulation study has been lacking. We therefore conduct such a simulation study for continuous and binary outcomes, focusing on medical applications. Based on the literature review and some new theoretical considerations, a practicable number of interval estimators is selected for this comparison: the classical normal-approximation interval using the DerSimonian-Laird heterogeneity estimator, the HKSJ interval using either the Paule-Mandel or the Sidik-Jonkman heterogeneity estimator, the Skovgaard higher-order profile likelihood interval, a parametric bootstrap interval, and a Bayesian interval using different priors. We evaluate the performance measures (coverage and interval length) at specific points in the parameter space, i.e. not averaging over a prior distribution; in this sense, our study is conducted from a frequentist point of view. We confirm the main finding of the literature review, the general recommendation of the HKSJ method (here with the Sidik-Jonkman heterogeneity estimator). For meta-analyses including only two studies, the great width of the HKSJ interval limits its practical use; in this case, a Bayesian interval using a weakly informative prior for the heterogeneity may help. Our recommendations are illustrated using a real-world meta-analysis dealing with the efficacy of intramyocardial bone marrow stem cell transplantation during coronary artery bypass grafting.
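
To make the comparison concrete, the sketch below computes two of the intervals studied: the classical normal-approximation interval based on the DerSimonian-Laird heterogeneity estimator and the HKSJ interval. For brevity the HKSJ interval here reuses the DL τ², whereas the study pairs HKSJ with the Paule-Mandel or Sidik-Jonkman estimators; the effect sizes are placeholders.

```python
# Sketch of two interval estimators for the overall effect in a
# random-effects meta-analysis: the classical normal-approximation
# interval with the DerSimonian-Laird (DL) tau^2, and the
# Hartung-Knapp/Sidik-Jonkman (HKSJ) interval. For brevity the HKSJ
# interval is built on the DL tau^2 here, whereas the study pairs it
# with the Paule-Mandel or Sidik-Jonkman estimators. Effect sizes and
# standard errors below are placeholders.
import numpy as np
from scipy.stats import norm, t as t_dist

y = np.array([0.30, 0.10, 0.45, 0.25, 0.05])   # study effect estimates
se = np.array([0.12, 0.15, 0.20, 0.10, 0.18])  # their standard errors
k = len(y)

# DerSimonian-Laird heterogeneity estimate
w_fe = 1.0 / se**2
mu_fe = np.sum(w_fe * y) / np.sum(w_fe)
Q = np.sum(w_fe * (y - mu_fe) ** 2)
c = np.sum(w_fe) - np.sum(w_fe**2) / np.sum(w_fe)
tau2 = max(0.0, (Q - (k - 1)) / c)

# Random-effects pooled estimate
w = 1.0 / (se**2 + tau2)
mu = np.sum(w * y) / np.sum(w)

# Classical normal-approximation interval
se_norm = np.sqrt(1.0 / np.sum(w))
z = norm.ppf(0.975)
ci_norm = (mu - z * se_norm, mu + z * se_norm)

# HKSJ interval: rescaled variance and a t-distribution with k-1 df
se_hksj = np.sqrt(np.sum(w * (y - mu) ** 2) / ((k - 1) * np.sum(w)))
q = t_dist.ppf(0.975, k - 1)
ci_hksj = (mu - q * se_hksj, mu + q * se_hksj)

print("pooled effect:", round(mu, 3))
print("normal-approximation CI:", np.round(ci_norm, 3))
print("HKSJ CI:", np.round(ci_hksj, 3))
```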

