Coordinate based meta-analysis of networks in neuroimaging studies

2018
Author(s): CR Tench, Radu Tanasescu, CS Constantinescu, DP Auer, WJ Cottam

Abstract
Meta-analysis of published neuroimaging results is commonly performed using coordinate based meta-analysis (CBMA). Most CBMA algorithms detect spatial clustering of the coordinates reported across multiple studies, assuming that results relating to the common hypothesis fall in similar anatomical locations. The null hypothesis is that studies report spatially uncorrelated results, which is simulated by random coordinates. The clusters are assumed to be independent, yet the multiple results reported per study are likely not independent, and in fact represent a network effect. Here the multiple reported effect sizes (reported peak Z scores) are assumed to be multivariate normal, and maximum likelihood is used to estimate the parameters of the covariance matrix. The hypothesis is that the effect sizes are correlated. The covariances of effect size are treated as the edges of a network, while the clusters are its nodes. In this way coordinate based meta-analysis of networks (CBMAN) estimates a network of reported meta-effects, rather than multiple independent effects (clusters).

CBMAN uses only the same data as CBMA, yet produces extra information in the form of the correlations between clusters. Here it is validated on numerically simulated data and demonstrated on real data used previously to demonstrate CBMA. The CBMA and CBMAN clusters are similar, despite the very different hypotheses.
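To make the estimation step concrete, the following is a minimal sketch assuming a complete studies-by-clusters matrix of reported peak Z scores; the published method additionally handles studies that do not report a coordinate in every cluster (censoring), which this sketch ignores. Function and variable names are illustrative, not taken from the paper.

```python
# Simplified illustration: treat per-study peak Z scores across k clusters as
# multivariate normal, estimate the covariance by maximum likelihood, and read
# the off-diagonal correlations as candidate network edges between clusters.
import numpy as np

def cbman_sketch(z, edge_threshold=0.3):
    """z: (n_studies, n_clusters) array of reported peak Z scores."""
    n = z.shape[0]
    mu = z.mean(axis=0)                      # mean effect per cluster (node)
    cov = (z - mu).T @ (z - mu) / n          # ML estimate of the covariance matrix
    sd = np.sqrt(np.diag(cov))
    corr = cov / np.outer(sd, sd)            # correlations = candidate edge weights
    edges = [(i, j, corr[i, j])
             for i in range(len(mu)) for j in range(i + 1, len(mu))
             if abs(corr[i, j]) >= edge_threshold]
    return mu, corr, edges

# Toy example with simulated correlated cluster effects
rng = np.random.default_rng(0)
true_cov = np.array([[1.0, 0.6, 0.1],
                     [0.6, 1.0, 0.2],
                     [0.1, 0.2, 1.0]])
z_sim = rng.multivariate_normal(mean=[3.5, 3.2, 4.0], cov=true_cov, size=40)
mu, corr, edges = cbman_sketch(z_sim)
print(np.round(corr, 2))
print(edges)
```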

BMJ Open, 2020, Vol 10 (5), pp. e034846
Author(s): Rutger MJ de Zoete, James H McAuley, Nigel R Armfield, Michele Sterling

Introduction: Neck pain is a burdensome global problem, and a large proportion of cases become chronic. Although physical exercise is a commonly prescribed treatment, the evidence on the effectiveness of isolated exercise interventions remains limited. Traditional pairwise randomised controlled trials (RCTs) and meta-analyses can compare only two interventions at a time. This protocol describes the design of a network meta-analysis, which enables a comparative investigation of all physical exercise interventions for which RCTs are available. We aim to systematically compare the effectiveness of different types of physical exercise in people with chronic non-specific neck pain.
Methods and analysis: Nine electronic databases (AMED, CINAHL, Cochrane Central Register of Controlled Trials, Embase, MEDLINE, Physiotherapy Evidence Database, PsycINFO, Scopus and SPORTDiscus) were searched for RCTs from inception to 12 March 2019. Titles and abstracts, and subsequently full-text papers, will be screened by two reviewers. Data will be extracted by two reviewers. The primary outcome measure is effectiveness of the intervention. The methodological quality of included studies will be assessed by two reviewers using the PEDro scale. The overall quality of evidence will be assessed with the Grading of Recommendations Assessment, Development and Evaluation (GRADE) framework, adapted for network meta-analyses. The available evidence will be summarised using a network diagram. A contribution matrix will be presented to allow assessment of direct and indirect evidence. Forest plots will be constructed to visualise the effects of all included exercise interventions. Pairwise effect sizes will be calculated by including all evidence available in the network. Effect measures for treatments that have not been compared in a pairwise RCT can be compared indirectly by contrasting the effect sizes of comparisons with a common comparator.
Ethics and dissemination: This work synthesises evidence from previously published studies and does not require ethics review or approval. A manuscript describing the findings will be submitted for publication in a peer-reviewed scientific journal.
PROSPERO registration number: CRD42019126523.
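As a concrete illustration of the indirect-comparison step described above, the following hedged sketch contrasts two direct effects that share a common comparator; the numbers and function names are hypothetical and not taken from the protocol.

```python
# If treatments A and B were never compared head-to-head but both were compared
# with a common comparator C, the A-vs-B effect can be estimated as the
# difference of the two direct effects, with variances adding (consistency
# assumption).
import math

def indirect_effect(d_ac, se_ac, d_bc, se_bc):
    """Indirect A-vs-B effect via common comparator C."""
    d_ab = d_ac - d_bc                        # d_AB = d_AC - d_BC
    se_ab = math.sqrt(se_ac**2 + se_bc**2)    # variances of independent estimates add
    return d_ab, se_ab

# e.g. exercise A vs comparator: d = -0.50 (SE 0.15); exercise B vs comparator: d = -0.30 (SE 0.20)
d, se = indirect_effect(-0.50, 0.15, -0.30, 0.20)
print(f"indirect effect = {d:.2f}, SE = {se:.2f}")   # -0.20, 0.25
```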


2019
Author(s): Mike W.-L. Cheung

Conventional meta-analytic procedures assume that effect sizes are independent. When effect sizes are non-independent, conclusions based on these conventional models can be misleading or even wrong. Traditional approaches, such as averaging the effect sizes and selecting one effect size per study, are usually used to remove the dependence of the effect sizes. These ad-hoc approaches, however, may lead to missed opportunities to utilize all available data to address the relevant research questions. Both multivariate meta-analysis and three-level meta-analysis have been proposed to handle non-independent effect sizes. This paper gives a brief introduction to these new techniques for applied researchers. The first objective is to highlight the benefits of using these methods to address non-independent effect sizes. The second objective is to illustrate how to apply these techniques with real data in R and Mplus. Researchers may modify the sample R and Mplus code to fit their data.
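As a rough illustration of the three-level approach (not the author's R or Mplus code), the sketch below fits a model in which effect sizes nested within the same study share a study-level random effect, so their dependence is modelled rather than ignored; the variance components are estimated by maximum likelihood and all names and numbers are illustrative.

```python
# Minimal three-level random-effects meta-analysis sketch: effects from the same
# study share a between-study variance tau^2; omega^2 is the within-study
# variance; v holds the known sampling variances.
import numpy as np
from scipy.optimize import minimize

def fit_three_level(y, v, study):
    """y: effect sizes; v: known sampling variances; study: study labels."""
    y, v, study = np.asarray(y, float), np.asarray(v, float), np.asarray(study)
    same = (study[:, None] == study[None, :]).astype(float)   # 1 if same study

    def neg_loglik(params):
        tau2, omega2 = np.exp(params)                   # keep variances positive
        V = tau2 * same + np.diag(omega2 + v)           # marginal covariance of effects
        mu = np.linalg.solve(V, y).sum() / np.linalg.solve(V, np.ones_like(y)).sum()
        r = y - mu                                      # profile out the overall mean
        _, logdet = np.linalg.slogdet(V)
        return 0.5 * (logdet + r @ np.linalg.solve(V, r))

    res = minimize(neg_loglik, x0=np.log([0.05, 0.05]), method="Nelder-Mead")
    tau2, omega2 = np.exp(res.x)
    V = tau2 * same + np.diag(omega2 + v)
    mu = np.linalg.solve(V, y).sum() / np.linalg.solve(V, np.ones_like(y)).sum()
    return mu, tau2, omega2

# Toy data: five effect sizes from three studies, two studies contributing two each
y = [0.30, 0.25, 0.10, 0.45, 0.38]
v = [0.02, 0.03, 0.02, 0.04, 0.03]
study = ["s1", "s1", "s2", "s3", "s3"]
print(fit_three_level(y, v, study))
```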


Author(s): Miguel-Angel Negrín-Hernández, María Martel-Escobar, Francisco-José Vázquez-Polo

In meta-analysis, the structure of the between-sample heterogeneity plays a crucial role in estimating the meta-parameter. A Bayesian meta-analysis for binary data has recently been proposed that measures this heterogeneity by clustering the samples and then determining the posterior probability of the cluster models through model selection. The meta-parameter is then estimated using Bayesian model averaging techniques. Although an objective Bayesian meta-analysis is proposed for each type of heterogeneity, this paper concentrates on the priors over the models. We consider four alternative priors, motivated by reasonable but different assumptions. A frequentist validation with simulated data was carried out to analyze the properties of each prior distribution for different numbers of studies and sample sizes. The results show the importance of choosing an adequate model prior, as the posterior probabilities of the models are very sensitive to it. The hierarchical Poisson prior and the hierarchical uniform prior perform well when the true model is homogeneity, or when the sample sizes are large enough. The uniform prior, however, can detect the true model when it is an intermediate one (neither homogeneity nor full heterogeneity), even for small sample sizes and few studies. An illustrative example with real data is also given, showing the sensitivity of the estimation of the meta-parameter to the model prior.
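The model-averaging step can be sketched generically as follows; this is an assumption-laden outline, not the authors' implementation. Given marginal likelihoods for the candidate cluster models and a prior over those models, posterior model probabilities are formed and used to average the per-model estimates of the meta-parameter; the different priors discussed above enter only through the model_prior argument, and all numbers are hypothetical.

```python
# Bayesian model averaging over candidate cluster models.
import numpy as np

def bma_estimate(log_marglik, model_prior, theta_by_model):
    log_post = np.log(model_prior) + np.asarray(log_marglik, float)
    log_post -= log_post.max()                    # stabilise before exponentiating
    post = np.exp(log_post)
    post /= post.sum()                            # posterior model probabilities
    theta_bma = np.dot(post, theta_by_model)      # model-averaged meta-parameter
    return post, theta_bma

# e.g. three candidate models (homogeneity, intermediate, full heterogeneity)
post, theta = bma_estimate(log_marglik=[-10.2, -9.1, -11.5],
                           model_prior=[1/3, 1/3, 1/3],
                           theta_by_model=[0.42, 0.38, 0.35])
print(np.round(post, 3), round(theta, 3))
```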


2020
Author(s): CR Tench, R Tanasescu, CS Constantinescu, DP Auer, WJ Cottam

Abstract
Meta-analysis of published neuroimaging studies testing a common hypothesis is most often performed using coordinate based meta-analysis (CBMA). The locations of spatial clusters of reported coordinates are considered relevant to the hypothesis because multiple studies have reported effects in the same anatomical vicinity. Many algorithms have been implemented, and a common feature is the use of empirical assumptions that may not be generalisable. Some algorithms require numerical randomisation of coordinates, uniformly in an image space, to define a statistical threshold, but there is no consensus about how to define the space. Most algorithms also require a smoothing kernel to extrapolate the reported foci to voxel-wise results, but again there is no consensus. Some algorithms utilise the reported statistical effect sizes (Z scores, t statistics, p-values) and require assumptions about their distribution. Beyond these issues, thresholding of results, which is necessitated by the potential for false positive results in neuroimaging studies, is performed using a multitude of methods. Whatever the results of these algorithms, interpretation is always conditional on the validity of the assumptions employed. Coordinate density analysis (CDA), detailed here, is a new method that aims to perform the analysis with minimal, or easy to interpret, assumptions.

CDA uses only the same data as other CBMA algorithms, but uses a model-based assessment of coordinate statistical significance that requires only a characteristic volume, for example the human grey matter (GM) volume, and does not require any randomisation. There is also no requirement for an empirical smoothing kernel parameter. Here it is validated by numerical simulation and demonstrated on real data used previously to demonstrate CBMA.
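The spirit of a density-based null model with a characteristic volume can be sketched as follows, assuming that under the null the N reported coordinates are uniform in a volume V, so the count within radius r of any point is approximately Poisson. This is an illustrative simplification, not the published CDA algorithm, and all names and numbers are hypothetical.

```python
# Under a uniform null in a characteristic volume, the expected number of foci
# within radius r of any point is N * (4/3) * pi * r^3 / V, so an observed local
# count can be assigned a p-value without numerical randomisation.
import numpy as np
from scipy.stats import poisson

def density_pvalue(coords, point, r, total_volume_mm3, n_total):
    """P(count >= observed) under a uniform null over total_volume_mm3."""
    coords = np.asarray(coords, float)
    observed = int(np.sum(np.linalg.norm(coords - point, axis=1) <= r))
    rate = n_total * (4.0 / 3.0) * np.pi * r**3 / total_volume_mm3
    return observed, poisson.sf(observed - 1, rate)   # survival fn gives P(X >= observed)

# Toy example: random foci uniform in a cube, so the p-value should be unremarkable
rng = np.random.default_rng(1)
coords = rng.uniform(-80, 80, size=(400, 3))
obs, p = density_pvalue(coords, point=np.zeros(3), r=10.0,
                        total_volume_mm3=160.0**3, n_total=len(coords))
print(obs, round(p, 3))
```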


2021
Author(s): Hilde Elisabeth Maria Augusteijn, Robbie Cornelis Maria van Aert, Marcel A. L. M. van Assen

Publication bias remains a great challenge when conducting a meta-analysis. It may result in overestimated effect sizes, an increased frequency of false positives, and over- or underestimation of the effect size heterogeneity parameter. A new method is introduced, Bayesian Meta-Analytic Snapshot (BMAS), which evaluates both the effect size and its heterogeneity and corrects for potential publication bias. It evaluates the probability of the true effect size being zero, small, medium or large, and the probability of the true heterogeneity being zero, small, medium or large. This approach, which provides an intuitive evaluation of the uncertainty in the assessment of effect size and heterogeneity, is illustrated with a real-data example, a simulation study, and a Shiny web application of BMAS.
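A rough sketch of the "snapshot" idea, under simplifying assumptions that are not the BMAS implementation: four point hypotheses for the true effect (zero, small = 0.1, medium = 0.3, large = 0.5) are compared by their likelihood given a meta-analytic estimate and its standard error, with equal prior weights. BMAS additionally evaluates heterogeneity and corrects for publication bias, which this sketch omits.

```python
# Posterior probabilities for four point hypotheses about the true effect size,
# given only an estimate and its standard error (equal priors assumed).
import numpy as np
from scipy.stats import norm

def snapshot_probabilities(estimate, se, hypotheses=(0.0, 0.1, 0.3, 0.5)):
    lik = np.array([norm.pdf(estimate, loc=h, scale=se) for h in hypotheses])
    post = lik / lik.sum()                     # equal prior probability per hypothesis
    return dict(zip(["zero", "small", "medium", "large"], post))

print(snapshot_probabilities(estimate=0.24, se=0.08))
```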


2016
Author(s): CR Tench, Radu Tanasescu, WJ Cottam, CS Constantinescu, DP Auer

Abstract
Low power in neuroimaging studies can make them difficult to interpret, and coordinate based meta-analysis (CBMA) may go some way to mitigating this issue. CBMA has been used in many analyses to detect where published functional MRI or voxel-based morphometry studies testing similar hypotheses consistently report significant summary results (coordinates). Only the reported coordinates, and possibly t statistics, are analysed, and the statistical significance of clusters is determined by coordinate density.

Here a method of performing coordinate based random effect size meta-analysis and meta-regression is introduced. The algorithm (ClusterZ) analyses both the coordinates and the reported t statistic or Z score, standardised by the number of subjects. Statistical significance is determined not by coordinate density, but by random effects meta-analyses of the reported effects performed cluster-wise, using standard statistical methods and taking account of the censoring inherent in the published summary results. Type 1 error control is achieved using the false cluster discovery rate (FCDR), which is based on the false discovery rate. This controls both the family-wise error rate under the null hypothesis that coordinates are randomly drawn from a standard stereotaxic space, and the proportion of significant clusters expected under the null. Such control is vital to avoid propagating, and even amplifying, the very issues motivating the meta-analysis in the first place. ClusterZ is demonstrated both on numerically simulated data and on real data from reports of grey matter loss in multiple sclerosis (MS) and syndromes suggestive of MS, and of painful stimulus in healthy controls. The software implementation is available to download and use freely.
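The FCDR thresholding builds on the standard false discovery rate; for orientation, a minimal Benjamini-Hochberg step over cluster-wise p-values is sketched below. In ClusterZ the p-values come from censoring-aware random effects meta-analyses per cluster, which this sketch does not reproduce.

```python
# Standard Benjamini-Hochberg step: declare significant the k smallest p-values,
# where k is the largest index with p_(k) <= q * k / m.
import numpy as np

def bh_significant(p_values, q=0.05):
    """Return a boolean mask of clusters declared significant at FDR level q."""
    p = np.asarray(p_values, float)
    order = np.argsort(p)
    m = len(p)
    thresholds = q * np.arange(1, m + 1) / m
    passed = p[order] <= thresholds
    k = np.max(np.where(passed)[0]) + 1 if passed.any() else 0
    mask = np.zeros(m, dtype=bool)
    mask[order[:k]] = True
    return mask

print(bh_significant([0.001, 0.012, 0.03, 0.2, 0.6], q=0.05))
```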


2022
Author(s): Bo Wang, Andy Law, Tim Regan, Nicholas Parkinson, Joby Cole, ...

A common experimental output in biomedical science is a list of genes implicated in a given biological process or disease. The results of a group of studies answering the same, or similar, questions can be combined by meta-analysis to find a consensus or a more reliable answer. Ranking aggregation methods can be used to combine gene lists from various sources in such meta-analyses. Evaluating a ranking aggregation method on the type of dataset it will be applied to is necessary to support the reliability of the result, since the properties of a dataset can influence the performance of an algorithm. Evaluation of aggregation methods is usually based on simulated data, especially for algorithms designed for gene lists, because real data lack a known truth. However, simulated datasets tend to be too small compared to experimental data and neglect key features, including heterogeneity of quality, relevance, and the inclusion of unranked lists. In this study, a group of existing methods, and variations of them, suitable for meta-analysis of gene lists are compared using simulated and real data. Simulated data were used to explore the performance of the aggregation methods while emulating common scenarios in real genomics data, with varying quality, noise level, and mixes of unranked and ranked lists over 20,000 possible entities. In addition to the evaluation with simulated data, a comparison using real genomic data on the SARS-CoV-2 virus, cancer (NSCLC) and bacteria (macrophage apoptosis) was performed. We summarise our evaluation results as a simple flowchart for selecting a ranking aggregation method for genomics data.
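To make the aggregation setting concrete, the following is a minimal Borda-style mean-rank baseline, included only as an illustration rather than one of the methods evaluated in the study; the gene symbols and the mid-rank convention for genes missing from a list are assumptions of the sketch.

```python
# Mean-rank (Borda-style) aggregation of several ranked gene lists into one
# consensus ranking; genes absent from a list get the midpoint of the leftover ranks.
import numpy as np

def mean_rank_aggregate(gene_lists, universe):
    """gene_lists: list of ranked gene lists (best first); universe: all genes."""
    universe = list(universe)
    ranks = np.zeros((len(gene_lists), len(universe)))
    for i, glist in enumerate(gene_lists):
        pos = {g: r for r, g in enumerate(glist, start=1)}
        missing_rank = (len(glist) + 1 + len(universe)) / 2.0   # midpoint of unassigned ranks
        ranks[i] = [pos.get(g, missing_rank) for g in universe]
    mean_ranks = ranks.mean(axis=0)
    return [g for _, g in sorted(zip(mean_ranks, universe))]

lists = [["TP53", "EGFR", "KRAS"], ["EGFR", "TP53"], ["KRAS", "TP53", "BRCA1"]]
print(mean_rank_aggregate(lists, universe={"TP53", "EGFR", "KRAS", "BRCA1"}))
```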


2019
Author(s): Shinichi Nakagawa, Malgorzata Lagisz, Rose E O'Dea, Joanna Rutkowska, Yefeng Yang, ...

‘Classic’ forest plots show the effect sizes from individual studies and the aggregate effect from a meta-analysis. However, in ecology and evolution meta-analyses routinely contain over 100 effect sizes, making the classic forest plot of limited use. We surveyed 102 meta-analyses in ecology and evolution and found that only 11% used the classic forest plot. Instead, most used a ‘forest-like plot’, showing point estimates (with 95% confidence intervals; CIs) from a series of subgroups or categories in a meta-regression. We propose a modification of the forest-like plot, which we name the ‘orchard plot’. Orchard plots, in addition to showing the overall mean effects and CIs from meta-analyses/regressions, also include 95% prediction intervals (PIs) and the individual effect sizes scaled by their precision. The PI allows the user and reader to see the range in which an effect size from a future study may be expected to fall, and therefore provides an intuitive interpretation of any heterogeneity in the data. Supplementing the PI, the inclusion of the underlying effect sizes also allows the user to see any influential or outlying effect sizes. We showcase the orchard plot with example datasets from ecology and evolution, using the R package orchard, which includes several functions for visualizing meta-analytic data using forest-plot derivatives. We consider the orchard plot a variant of the classic forest plot, cultivated to meet the needs of meta-analysts in ecology and evolution. Hopefully, the orchard plot will prove fruitful for visualizing large collections of heterogeneous effect sizes, regardless of the field of study.
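The 95% prediction interval displayed in an orchard plot can be sketched with the usual random-effects formula, the pooled mean plus or minus a normal quantile times sqrt(tau^2 + SE^2); this is the generic formula rather than the package's internal code, and the numbers below are hypothetical.

```python
# Random-effects prediction interval: where an effect size from a new study is
# expected to fall, given the pooled mean, its standard error, and tau^2.
import math
from scipy.stats import norm

def prediction_interval(mu, se_mu, tau2, level=0.95):
    z = norm.ppf(0.5 + level / 2)
    half = z * math.sqrt(tau2 + se_mu**2)
    return mu - half, mu + half

print(prediction_interval(mu=0.3, se_mu=0.05, tau2=0.04))
```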


2019
Author(s): Amanda Kvarven, Eirik Strømland, Magnus Johannesson

Andrews & Kasy (2019) propose an approach for adjusting effect sizes in meta-analyses for publication bias. We use the Andrews-Kasy estimator to adjust the results of 15 meta-analyses and compare the adjusted results to 15 large-scale multiple-labs replication studies estimating the same effects. The pre-registered replications provide precisely estimated effect sizes that do not suffer from publication bias. The Andrews-Kasy approach leads to a moderate reduction of the inflated effect sizes in the meta-analyses. However, it still overestimates effect sizes by a factor of about two or more and has an estimated false positive rate of between 57% and 100%.

