Coordinate Based Random Effect Size meta-analysis of neuroimaging studies

2016 ◽  
Author(s):  
CR Tench ◽  
Radu Tanasescu ◽  
WJ Cottam ◽  
CS Constantinescu ◽  
DP Auer

Abstract: Low power in neuroimaging studies can make them difficult to interpret, and coordinate-based meta-analysis (CBMA) may go some way to mitigating this issue. CBMA has been used in many analyses to detect where published functional MRI or voxel-based morphometry studies testing similar hypotheses report significant summary results (coordinates) consistently. Only the reported coordinates and possibly t statistics are analysed, and statistical significance of clusters is determined by coordinate density. Here a method of performing coordinate-based random effect size meta-analysis and meta-regression is introduced. The algorithm (ClusterZ) analyses both the coordinates and the reported t statistic or Z score, standardised by the number of subjects. Statistical significance is determined not by coordinate density, but by random-effects meta-analysis of the reported effects, performed cluster-wise using standard statistical methods and taking account of the censoring inherent in the published summary results. Type 1 error control is achieved using the false cluster discovery rate (FCDR), which is based on the false discovery rate. This controls both the family-wise error rate under the null hypothesis that coordinates are randomly drawn from a standard stereotaxic space and the proportion of significant clusters that are expected under the null. Such control is vital to avoid propagating, and even amplifying, the very issues motivating the meta-analysis in the first place. ClusterZ is demonstrated on both numerically simulated data and on real data from reports of grey matter loss in multiple sclerosis (MS) and syndromes suggestive of MS, and of painful stimulus in healthy controls. The software implementation is available to download and use freely.
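The general idea behind such an analysis — converting each study's reported statistic into a per-study standardised effect and then pooling those effects cluster-wise with a random-effects model — can be sketched as follows. This is a minimal illustration of that general approach, not the ClusterZ algorithm itself: the censoring correction and FCDR thresholding described in the abstract are omitted, the crude variance approximation is our own assumption, and all names and numbers are hypothetical.

```python
import numpy as np

def z_to_effect(z, n):
    """Convert reported Z scores to per-study standardised effects (roughly Z / sqrt(N))."""
    return np.asarray(z, dtype=float) / np.sqrt(np.asarray(n, dtype=float))

def random_effects_pool(effects, variances):
    """DerSimonian-Laird random-effects pooling of per-study effects."""
    w = 1.0 / variances                      # inverse-variance (fixed-effect) weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)   # Cochran's Q
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(effects) - 1)) / c)   # between-study variance estimate
    w_star = 1.0 / (variances + tau2)
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Hypothetical peak Z scores and sample sizes for studies contributing to one cluster
z_scores = np.array([3.1, 4.2, 2.8, 3.6])
n_subjects = np.array([18, 25, 14, 30])
effects = z_to_effect(z_scores, n_subjects)
variances = 1.0 / n_subjects                 # crude per-study variance, for illustration only
print(random_effects_pool(effects, variances))
```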

Author(s):  
Wen-Wen Chang ◽  
Hathaichon Boonhat ◽  
Ro-Ting Lin

The air pollution emitted by petrochemical industrial complexes (PICs) may affect the respiratory health of surrounding residents. Previous meta-analyses have indicated a higher risk of lung cancer mortality and incidence among residents near a PIC. Therefore, in this study, a meta-analysis was conducted to estimate the degree to which PIC exposure increases the risk of developing nonmalignant respiratory symptoms among residents. We followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines to systematically identify, select, and critically appraise relevant research. Finally, we identified 16 study groups reporting 5 types of respiratory symptoms: asthma, bronchitis, cough, rhinitis, and wheezing. We estimated pooled odds ratios (ORs) using random-effects models and investigated the robustness of the pooled estimates in subgroup analyses by location, observation period, and age group. We determined that residential exposure to a PIC was associated with a higher incidence of cough (OR = 1.35), wheezing (OR = 1.28), bronchitis (OR = 1.26), rhinitis (OR = 1.17), and asthma (OR = 1.15), although the associations for rhinitis and asthma did not reach statistical significance. Subgroup analyses suggested that the association remained robust across different groups for cough and bronchitis. We identified high heterogeneity for asthma, rhinitis, and wheezing, which could be due to higher ORs in South America. Our meta-analysis indicates that residential exposure to a PIC is associated with an increased risk of nonmalignant respiratory symptoms.
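As an illustration of the pooling step described above, the sketch below computes a random-effects pooled odds ratio from per-study 2 × 2 counts on the log-OR scale, using the standard DerSimonian-Laird between-study variance. The abstract does not specify the exact estimator the authors used, and the counts here are invented; treat this as a generic sketch rather than a reproduction of their analysis.

```python
import numpy as np
from scipy import stats

def log_or_and_var(a, b, c, d):
    """Log odds ratio and its variance from 2x2 counts (exposed cases a, exposed non-cases b,
    unexposed cases c, unexposed non-cases d), with a 0.5 continuity correction."""
    a, b, c, d = (np.asarray(x, dtype=float) + 0.5 for x in (a, b, c, d))
    return np.log(a * d / (b * c)), 1 / a + 1 / b + 1 / c + 1 / d

def pooled_or(a, b, c, d, alpha=0.05):
    """DerSimonian-Laird random-effects pooled OR with a two-sided (1 - alpha) CI."""
    y, v = log_or_and_var(a, b, c, d)
    w = 1 / v
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)
    c_dl = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c_dl)
    w_star = 1 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    z = stats.norm.ppf(1 - alpha / 2)
    return np.exp(mu), np.exp(mu - z * se), np.exp(mu + z * se)

# Hypothetical counts (residents with/without a symptom, near vs. far from a PIC) for three studies
print(pooled_or(a=[40, 55, 23], b=[160, 245, 77], c=[30, 48, 19], d=[170, 252, 81]))
```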


1990 ◽  
Vol 24 (3) ◽  
pp. 405-415 ◽  
Author(s):  
Nathaniel McConaghy

Meta-analysis replaced statistical significance with effect size in the hope of resolving controversy concerning the evaluation of treatment effects. Statistical significance measured the reliability of the effect of treatment, not its efficacy, and was strongly influenced by the number of subjects investigated. Effect size, as originally assessed, eliminated this influence, but by standardizing the size of the treatment effect it could distort it. Meta-analyses which combine the results of studies that employ different subject types, outcome measures, treatment aims, no-treatment rather than placebo controls, or therapists with varying experience can be misleading. To ensure discussion of these variables, meta-analyses should be used as an aid to, rather than a substitute for, literature review. As meta-analyses can produce contradictory findings, it seems unwise to rely on the conclusions of an individual analysis. Their consistent finding that placebo treatments obtain markedly higher effect sizes than no treatment will, it is hoped, render the use of untreated control groups obsolete.
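The standardisation referred to here is typically Cohen's d: the difference in group means divided by a pooled standard deviation. A minimal sketch (all numbers hypothetical) shows how the same raw treatment effect can yield very different standardised effect sizes, which is the distortion the author warns about.

```python
import math

def cohens_d(mean_t, sd_t, n_t, mean_c, sd_c, n_c):
    """Cohen's d: standardised mean difference between a treated and a control group."""
    pooled_sd = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    return (mean_t - mean_c) / pooled_sd

# Two hypothetical trials with the same raw treatment effect (5 points) but different spreads
print(cohens_d(25, 8, 40, 20, 8, 40))    # d = 0.625
print(cohens_d(25, 16, 40, 20, 16, 40))  # d = 0.3125 -- same raw effect, half the standardised size
```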


2017 ◽  
Vol 4 (2) ◽  
pp. 160254 ◽  
Author(s):  
Estelle Dumas-Mallet ◽  
Katherine S. Button ◽  
Thomas Boraud ◽  
Francois Gonon ◽  
Marcus R. Munafò

Studies with low statistical power increase the likelihood that a statistically significant finding represents a false positive result. We conducted a review of meta-analyses of studies investigating the association of biological, environmental or cognitive parameters with neurological, psychiatric and somatic diseases, excluding treatment studies, in order to estimate the average statistical power across these domains. Taking the effect size indicated by a meta-analysis as the best estimate of the likely true effect size, and assuming a threshold for declaring statistical significance of 5%, we found that approximately 50% of studies have statistical power in the 0–10% or 11–20% range, well below the minimum of 80% that is often considered conventional. Studies with low statistical power appear to be common in the biomedical sciences, at least in the specific subject areas captured by our search strategy. However, we also observe evidence that this depends in part on research methodology, with candidate gene studies showing very low average power and studies using cognitive/behavioural measures showing high average power. This warrants further investigation.
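For context, the kind of power estimate described above can be approximated with a normal approximation: take the meta-analytic effect size as the assumed true standardised effect and compute the probability that a study of a given size clears the significance threshold. The sketch below is our own simplified calculation (two-group comparison, two-sided alpha = 0.05), not the authors' exact pipeline.

```python
from scipy.stats import norm

def power_two_sample(d, n_per_group, alpha=0.05):
    """Approximate power of a two-sample test to detect standardised effect d
    with n subjects per group (normal approximation, two-sided alpha)."""
    ncp = d * (n_per_group / 2) ** 0.5          # approximate non-centrality of the z statistic
    z_crit = norm.ppf(1 - alpha / 2)
    return norm.sf(z_crit - ncp) + norm.cdf(-z_crit - ncp)

# A 'true' effect of d = 0.3 (e.g. taken from a meta-analysis) and 30 subjects per group
print(round(power_two_sample(0.3, 30), 2))   # about 0.21, i.e. well below the conventional 80%
```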


2021 ◽  
Vol 18 (1) ◽  
Author(s):  
Lawrence M. Paul

Abstract Background: The use of meta-analysis to aggregate the results of multiple studies has increased dramatically over the last 40 years. For homogeneous meta-analysis, the Mantel–Haenszel technique has typically been utilized. In such meta-analyses, the effect size across the contributing studies differs only by statistical error. If homogeneity cannot be assumed or established, the most popular technique developed to date is the inverse-variance DerSimonian and Laird (DL) technique (DerSimonian and Laird, in Control Clin Trials 7(3):177–88, 1986). However, both of these techniques are based on large-sample, asymptotic assumptions. At best, they are approximations, especially when the number of cases observed in any cell of the corresponding contingency tables is small. Results: This research develops an exact, non-parametric test for evaluating statistical significance, and a related method for estimating effect size, in the meta-analysis of k 2 × 2 tables for any level of heterogeneity, as an alternative to the asymptotic techniques. Monte Carlo simulations show that even for large values of heterogeneity, the Enhanced Bernoulli Technique (EBT) is far superior to the DL technique at maintaining the pre-specified level of Type I error. A fully tested implementation in the R statistical language is freely available from the author. In addition, a second related exact test for estimating the effect size was developed and is also freely available. Conclusions: This research has developed two exact tests for the meta-analysis of dichotomous, categorical data. The EBT was strongly superior to the DL technique in maintaining a pre-specified level of Type I error even at extremely high levels of heterogeneity; as shown, the DL technique demonstrated many large violations of this level. Given the various biases towards finding statistical significance prevalent in epidemiology today, a strong focus on maintaining a pre-specified level of Type I error would seem critical. In addition, a related exact method for estimating the effect size was developed.
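The EBT itself is not reproduced here, but the kind of Type I error check reported in the abstract can be sketched: simulate k small 2 × 2 tables under the null (the same event rate in both arms), apply a standard DerSimonian-Laird analysis, and record how often the null is rejected at alpha = 0.05. This is our own illustrative simulation, not the author's code; the sample sizes, event rate, and number of simulations are invented.

```python
import numpy as np

rng = np.random.default_rng(0)

def dl_reject(tables):
    """True if a DerSimonian-Laird random-effects analysis of k 2x2 tables rejects H0: OR = 1."""
    a, b, c, d = (tables[:, i] + 0.5 for i in range(4))   # 0.5 continuity correction
    y = np.log(a * d / (b * c))
    v = 1 / a + 1 / b + 1 / c + 1 / d
    w = 1 / v
    fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - fixed) ** 2)
    c_dl = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - (len(y) - 1)) / c_dl)
    w_star = 1 / (v + tau2)
    mu = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1 / np.sum(w_star))
    return abs(mu / se) > 1.959964   # two-sided z test at alpha = 0.05

def type1_error(k=5, n=20, p=0.1, n_sim=5000):
    """Simulate k small studies per meta-analysis under the null (same event rate in both arms)."""
    rejections = 0
    for _ in range(n_sim):
        events_trt = rng.binomial(n, p, size=k)
        events_ctl = rng.binomial(n, p, size=k)
        tables = np.column_stack([events_trt, n - events_trt, events_ctl, n - events_ctl])
        rejections += dl_reject(tables)
    return rejections / n_sim

print(type1_error())   # with small cell counts this can drift away from the nominal 0.05
```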


2021 ◽  
Author(s):  
Lawrence Marc Paul

Abstract Background: The use of meta-analysis to aggregate the results of multiple studies has increased dramatically over the last 40 years. For homogeneous meta-analysis, the Mantel–Haenszel technique has typically been utilized. In such meta-analyses, the effect size across the contributing studies differs only by statistical error. If homogeneity cannot be assumed or established, the most popular technique developed to date is the inverse-variance DerSimonian & Laird (DL) technique [1]. However, both of these techniques are based on large-sample, asymptotic assumptions. At best, they are approximations, especially when the number of cases observed in any cell of the corresponding contingency tables is small. Results: This research develops an exact, non-parametric test for evaluating statistical significance, and a related method for estimating effect size, in the meta-analysis of k 2 × 2 tables for any level of heterogeneity, as an alternative to the asymptotic techniques. Monte Carlo simulations show that even for large values of heterogeneity, the Enhanced Bernoulli Technique (EBT) is far superior to the DL technique at maintaining the pre-specified level of Type I error. A fully tested implementation in the R statistical language is freely available from the author. In addition, a second related exact test for estimating the effect size was developed and is also freely available. Conclusions: This research has developed two exact tests for the meta-analysis of dichotomous, categorical data. The EBT was strongly superior to the DL technique in maintaining a pre-specified level of Type I error even at extremely high levels of heterogeneity; as shown, the DL technique demonstrated many large violations of this level. Given the various biases towards finding statistical significance prevalent in epidemiology today, a strong focus on maintaining a pre-specified level of Type I error would seem critical.


2018 ◽  
Author(s):  
Michel-Pierre Coll

Abstract: Empathy has received considerable attention from the field of cognitive and social neuroscience. A significant portion of these studies used the event-related potential (ERP) technique to study the mechanisms of empathy for pain in others in different conditions and clinical populations. These studies show that specific ERP components measured during the observation of pain in others are modulated by several factors and altered in clinical populations. However, issues in this literature, such as analytical flexibility and a lack of Type I error control, raise doubts regarding the validity and reliability of these conclusions. The current study compiled the results and methodological characteristics of 40 studies using ERP to study empathy for pain in others. The results of the meta-analysis suggest that the centro-parietal P3 and late positive potential components are sensitive to the observation of pain in others, while the early N1 and N2 components are not reliably associated with vicarious pain observation. The review of the methodological characteristics shows that the presence of selective reporting, analytical flexibility and a lack of Type I error control compromises the interpretation of these results. The implications of these results for the study of empathy, and potential solutions to improve future investigations, are discussed.


2022 ◽  
Author(s):  
Bo Wang ◽  
Andy Law ◽  
Tim Regan ◽  
Nicholas Parkinson ◽  
Joby Cole ◽  
...  

A common experimental output in biomedical science is a list of genes implicated in a given biological process or disease. The results of a group of studies answering the same, or similar, questions can be combined by meta-analysis to find a consensus or a more reliable answer. Ranking aggregation methods can be used to combine gene lists from various sources in such meta-analyses. Evaluating a ranking aggregation method on a specific type of dataset before using it is necessary to support the reliability of the result, since the properties of a dataset can influence the performance of an algorithm. Evaluation of aggregation methods is usually based on simulated data, especially for algorithms designed for gene lists, because of the lack of a known truth for real data. However, simulated datasets tend to be too small compared to experimental data and neglect key features, including heterogeneity of quality, relevance, and the inclusion of unranked lists. In this study, a group of existing methods, and their variations, suitable for the meta-analysis of gene lists are compared using simulated and real data. Simulated data were used to explore the performance of the aggregation methods under common scenarios found in real genomics data, with varying heterogeneity of quality, noise levels, and a mix of unranked and ranked data drawn from 20,000 possible entities. In addition to the evaluation with simulated data, a comparison using real genomic data on the SARS-CoV-2 virus, cancer (NSCLC), and bacteria (macrophage apoptosis) was performed. We summarise our evaluation results in terms of a simple flowchart for selecting a ranking aggregation method for genomics data.
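As a concrete point of reference, one of the simplest aggregation schemes is mean rank (a Borda-count variant): genes missing from a list are penalised with a rank just below that list's last entry, and genes are ordered by their average rank across lists. The sketch below is a generic baseline of that kind, not one of the specific methods evaluated in the study, and the gene lists are invented for illustration.

```python
from collections import defaultdict

def mean_rank_aggregate(ranked_lists):
    """Aggregate several ranked gene lists by mean rank (a simple Borda-style baseline).
    Genes absent from a list are assigned rank len(list) + 1 for that list."""
    universe = {gene for lst in ranked_lists for gene in lst}
    totals = defaultdict(float)
    for lst in ranked_lists:
        position = {gene: i + 1 for i, gene in enumerate(lst)}
        missing_rank = len(lst) + 1
        for gene in universe:
            totals[gene] += position.get(gene, missing_rank)
    mean_ranks = {gene: totals[gene] / len(ranked_lists) for gene in universe}
    return sorted(mean_ranks, key=mean_ranks.get)

# Three hypothetical study outputs addressing the same question
lists = [
    ["IFITM3", "ACE2", "TMPRSS2", "DDX58"],
    ["ACE2", "IFITM3", "OAS1"],
    ["TMPRSS2", "ACE2", "IFITM3", "MX1", "OAS1"],
]
print(mean_rank_aggregate(lists))   # ACE2 and IFITM3 rise to the top of the consensus list
```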


2021 ◽  
Author(s):  
In-Soo Shin ◽  
Chai Hong Rim

BACKGROUND The necessity of meta-analyses including observational studies has been discussed in the literature, but a synergistic analysis method combining randomised and observational studies has not been reported. OBJECTIVE This study introduces a logical method for clinical interpretation. METHODS Observational studies differ in validity depending on the degree of the confounders’ influence. Combining interpretations can be challenging, especially if the statistical directions are similar but the magnitudes of the pooled results differ between randomised and observational studies (the grey zone). To overcome such hindrances, we designed the stepwise-hierarchical pooled analysis, a method of analysing distribution trends as well as individual pooled results by dividing the included studies into at least three stages (e.g. all studies, balanced studies, and randomised studies). RESULTS According to the model, the validity of a hypothesis rests mostly on the pooled results of the randomised studies (the highest stage). In addition, ascending patterns, in which effect size and statistical significance increase gradually with stage, strengthen the validity of the hypothesis; in this case, the effect size of the observational studies is lower than the true effect (e.g. because of the uncontrolled effect of negative confounders). Descending patterns, in which effect size and statistical significance decrease gradually with stage, weaken the validity of the hypothesis and suggest that the effect size and statistical significance of the observational studies are larger than the true effect (e.g. because of researchers’ bias). These are described in more detail in the main text as four descriptive patterns. CONCLUSIONS We recommend using the stepwise-hierarchical pooled analysis for meta-analyses involving randomised and observational studies. CLINICALTRIAL NA
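A skeleton of the stepwise idea — pool the same outcome at progressively stricter stages and inspect how the estimate moves — might look like the following. This is our own schematic of the procedure described above, not the authors' implementation: the study data are invented and, for brevity, the pooling uses a plain inverse-variance fixed-effect estimate.

```python
import math

def inverse_variance_pool(effects, ses):
    """Plain inverse-variance pooled effect and standard error (fixed-effect, for brevity)."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    return pooled, math.sqrt(1 / sum(weights))

# Hypothetical studies: (log effect, SE, design) -- 'rct', 'balanced' observational, or 'obs'
studies = [
    (0.45, 0.20, "obs"), (0.40, 0.25, "obs"),
    (0.30, 0.18, "balanced"), (0.28, 0.22, "balanced"),
    (0.15, 0.15, "rct"), (0.12, 0.20, "rct"),
]

stages = {
    "all studies": studies,
    "balanced + randomised": [s for s in studies if s[2] in ("balanced", "rct")],
    "randomised only": [s for s in studies if s[2] == "rct"],
}

for name, subset in stages.items():
    pooled, se = inverse_variance_pool([s[0] for s in subset], [s[1] for s in subset])
    print(f"{name:>22}: effect = {pooled:+.2f}, z = {pooled / se:.2f}")
# Here the effect and its significance shrink toward the randomised stage (a descending pattern),
# which, per the model above, weakens confidence that the observational estimate reflects the true effect.
```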


2021 ◽  
pp. 146531252110272
Author(s):  
Despina Koletsi ◽  
Anna Iliadi ◽  
Theodore Eliades

Objective: To evaluate all available evidence on the prediction of rotational tooth movements with aligners. Data sources: Seven databases of published and unpublished literature were searched up to 4 August 2020 for eligible studies. Data selection: Studies were deemed eligible if they included evaluation of rotational tooth movement with any type of aligner, through the comparison of software-based and actually achieved data after patient treatment. Data extraction and data synthesis: Data extraction was done independently and in duplicate, and risk of bias assessment was performed with the use of the QUADAS-2 tool. Random effects meta-analyses with effect sizes and their 95% confidence intervals (CIs) were performed, and the quality of the evidence was assessed through GRADE. Results: Seven articles were included in the qualitative synthesis, of which three contributed to meta-analyses. Overall results revealed a non-accurate prediction of the outcome for the software-based data, irrespective of the use of attachments or interproximal enamel reduction (IPR). Maxillary canines demonstrated the lowest percentage accuracy for rotational tooth movement (three studies: effect size = 47.9%; 95% CI = 27.2–69.5; P < 0.001), although high levels of heterogeneity were identified (I2 = 86.9%; P < 0.001). In contrast, mandibular incisors presented the highest percentage accuracy for predicted rotational movement (two studies: effect size = 70.7%; 95% CI = 58.9–82.5; P < 0.001; I2 = 0.0%; P = 0.48). Risk of bias was unclear to low overall, while the quality of the evidence ranged from low to moderate. Conclusion: Allowing for all identified caveats, prediction of rotational tooth movements with aligner treatment does not appear accurate, especially for canines. Careful selection of patients and malocclusions for aligner treatment decisions remains challenging.
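For reference, the I2 statistic quoted above summarises how much of the between-study variability exceeds what sampling error alone would produce. A brief sketch of the usual calculation from per-study estimates and standard errors follows; the numbers are hypothetical and are not the study's data.

```python
def i_squared(effects, ses):
    """Cochran's Q and the I^2 heterogeneity statistic from per-study effects and standard errors."""
    weights = [1 / se ** 2 for se in ses]
    pooled = sum(w * e for w, e in zip(weights, effects)) / sum(weights)
    q = sum(w * (e - pooled) ** 2 for w, e in zip(weights, effects))
    df = len(effects) - 1
    i2 = max(0.0, (q - df) / q) * 100 if q > 0 else 0.0
    return q, i2

# Three hypothetical accuracy estimates (%) with their standard errors
print(i_squared([47.9, 31.0, 65.0], [6.0, 5.5, 7.0]))   # Q well above df, so I^2 is high (~86%)
```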

