Do alternative methods for analysing count data produce similar estimates? Implications for meta-analyses

2015 ◽  
Vol 4 (1) ◽  
Author(s):  
Peter Herbison ◽  
M. Clare Robertson ◽  
Joanne E. McKenzie


2021 ◽  
pp. 263208432199622
Author(s):  
Tim Mathes ◽  
Oliver Kuss

Background: Meta-analysis of systematically reviewed intervention studies is a cornerstone of evidence-based medicine. Here we introduce the common-beta beta-binomial (BB) model for meta-analysis with binary outcomes and elucidate its equivalence to panel count data models. Methods: We present a variation of the standard "common-rho" BB model (BBST), namely a "common-beta" BB model. This model has an interesting connection to fixed-effect negative binomial regression models (FE-NegBin) for panel count data. Using this equivalence, it is possible to estimate an extension of the FE-NegBin with an additional multiplicative overdispersion term (RE-NegBin) while preserving a closed-form likelihood. An advantage of the connection to econometric models is that the models can be implemented easily, because standard statistical software for panel count data can be used. We illustrate the methods with two real-world example datasets and report the results of a small-scale simulation study comparing the new models to the BBST; the simulation's input parameters were informed by meta-analyses that have actually been performed. Results: In both example datasets, the NegBin models, and in particular the RE-NegBin, showed a smaller effect and had narrower 95% confidence intervals. In the simulation study, median bias was negligible for all methods, but the upper quartile of median bias suggested that the BBST is most affected by positive bias. Regarding coverage probability, the BBST and the RE-NegBin model outperformed the FE-NegBin model. Conclusion: For meta-analyses with binary outcomes, the considered common-beta BB models may be valuable extensions to the family of BB models.
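To make the panel-count connection concrete, the following minimal sketch arranges arm-level event counts from an invented meta-analysis as a small panel and fits a negative binomial regression with study indicator variables in Python's statsmodels. The study names, counts, and sample sizes are made up, and the unconditional fixed-effects formulation with dummies is only a rough stand-in for the conditional FE-NegBin and RE-NegBin models described in the abstract, not the authors' exact implementation.

```python
# Illustrative only: arm-level event counts from an invented meta-analysis,
# arranged as a small panel (two rows per study).
import numpy as np
import pandas as pd
import statsmodels.api as sm

arms = pd.DataFrame({
    "study":     ["A", "A", "B", "B", "C", "C"],
    "treatment": [0, 1, 0, 1, 0, 1],            # 0 = control arm, 1 = intervention arm
    "events":    [12, 7, 30, 22, 5, 3],         # number of events in each arm
    "n":         [100, 102, 250, 248, 60, 61],  # arm sample sizes
})

# Study indicator variables give an unconditional fixed-effects formulation;
# the arm size enters as the exposure, so the treatment coefficient is a
# log rate ratio.  This is only a rough stand-in for the conditional
# FE-NegBin / RE-NegBin panel models discussed in the abstract.
X = pd.get_dummies(arms["study"], prefix="study", dtype=float)
X["treatment"] = arms["treatment"].astype(float)

fit = sm.NegativeBinomial(arms["events"], X, exposure=arms["n"]).fit(disp=False)
print(fit.summary())
print("treatment rate ratio:", np.exp(fit.params["treatment"]))
```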


2011 ◽  
Vol 25 (3) ◽  
pp. 191-209 ◽  
Author(s):  
Maria C. Katapodi ◽  
Laurel L. Northouse

The increased demand for evidence-based health care practices calls for comparative effectiveness research (CER), namely the generation and synthesis of research evidence to compare the benefits and harms of alternative methods of care. A significant contribution of CER is the systematic identification and synthesis of available research studies on a specific topic. The purpose of this article is to provide an overview of methodological issues pertaining to systematic reviews and meta-analyses to be used by investigators with the purpose of conducting CER. A systematic review or meta-analysis is guided by a research protocol, which includes (a) the research question, (b) inclusion and exclusion criteria with respect to the target population and studies, (c) guidelines for obtaining relevant studies, (d) methods for data extraction and coding, (e) methods for data synthesis, and (f) guidelines for reporting results and assessing for bias. This article presents an algorithm for generating evidence-based knowledge by systematically identifying, retrieving, and synthesizing large bodies of research studies. Recommendations for evaluating the strength of evidence, interpreting findings, and discussing clinical applicability are offered.
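Purely as an illustrative aid, protocol elements (a) through (f) can be captured as a simple structured record; every field name and example value below is hypothetical and not taken from the article.

```python
# Hypothetical sketch of a review protocol record covering elements (a)-(f).
protocol = {
    "research_question": "Does intervention X reduce outcome Y compared with usual care?",
    "inclusion_exclusion": {
        "population": "adults aged 18+ with condition Z",
        "study_designs": ["randomised controlled trial"],
        "exclusions": ["case reports", "conference abstracts"],
    },
    "search_strategy": ["MEDLINE", "EMBASE", "CINAHL", "hand-searching reference lists"],
    "data_extraction": ["two independent reviewers", "standardised coding form"],
    "synthesis_methods": ["random-effects meta-analysis", "narrative synthesis if pooling is unsuitable"],
    "reporting_and_bias": ["PRISMA flow diagram", "risk-of-bias assessment for each study"],
}

for element, detail in protocol.items():
    print(f"{element}: {detail}")
```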


2020 ◽  
Vol 37 (2) ◽  
pp. 105-113 ◽  
Author(s):  
Jennifer Buckingham

This article is a rejoinder to J.S. Bowers (2020), ‘Reconsidering the evidence that systematic phonics is more effective than alternative methods of reading instruction’, Educational Psychology Review (https://doi.org/10.1007/s10648-019-09515-y). There is strong agreement among reading scientists that learning the phonological connections between speech and print is an essential element of early reading acquisition. Meta-analyses of reading research have consistently found that methods of reading instruction that include systematic phonics instruction are more effective than methods that do not. This article critiques a recent article by Jeffrey S. Bowers that attempts to challenge the robustness of the research on systematic phonics instruction. On this basis, Bowers proposes that teachers and researchers consider using alternative methods. This article finds that even with a revisionist and conservative analysis of the research literature, the strongest available evidence shows systematic phonics instruction to be more effective than any existing alternative. While it is fair to argue that researchers should investigate new practices, it is irresponsible to suggest that classroom teachers use anything other than methods based on the best evidence to date, and that evidence favours systematic phonics.


2020 ◽  
Vol 32 (3) ◽  
pp. 681-705 ◽  
Author(s):  
Jeffrey S. Bowers

There is a widespread consensus in the research community that reading instruction in English should first focus on teaching letter (grapheme) to sound (phoneme) correspondences rather than adopt meaning-based reading approaches such as whole language instruction. That is, initial reading instruction should emphasize systematic phonics. In this systematic review, I show that this conclusion is not justified based on (a) an exhaustive review of 12 meta-analyses that have assessed the efficacy of systematic phonics and (b) a summary of the outcomes of teaching systematic phonics in all state schools in England since 2007. The failure to obtain evidence in support of systematic phonics should not be taken as an argument in support of whole language and related methods; rather, it highlights the need to explore alternative approaches to reading instruction.


2017 ◽  
Author(s):  
Dimitrios - Georgios Kontopoulos ◽  
Bernardo García-Carreras ◽  
Sofía Sal ◽  
Thomas P. Smith ◽  
Samraat Pawar

There is currently unprecedented interest in quantifying variation in thermal physiology among organisms in order to understand and predict the biological impacts of climate change. A key parameter in this quantification of thermal physiology is the performance or value of a trait, across individuals or species, at a common temperature (temperature normalisation). An increasingly popular model for fitting thermal performance curves to data – the Sharpe-Schoolfield equation – can yield strongly inflated estimates of temperature-normalised trait values. These deviations occur whenever a key thermodynamic assumption of the model is violated, i.e. when the enzyme governing the performance of the trait is not fully functional at the chosen reference temperature. Using data on 1,758 thermal performance curves across a wide range of species, we identify the conditions that exacerbate this inflation. We then demonstrate that these biases can compromise tests to detect metabolic cold adaptation, which requires comparison of fitness or trait performance of different species or genotypes at some fixed low temperature. Finally, we suggest alternative methods for obtaining unbiased estimates of temperature-normalised trait values for meta-analyses of thermal performance across species in climate change impact studies.
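The inflation described above is easy to see in the simplified (high-temperature inactivation only) form of the Sharpe-Schoolfield model. In the sketch below, all parameter values are invented for illustration: when the inactivation term is already non-negligible at the reference temperature, the fitted B0 exceeds the performance the curve actually predicts at that temperature.

```python
# Simplified Sharpe-Schoolfield model (high-temperature inactivation only);
# parameter values are invented for illustration.
import numpy as np

K = 8.617e-5  # Boltzmann constant, eV/K

def sharpe_schoolfield(temp_k, b0, e, eh, th, tref_k=293.15):
    """Trait performance at temperature temp_k (kelvin).

    b0 is the trait value at tref_k *assuming no enzyme inactivation there*;
    e and eh are activation and deactivation energies (eV); th is the
    temperature at which half the enzyme is inactivated."""
    rise = b0 * np.exp(-e / K * (1.0 / temp_k - 1.0 / tref_k))
    inactivation = 1.0 + np.exp(eh / K * (1.0 / th - 1.0 / temp_k))
    return rise / inactivation

tref = 293.15
b0, e, eh, th = 1.0, 0.65, 3.0, 295.0  # inactivation already matters near tref

print("fitted B0 (nominal value at Tref):     ", b0)
print("performance the curve predicts at Tref:",
      round(float(sharpe_schoolfield(tref, b0, e, eh, th, tref)), 3))
# The second number is noticeably smaller than B0 -- this gap is the
# inflation of temperature-normalised trait values described above.
```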


Author(s):  
Anas Taha ◽  
Bara Saad ◽  
Bassey Enodien ◽  
Marta Bachmann ◽  
Daniel M. Frey ◽  
...  

SARS-CoV-2 has hampered healthcare systems worldwide, but some countries have found new opportunities and methods to combat it. In this study, we focused on the rapid growth of telemedicine around the world during the pandemic. We conducted a systematic literature review of all the articles published up to the present year, 2021, following the requirements of the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) framework. The data extracted covered eHealth and telemedicine in surgery globally, and separately in Europe, the United States, and Switzerland. Fifty-nine studies were included in this review. None of the included articles found that telemedicine leads to poorer outcomes in patients. Telemedicine has created a new path in the world of healthcare, revolutionizing how healthcare is delivered to patients and developing alternative methods for clinicians.


Author(s):  
Sarah Ann Rhodes ◽  
Sofia Dias ◽  
Jack Wilkinson ◽  
Sarah Cotterill

Many complex healthcare interventions aim to change the behaviour of patients or health professionals, e.g. stopping smoking or prescribing fewer antibiotics. This prompts the question of which behaviour change interventions are most effective. Synthesising evidence on the effectiveness of a particular type of behaviour change intervention can be challenging because of the high levels of heterogeneity in trial design. Here we use data from a published systematic review as a case study and compare alternative methods to address this heterogeneity. One important source of heterogeneity is that compliance with a desired behaviour can be measured and reported in a variety of different ways. In addition, interventions designed to target behaviour can be implemented at either an individual or a group level, leading to trials with varying layers of clustering. To handle heterogeneous outcomes, we can either convert all effect estimates to a common scale (e.g. using standardised mean differences) or conduct separate meta-analyses for different types of outcome measure (binary and continuous measures). To address the clustering structure, adjusted standard errors can be used with the inverse variance method, or weights can be assigned based on a consistent level of clustering, such as the number of healthcare professionals. A graphical method, the albatross plot, uses only reported p-values and can synthesise data with both heterogeneous outcomes and clustering with minimal assumptions and data manipulation. Based on these methods, we reanalysed our data in four different ways and discuss the strengths and weaknesses of each approach.
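As an illustration of two of the steps mentioned above, the sketch below converts a continuous outcome to a standardised mean difference, inflates its variance with the usual design effect for cluster randomisation, and pools a few such estimates by inverse variance. Every number is invented; this is not a reanalysis of the review's data.

```python
# Hedged sketch: (1) standardised mean difference from summary statistics,
# (2) variance inflation by a design effect for clustering, (3) fixed-effect
# inverse-variance pooling.  All numbers are invented for illustration.
import math

def cohens_d(mean_t, mean_c, sd_t, sd_c, n_t, n_c):
    """Standardised mean difference using a pooled standard deviation."""
    sd_pooled = math.sqrt(((n_t - 1) * sd_t**2 + (n_c - 1) * sd_c**2) / (n_t + n_c - 2))
    d = (mean_t - mean_c) / sd_pooled
    var_d = (n_t + n_c) / (n_t * n_c) + d**2 / (2 * (n_t + n_c))
    return d, var_d

def cluster_adjusted_var(var, avg_cluster_size, icc):
    """Inflate the variance by the usual design effect 1 + (m - 1) * ICC."""
    return var * (1 + (avg_cluster_size - 1) * icc)

d, var_d = cohens_d(5.2, 3.1, 4.0, 4.3, 120, 118)
var_adj = cluster_adjusted_var(var_d, avg_cluster_size=12, icc=0.05)

# Fixed-effect inverse-variance pooling of several such (d, variance) pairs;
# the last two pairs are made up to complete the example.
effects = [(d, var_adj), (0.15, 0.02), (0.32, 0.05)]
weights = [1 / v for _, v in effects]
pooled = sum(w * e for (e, _), w in zip(effects, weights)) / sum(weights)
se_pooled = math.sqrt(1 / sum(weights))
print(f"pooled SMD = {pooled:.3f} (SE {se_pooled:.3f})")
```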


2021 ◽  
Vol 1 (2) ◽  
pp. 64-76
Author(s):  
Yuxi Zhao ◽  
Lifeng Lin

Systematic reviews and meta-analyses have been increasingly used to pool research findings from multiple studies in the medical sciences. The reliability of the synthesized evidence depends highly on the methodological quality of the systematic review and meta-analysis. In recent years, several tools have been developed to guide the reporting and evidence appraisal of systematic reviews and meta-analyses, and much statistical effort has been devoted to improving their methodological quality. Nevertheless, many contemporary meta-analyses continue to employ conventional statistical methods, which may be suboptimal compared with several alternative methods available in the evidence synthesis literature. Based on a recent systematic review on COVID-19 in pregnancy, this article provides an overview of selected good practices for performing meta-analyses from a statistical perspective. Specifically, we suggest that meta-analysts (1) provide sufficient information on the included studies, (2) provide the information needed to reproduce the meta-analyses, (3) use appropriate terminology, (4) double-check the presented results, (5) consider alternative estimators of the between-study variance, (6) consider alternative confidence intervals, (7) report prediction intervals, (8) assess small-study effects whenever possible, and (9) consider one-stage methods. We use worked examples to illustrate these good practices, and relevant statistical code is provided. The conventional and alternative methods can produce noticeably different point and interval estimates in some meta-analyses and thus affect their conclusions. In such cases, researchers should interpret results from the conventional methods with great caution and consider using the alternative methods.
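Points (5) through (7) can be illustrated with a few lines of code. The sketch below computes the conventional DerSimonian-Laird between-study variance, a Wald confidence interval, and a prediction interval for a handful of invented log odds ratios; the alternative tau-squared estimators and confidence intervals the article recommends considering would replace the corresponding steps.

```python
# Hedged sketch of a random-effects meta-analysis with the DerSimonian-Laird
# tau^2 estimator, a conventional Wald confidence interval, and a 95%
# prediction interval.  The effect sizes and variances are invented.
import math
from scipy import stats

y = [0.12, -0.30, 0.45, 0.08, 0.25]   # study effect estimates (e.g. log ORs)
v = [0.04, 0.09, 0.05, 0.02, 0.07]    # their within-study variances
k = len(y)

# DerSimonian-Laird between-study variance
w_fe = [1 / vi for vi in v]
y_fe = sum(wi * yi for wi, yi in zip(w_fe, y)) / sum(w_fe)
q = sum(wi * (yi - y_fe) ** 2 for wi, yi in zip(w_fe, y))
c = sum(w_fe) - sum(wi ** 2 for wi in w_fe) / sum(w_fe)
tau2 = max(0.0, (q - (k - 1)) / c)

# Random-effects pooled estimate and conventional Wald confidence interval
w_re = [1 / (vi + tau2) for vi in v]
mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
z = stats.norm.ppf(0.975)
ci = (mu - z * se, mu + z * se)

# 95% prediction interval for the effect in a new study (t with k - 2 df)
t = stats.t.ppf(0.975, k - 2)
pi = (mu - t * math.sqrt(tau2 + se ** 2), mu + t * math.sqrt(tau2 + se ** 2))

print(f"tau^2 = {tau2:.3f}, pooled = {mu:.3f}")
print(f"95% CI = ({ci[0]:.3f}, {ci[1]:.3f}), 95% PI = ({pi[0]:.3f}, {pi[1]:.3f})")
```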


2021 ◽  
Vol 7 (1) ◽  
Author(s):  
William H. Ryan ◽  
Ellen R. K. Evers ◽  
Don A. Moore

When analyzing count data (such as the number of questions answered correctly), psychologists often use Poisson regressions. We show through simulations that violating the assumptions of a Poisson distribution even slightly can more than double the false positive rate, and we illustrate this issue with a study that finds a clearly spurious but highly significant connection between seeing the color blue and eating fish candies. In additional simulations we test alternative methods for analyzing count data and show that these generally do not suffer from the same inflated false positive rate, nor do they produce substantially more false negatives in situations where a Poisson model would be appropriate.
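The sketch below reproduces the flavour of such a simulation: overdispersed counts with no true group difference are analysed with a Poisson regression and with a negative binomial regression (one plausible alternative), and the resulting false positive rates are compared. The distribution and parameter values are invented, not taken from the paper.

```python
# Hedged sketch: false positive rates of Poisson vs negative binomial
# regression when the data are overdispersed and there is no true effect.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n, sims, alpha = 100, 1000, 0.05
fp_pois = fp_nb = 0

for _ in range(sims):
    group = np.repeat([0, 1], n // 2)
    X = sm.add_constant(group.astype(float))
    # Negative binomial draws: same mean (about 5) in both groups, variance > mean
    y = rng.negative_binomial(n=2, p=2 / 7, size=n)
    p_pois = sm.Poisson(y, X).fit(disp=False).pvalues[1]
    p_nb = sm.NegativeBinomial(y, X).fit(disp=False).pvalues[1]
    fp_pois += p_pois < alpha
    fp_nb += p_nb < alpha

print(f"false positive rate, Poisson:           {fp_pois / sims:.3f}")
print(f"false positive rate, negative binomial: {fp_nb / sims:.3f}")
```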

