Comparison of Estimation Methods of Mean and Standard Deviation in Meta-analysis in Case of Reporting Median, Range, and/or Quartiles

2021 ◽  
Vol 13 (2) ◽  
pp. 201-213
Author(s):  
Esra KÖZLEME BEKDEMİR ◽  
Esin AVCI
2020 ◽  
Vol 28 (1) ◽  
pp. 181-195
Author(s):  
Quentin Vanhaelen

Computational approaches have proven to be complementary tools of interest in identifying potential candidates for drug repurposing. However, although the methods developed so far offer interesting opportunities and could contribute to solving issues faced by the pharmaceutical sector, they also come with constraints. Indeed, specific challenges ranging from data access, standardization, and integration to the implementation of reliable and coherent validation methods must be addressed to allow systematic use at a larger scale. In this mini-review, we cover computational tools recently developed for addressing some of these challenges. These include specific databases providing access to large sets of curated data with standardized annotations, web-based tools integrating flexible user interfaces to perform fast computational repurposing experiments, and standardized datasets specifically annotated and balanced for validating new computational drug repurposing methods. Interestingly, these new databases, combined with the growing body of information about the outcomes of drug repurposing studies, can be used to perform meta-analyses to identify key properties associated with successful drug repurposing cases. This information could further be used to design estimation methods that compute an a priori assessment of the repurposing possibilities.


2021 ◽  
Author(s):  
Deepanshu Sharma ◽  
Surya Priya Ulaganathan ◽  
Vinay Sharma ◽  
Sakshi Piplani ◽  
Ravi Ranjan Kumar Niraj

Abstract
Background and objectives: Meta-analysis is a statistical procedure that enables the researcher to integrate the results of various studies conducted for the same purpose. However, more often than not, researchers find themselves unable to proceed further due to the complexity of the mathematics involved and the unavailability of raw data. To alleviate this difficulty, we present a tool that enables researchers to process raw data.
Methods: The GUI tool is written in Python. It offers an automated conversion to mean and standard deviation (SD) from median and interquartile range, utilizing the methods offered by Hozo et al. (2005) and Bland (2015).
Results: The tool was tested on sample data, and the Bland method was validated on the data provided in the Bland method publication (14).
Conclusions: The provided tool is an easy alternative for preparing the input data required for clinical meta-analysis in the required format.
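As a rough sketch of the kind of conversion such a tool automates (the function names are ours and this is not the tool's actual code; Hozo et al. (2005) work from the median and range, and the IQR/1.35 rule shown alongside is a common normal-theory approximation rather than Bland's (2015) method):

```python
import math

def hozo_mean_sd(a, m, b, n):
    """Approximate the sample mean and SD from the minimum (a), median (m),
    maximum (b), and sample size n, following Hozo et al. (2005)."""
    mean = (a + 2 * m + b) / 4.0
    if n <= 15:
        # small samples: Hozo's variance formula
        var = ((a - 2 * m + b) ** 2 / 4.0 + (b - a) ** 2) / 12.0
        sd = math.sqrt(var)
    elif n <= 70:
        sd = (b - a) / 4.0   # rule of thumb for moderate samples
    else:
        sd = (b - a) / 6.0   # rule of thumb for large samples
    return mean, sd

def sd_from_iqr(q1, q3):
    """Normal-theory approximation: SD is roughly IQR / 1.35."""
    return (q3 - q1) / 1.35

# toy example: median 12, range 5-25, n = 40
print(hozo_mean_sd(5, 12, 25, 40))   # -> (13.5, 5.0)
```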


1998 ◽  
Vol 21 (3) ◽  
pp. 338-339
Author(s):  
Douglas Wahlsten ◽  
Katherine M. Bishop

Sex dimorphism occurs when group means differ by four or more standard deviations. However, the average size of the corpus callosum is greater in males by about one standard deviation in rats, 0.2 standard deviation in humans, and virtually zero in mice. Furthermore, variations in corpus callosum size are related to brain size and are not sex specific.


2020 ◽  
Author(s):  
Leah Palapar ◽  
Ngaire Kerse ◽  
Anna Rolleston ◽  
Wendy P J den Elzen ◽  
Jacobijn Gussekloo ◽  
...  

Abstract
Objective: To determine the physical and mental health of very old people (aged 80+) with anaemia.
Methods: Individual-level meta-analysis from five cohorts of octogenarians (n = 2,392): LiLACS NZ Māori, LiLACS NZ non-Māori, Leiden 85-plus Study, Newcastle 85+ Study, and TOOTH. Mixed models of change in functional ability, cognitive function, depressive symptoms, and self-rated health over time were separately fitted for each cohort. We combined individual cohort estimates of differences according to the presence of anaemia at baseline, adjusting for age at entry, sex, and time elapsed. Combined estimates are presented as differences in standard deviation units (i.e. standardised mean differences, SMDs).
Results: The combined prevalence of anaemia was 30.2%. Throughout follow-up, participants with anaemia, on average, had: worse functional ability (SMD -0.42 of a standard deviation across cohorts; CI -0.59, -0.25); worse cognitive scores (SMD -0.27; CI -0.39, -0.15); worse depression scores (SMD -0.20; CI -0.31, -0.08); and lower ratings of their own health (SMD -0.36; CI -0.47, -0.25). Differential rates of change observed were: larger declines in functional ability for those with anaemia (SMD -0.12 over five years; CI -0.21, -0.03) and a smaller mean difference in depression scores over time between those with and without anaemia (SMD 0.18 over five years; CI 0.05, 0.30).
Conclusion: Anaemia in the very old is a common condition associated with worse functional ability, cognitive function, depressive symptoms, and self-rated health, and a more rapid decline in functional ability over time. The question remains as to whether anaemia itself contributes to worse outcomes or is simply a marker of chronic diseases and nutrient deficiencies.
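For readers unfamiliar with the SMD metric reported above, a minimal sketch of how a standardised mean difference is computed from two group summaries (Cohen's d with a pooled SD; the function and the numbers are illustrative only, not the study's cohort-combining code):

```python
import numpy as np

def smd(m1, sd1, n1, m2, sd2, n2):
    """Standardised mean difference (Cohen's d) between two groups."""
    # pooled standard deviation across the two groups
    sp = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    return (m1 - m2) / sp

# hypothetical functional-ability scores: anaemic vs non-anaemic participants
print(smd(22.1, 6.0, 300, 24.6, 5.8, 700))  # negative = worse with anaemia
```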


Author(s):  
Pingping Liao ◽  
Maolin Cai ◽  
Xiangheng Fu

Air leakage is one of the most significant sources of energy waste in compressed air systems, which account for about 10% of total industrial energy consumption. It is estimated that about 10% to 40% of the compressed air in most plants is wasted through leakage. A new ultrasonic leak detection method based on time delay estimation (TDE) is proposed to locate compressed air leaks and prevent energy waste in pneumatic systems. The accuracy of detection is highly dependent on the performance of the TDE method. The performances of six typical TDE methods based on generalized cross correlation (GCC) are compared: the basic cross correlation (BCC), the Roth impulse response, the phase transform (PHAT), the smoothed coherence transform (SCOT), the Wiener processor, and the Hannan-Thomson (HT) processor. The experimental results show that, firstly, the accuracy and precision of time delay estimation increase with the observation interval for all of these methods. Secondly, the success rates of Roth, PHAT, SCOT, and HT are much higher than those of BCC and Wiener, with the HT processor performing best with the highest success rate, closely followed by the PHAT processor. Thirdly, the HT processor, which is a maximum likelihood estimator, gives the minimum standard deviation of the time delay estimate; however, the standard deviations of all these GCC methods are very small. The HT processor outperforms the other GCC methods in terms of success rate and standard deviation. Consequently, it is preferable to apply the HT processor for this particular purpose.
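To make the GCC family concrete, here is a minimal sketch of FFT-based time delay estimation with the basic cross correlation and PHAT weightings (our own illustration, not the authors' implementation; Roth, SCOT, Wiener, and HT differ only in the frequency-domain weighting applied to the cross-power spectrum):

```python
import numpy as np

def gcc_delay(x, y, fs, weighting="phat"):
    """Delay (seconds) by which y lags x, via generalized cross correlation.
    A positive result means y is a delayed copy of x."""
    n = 2 * max(len(x), len(y))             # zero-pad to avoid circular wrap
    X = np.fft.rfft(x, n)
    Y = np.fft.rfft(y, n)
    G = np.conj(X) * Y                      # cross-power spectrum
    if weighting == "phat":
        G = G / (np.abs(G) + 1e-12)         # PHAT: keep phase, drop magnitude
    # weighting == "bcc": leave G unweighted (basic cross correlation)
    cc = np.fft.irfft(G, n)
    lag = int(np.argmax(np.abs(cc)))
    if lag > n // 2:                        # map circular lag to negative delay
        lag -= n
    return lag / fs

# Example: y is x delayed by 25 samples, so the estimate is ~25 / fs seconds
fs = 48_000
rng = np.random.default_rng(0)
x = rng.standard_normal(4096)
y = np.concatenate((np.zeros(25), x[:-25]))
print(gcc_delay(x, y, fs) * fs)             # ~25
```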


2020 ◽  
Author(s):  
James E Pustejovsky ◽  
Elizabeth Tipton

In prevention science and related fields, large meta-analyses are common, and these analyses often involve dependent effect size estimates. Robust variance estimation (RVE) methods provide a way to include all dependent effect sizes in a single meta-regression model, even when the nature of the dependence is unknown. RVE uses a working model of the dependence structure, but the two currently available working models are limited to each describing a single type of dependence. Drawing on flexible tools from multivariate meta-analysis, this paper describes an expanded range of working models, along with accompanying estimation methods, which offer benefits in terms of better capturing the types of data structures that occur in practice and improving the efficiency of meta-regression estimates. We describe how the methods can be implemented using existing software (the ‘metafor’ and ‘clubSandwich’ packages for R) and illustrate the approach in a meta-analysis of randomized trials examining the effects of brief alcohol interventions for adolescents and young adults.
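The packages named above are R libraries; as a language-neutral sketch of the core idea, the following computes a meta-regression under an inverse-variance working model and then a cluster-robust sandwich variance (the simple CR0 variant, not the small-sample-corrected estimators that 'clubSandwich' provides; the function name and toy data are ours):

```python
import numpy as np

def rve_meta_regression(y, X, v, cluster):
    """Meta-regression with inverse-variance working weights and a
    cluster-robust (CR0) sandwich variance for the coefficients."""
    w = 1.0 / v                              # working-model weights
    Xw = X * w[:, None]
    bread = np.linalg.inv(X.T @ Xw)
    beta = bread @ (Xw.T @ y)
    resid = y - X @ beta
    p = X.shape[1]
    meat = np.zeros((p, p))
    for g in np.unique(cluster):             # one score vector per study
        idx = cluster == g
        u = X[idx].T @ (w[idx] * resid[idx])
        meat += np.outer(u, u)
    return beta, bread @ meat @ bread        # robust vcov of beta

# toy use: 6 dependent effect sizes from 3 studies, intercept-only model
y = np.array([0.2, 0.3, 0.1, 0.5, 0.4, 0.25])
v = np.array([0.04, 0.05, 0.04, 0.06, 0.05, 0.04])
X = np.ones((6, 1))
cluster = np.array([1, 1, 2, 2, 3, 3])
beta, V = rve_meta_regression(y, X, v, cluster)
print(beta[0], np.sqrt(V[0, 0]))             # pooled effect and robust SE
```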


2022 ◽  
Vol 14 (1) ◽  
Author(s):  
Xiaoli Ren ◽  
Zhiyun Wang ◽  
Congfang Guo

Abstract
Objectives: Long-term glycemic variability has been related to increased risk of vascular complications in patients with diabetes. However, the association between parameters of long-term glycemic variability and risk of stroke has not been fully determined. We performed a meta-analysis to systematically evaluate this association.
Methods: Medline, Embase, and Web of Science databases were searched for longitudinal follow-up studies comparing the incidence of stroke in diabetic patients with higher or lower long-term glycemic variability. A random-effects model incorporating the potential heterogeneity among the included studies was used to pool the results.
Results: Seven follow-up studies with 725,784 diabetic patients were included, and 98% of them had type 2 diabetes mellitus (T2DM). The mean follow-up duration was 7.7 years. Pooled results showed that, compared to those in the lowest category of glycemic variability, diabetic patients in the highest category had significantly increased risk of stroke, whether glycemic variability was measured by the fasting plasma glucose coefficient of variation (FPG-CV: risk ratio [RR] = 1.24, 95% confidence interval [CI] 1.11 to 1.39, P < 0.001; I² = 53%), the standard deviation of FPG (FPG-SD: RR = 1.16, 95% CI 1.02 to 1.31, P = 0.02; I² = 74%), the HbA1c coefficient of variation (HbA1c-CV: RR = 1.88, 95% CI 1.61 to 2.19, P < 0.001; I² = 0%), or the standard deviation of HbA1c (HbA1c-SD: RR = 1.73, 95% CI 1.49 to 2.00, P < 0.001; I² = 0%).
Conclusions: Long-term glycemic variability is associated with higher risk of stroke in T2DM patients.
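A hedged sketch of the random-effects pooling described in the Methods, assuming DerSimonian-Laird estimation of the between-study variance on the log-RR scale (the abstract does not state which estimator was used, and the toy inputs below are made up, not the study's data):

```python
import numpy as np

def pool_rr_random_effects(rr, lo, hi):
    """DerSimonian-Laird random-effects pooling of risk ratios."""
    y = np.log(rr)
    se = (np.log(hi) - np.log(lo)) / (2 * 1.96)   # SEs recovered from 95% CIs
    w = 1 / se**2
    mu_fixed = np.sum(w * y) / np.sum(w)
    Q = np.sum(w * (y - mu_fixed) ** 2)           # Cochran's Q
    k = len(y)
    c = np.sum(w) - np.sum(w**2) / np.sum(w)
    tau2 = max(0.0, (Q - (k - 1)) / c)            # between-study variance
    w_re = 1 / (se**2 + tau2)
    mu = np.sum(w_re * y) / np.sum(w_re)
    se_mu = np.sqrt(1 / np.sum(w_re))
    i2 = 100 * max(0.0, (Q - (k - 1)) / Q) if Q > 0 else 0.0
    return np.exp(mu), np.exp(mu - 1.96 * se_mu), np.exp(mu + 1.96 * se_mu), i2

# made-up study-level RRs with 95% CIs
rr = np.array([1.30, 1.10, 1.45])
lo = np.array([1.05, 0.90, 1.15])
hi = np.array([1.61, 1.34, 1.83])
print(pool_rr_random_effects(rr, lo, hi))  # pooled RR, 95% CI, I² (%)
```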


Author(s):  
Hope VonBorkenhagen ◽  
Mark L. Lengnick-Hall

A meta-analysis of the effects of 11 types of psychologically based organizational interventions on worker productivity showed that such programs, on average, raised worker productivity by nearly one-half standard deviation. This study reviewed research reported between 1982 and 1996, extending a previous study by Guzzo, Jette, and Katzell (1985), which reviewed research reported between 1971 and 1981. While the same overall effect size for productivity improvement programs was found, the two studies differed on the effectiveness of specific interventions.


2017 ◽  
Vol 10 (3) ◽  
pp. 488-495 ◽  
Author(s):  
Frank L. Schmidt ◽  
Chockalingam Viswesvaran ◽  
Deniz S. Ones ◽  
Huy Le

The lengthy and complex focal article by Tett, Hundley, and Christiansen (2017) is based on a fundamental misunderstanding of the nature of validity generalization (VG): it rests on the assumption that what is generalized in VG is the estimated value of mean rho ($\bar{\rho}$). This erroneous assumption is stated repeatedly throughout the article. A conclusion of validity generalization does not imply that $\bar{\rho}$ is identical across all situations. If VG is present, most, if not all, validities in the validity distribution are positive and useful even if there is some variation in that distribution. What is generalized is the entire distribution of rho ($\rho$), not just the estimated $\bar{\rho}$ or any other specific value of validity included in the distribution. This distribution is described by its mean ($\bar{\rho}$) and standard deviation ($SD_\rho$). A helpful concept based on these parameters (assuming $\rho$ is normally distributed) is the credibility interval, which reflects the range where most of the values of $\rho$ can be found. The lower end of the 80% credibility interval (the 90% credibility value, $CV = \bar{\rho} - 1.28 \times SD_\rho$) is used to facilitate understanding of this distribution by indicating the statistical "worst case" for validity for practitioners using VG. Validity has an estimated 90% chance of lying above this value. This concept has long been recognized in the literature (see Hunter & Hunter, 1984, for an example; see also Schmidt, Law, Hunter, Rothstein, Pearlman, & McDaniel, 1993, and hundreds of VG articles that have appeared over the past 40 years since the invention of psychometric meta-analysis as a means of examining VG [Schmidt & Hunter, 1977]).

The $\bar{\rho}$ is the value in the distribution with the highest likelihood of occurring (although often by only a small amount), but it is the whole distribution that is generalized. Tett et al. (2017) state that some meta-analysis articles claim to generalize only $\bar{\rho}$. If true, this is inappropriate. Because $\bar{\rho}$ has the highest likelihood in the $\rho$ distribution, discussion often focuses on that value as a matter of convenience, but $\bar{\rho}$ is not what is generalized in VG. What is generalized is the conclusion that there is validity throughout the credibility interval. The false assumption that it is $\bar{\rho}$, and not the $\rho$ distribution as a whole, that is generalized in VG is the basis for the Tett et al. article and is its Achilles heel. In this commentary, we examine the target article's basic arguments and point out errors and omissions that led Tett et al. to falsely conclude that VG is a "myth."
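A worked instance of the credibility-value formula quoted above, with illustrative values $\bar{\rho} = .50$ and $SD_\rho = .15$ (not drawn from any particular VG study):

$$CV_{90} = \bar{\rho} - 1.28 \times SD_\rho = .50 - 1.28 \times .15 = .308$$

Under normality, an estimated 90% of operational validities in this hypothetical distribution would exceed .31, so VG would be concluded despite the nonzero $SD_\rho$.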

