The Use of Confidence Intervals in Reporting Orthopaedic Research Findings

2009 · Vol 467 (12) · pp. 3334-3339
Author(s): Patrick Vavken, Klemens M. Heinrich, Christian Koppelhuber, Stefan Rois, Ronald Dorotka

Author(s): Janet L. Peacock, Philip J. Peacock

Contents: Communicating statistics (p. 128); Producing journal articles (p. 130); Research articles: abstracts (p. 132); Research articles: introduction and methods sections (p. 134); Research articles: results section (p. 136); Research articles: discussion section (p. 138); Presenting statistics: managing computer output (p. 140); Presenting statistics: numerical results (p. 142); Presenting statistics: P values and confidence intervals ...


1994 · Vol 33 (02) · pp. 214-219
Author(s): J. Izsák

Abstract: The sampling theory of the usual diversity indices is complex. Distribution-free methods, such as the jackknife method, can easily be used to determine confidence intervals and to test diversity. Jackknife estimates and their variances for a number of different diversity indices are described in this paper, and a simple numerical example is given to demonstrate the method. Discrimination based on confidence intervals is also discussed. A special correlation is assumed between the sensitivity parameter m and the relative width of confidence intervals in the Hurlbert index family. It is shown that the usual estimate of the Hurlbert index coincides with the corresponding jackknife estimate. For demonstration, diagnoses registered in a set of death certificates are used. There is considerable variation in the diversity of diagnoses among diagnostic groups: the diversity is largest in autopsy reports, whereas it is non-significant in GPs' reports and in reports of physicians authorized to issue death certificates. Given that autopsy reports tend to be fairly accurate, these findings seem to confirm the hypothesis that there is a correlation between the reliability and the diversity of diagnoses.
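
As an illustration of the jackknife procedure described above, the following is a minimal Python sketch applied to the Shannon diversity index. The index choice, the category-grouped leave-one-out scheme, and the diagnosis counts are illustrative assumptions, not the paper's data or code.

```python
import math

def shannon(counts):
    """Shannon diversity index H' for a list of category counts."""
    n = sum(counts)
    return -sum((c / n) * math.log(c / n) for c in counts if c > 0)

def jackknife_ci(counts, z=1.96):
    """Jackknife estimate, variance, and an approximate 95% CI for H'."""
    n = sum(counts)
    theta_hat = shannon(counts)
    # Leave-one-out replicates: removing one observation from category i
    # yields the same replicate for all counts[i] observations in it, so
    # each replicate's pseudo-value is repeated counts[i] times.
    pseudo = []
    for i, c in enumerate(counts):
        loo = counts.copy()
        loo[i] -= 1
        theta_i = shannon(loo)
        pseudo.extend([n * theta_hat - (n - 1) * theta_i] * c)
    jack_est = sum(pseudo) / n
    # Jackknife variance of the estimate = sample variance of pseudo-values / n
    var = sum((p - jack_est) ** 2 for p in pseudo) / (n * (n - 1))
    half = z * math.sqrt(var)
    return jack_est, (jack_est - half, jack_est + half)

# Hypothetical diagnosis counts for one diagnostic group:
est, (lo, hi) = jackknife_ci([40, 25, 15, 10, 5, 5])
print(f"jackknife H' = {est:.3f}, 95% CI = ({lo:.3f}, {hi:.3f})")
```

Non-overlapping confidence intervals from two diagnostic groups would then support the kind of discrimination between groups the abstract describes.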


2019 · Vol 181 (3) · pp. E1-E2
Author(s): Olaf M Dekkers

P values should not merely be used to categorize results as significant or non-significant. This practice disregards clinical relevance, conflates non-significance with the absence of an effect, and underestimates the likelihood of false-positive results. Rather than using the P value as a dichotomizing instrument, P values and the confidence intervals around effect estimates can be used to put research findings in context, genuinely taking into account both clinical relevance and uncertainty.
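
A minimal Python sketch of the interpretation advocated here: judge an effect estimate by where its confidence interval falls relative to a clinical relevance threshold, rather than by whether P crosses 0.05. The function, the threshold, and the numbers are hypothetical.

```python
def interpret(estimate, ci_low, ci_high, clinically_relevant):
    """Contextualize an effect estimate instead of dichotomizing on P."""
    if ci_low > clinically_relevant:
        return "CI excludes trivial effects: likely clinically relevant"
    if ci_high < clinically_relevant:
        return "CI lies below the relevance threshold: any effect is too small to matter"
    # A CI spanning the threshold is inconclusive, not evidence of "no effect".
    return "CI spans the threshold: inconclusive, not 'no effect'"

# Example: mean reduction of 4 mmHg (95% CI 1 to 7), relevance threshold 2 mmHg
print(interpret(4.0, 1.0, 7.0, clinically_relevant=2.0))
```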


Author(s): Scott B. Morris, Arash Shokri

To understand and communicate research findings, it is important for researchers to consider two types of information provided by research results: the magnitude of the effect and the degree of uncertainty in the outcome. Statistical significance tests have long served as the mainstream method for statistical inference. However, the widespread misinterpretation and misuse of significance tests has led critics to question their usefulness in evaluating research findings and to raise concerns about the far-reaching effects of this practice on scientific progress. An alternative approach involves reporting and interpreting measures of effect size along with confidence intervals. An effect size is an indicator of the magnitude and direction of a statistical observation. Effect size statistics have been developed to represent a wide range of research questions, including the mean difference between groups, the relative odds of an event, and the degree of correlation among variables. Effect sizes play a key role in evaluating practical significance, conducting power analysis, and conducting meta-analysis. While effect sizes summarize the magnitude of an effect, confidence intervals represent the degree of uncertainty in the result. By presenting a range of plausible alternative values that might have occurred due to sampling error, confidence intervals provide an intuitive indicator of how strongly researchers should rely on the results of a single study.
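
As a concrete example of pairing an effect size with a confidence interval, here is a minimal Python sketch computing Cohen's d for two independent groups with a large-sample approximate CI. The data are hypothetical, and the simple normal approximation is an assumption; an exact interval would use the noncentral t distribution.

```python
import math

def cohens_d_with_ci(x, y, z=1.96):
    """Cohen's d for two independent samples, with an approximate 95% CI."""
    nx, ny = len(x), len(y)
    mx, my = sum(x) / nx, sum(y) / ny
    vx = sum((v - mx) ** 2 for v in x) / (nx - 1)
    vy = sum((v - my) ** 2 for v in y) / (ny - 1)
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((nx - 1) * vx + (ny - 1) * vy) / (nx + ny - 2))
    d = (mx - my) / sp
    # Large-sample standard error of d (Hedges & Olkin approximation)
    se = math.sqrt((nx + ny) / (nx * ny) + d ** 2 / (2 * (nx + ny)))
    return d, (d - z * se, d + z * se)

treated = [5.1, 6.0, 5.8, 6.3, 5.5, 6.1]
control = [4.8, 5.2, 4.9, 5.4, 5.0, 5.1]
d, (lo, hi) = cohens_d_with_ci(treated, control)
print(f"d = {d:.2f}, 95% CI = ({lo:.2f}, {hi:.2f})")
```

Reporting the point estimate together with the interval conveys both pieces of information the abstract calls for: the magnitude of the effect and the uncertainty around it.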


1998 · Vol 21 (2) · pp. 203-204
Author(s): Andrew F. Hayes

Chow illustrates the important role played by significance testing in the evaluation of research findings. Statistics and the goals of research should be treated as interrelated yet distinct parts of the research evaluation process – a message that will benefit all who read Chow's book. The arguments are especially pertinent to the debate over the relative merits of confidence intervals and significance tests.


2020 · Vol 29 (2) · pp. 688-704
Author(s): Katrina Fulcher-Rood, Anny Castilla-Earls, Jeff Higginbotham

Purpose: The current investigation is a follow-up to a previous study examining child language diagnostic decision making in school-based speech-language pathologists (SLPs). The purpose of this study was to examine SLPs' perspectives on the use of evidence-based practice (EBP) in their clinical work. Method: Semistructured phone interviews were conducted with 25 school-based SLPs who had participated in an earlier study by Fulcher-Rood et al. (2018). SLPs were asked about their definition of EBP, the value of research evidence, the contexts in which they apply the scientific literature in clinical practice, and the barriers to implementing EBP. Results: SLPs' definitions of EBP differed from current definitions in that they included only the use of research findings. SLPs tended to discuss EBP as it relates to treatment rather than assessment. Reported barriers to EBP implementation were insufficient time, limited funding, and restrictions imposed by their employment setting. SLPs found it difficult to translate research findings into clinical practice, and they drew on external research evidence when they lacked clinical expertise regarding a specific client or when they needed scientific evidence to support a strategy they used. Conclusions: SLPs appear to use EBP for specific reasons rather than for every clinical decision they make, relying on it for treatment decisions but not for assessment decisions. Educational systems may present additional challenges that need to be considered for EBP implementation. Considerations for implementation science and the research-to-practice gap are discussed.

