Validation of an Evidence-Based Medicine Critically Appraised Topic Presentation Evaluation Tool (EBM C-PET)

2013, Vol 5 (2), pp. 252-256
Author(s): Hans B. Kersten, John G. Frohna, Erin L. Giudice

Abstract Background Competence in evidence-based medicine (EBM) is an important clinical skill. Pediatrics residents are expected to acquire competence in EBM during their education, yet few validated tools exist to assess residents' EBM skills. Objective We sought to develop a reliable tool to evaluate residents' EBM skills in the critical appraisal of a research article, the development of a written EBM critically appraised topic (CAT) synopsis, and a presentation of the findings to colleagues. Methods Instrument development used a modified Delphi technique. We defined the skills to be assessed while reviewing (1) a written CAT synopsis and (2) a resident's EBM presentation. We defined skill levels for each item using the Dreyfus and Dreyfus model of skill development and created behavioral anchors using a frame-of-reference training technique to describe performance for each skill level. We evaluated the assessment instrument's psychometric properties, including internal consistency and interrater reliability. Results The EBM Critically Appraised Topic Presentation Evaluation Tool (EBM C-PET) is composed of 14 items that assess residents' EBM and global presentation skills. Resident presentations (N = 27) and the corresponding written CAT synopses were evaluated using the EBM C-PET. The EBM C-PET had excellent internal consistency (Cronbach α = 0.94). Intraclass correlation coefficients were used to assess interrater reliability; coefficients for individual items ranged from 0.31 to 0.74, and the average intraclass correlation coefficient across the 14 items was 0.67. Conclusions We identified essential components of an assessment tool for an EBM CAT synopsis and presentation with excellent internal consistency and a good level of interrater reliability across 3 different institutions. The EBM C-PET is a reliable tool to document resident competence in higher-level EBM skills.
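Where the statistics are unfamiliar: Cronbach's α, the internal-consistency index reported above, is a direct function of the item and total-score variances. A minimal Python sketch follows, using a simulated 27 x 14 score matrix as a stand-in for the study's ratings (the data and seed are illustrative assumptions):

```python
# Cronbach's alpha from an observations-by-items score matrix.
# The 27 x 14 matrix mirrors the study's dimensions but is simulated.
import numpy as np

def cronbach_alpha(scores: np.ndarray) -> float:
    """scores: rows = presentations, columns = items."""
    k = scores.shape[1]
    item_vars = scores.var(axis=0, ddof=1)       # variance of each item
    total_var = scores.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(0)
ability = rng.normal(size=(27, 1))                 # shared "ability" factor
items = ability + 0.5 * rng.normal(size=(27, 14))  # 14 correlated items
print(f"alpha = {cronbach_alpha(items):.2f}")      # high, by construction
```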

1991, Vol 34 (5), pp. 989-999
Author(s): Stephanie Shaw, Truman E. Coggins

This study examines whether observers reliably categorize selected speech production behaviors in hearing-impaired children. A group of experienced speech-language pathologists was trained to score the elicited imitations of 5 profoundly and 5 severely hearing-impaired subjects using the Phonetic Level Evaluation (Ling, 1976). Interrater reliability was calculated using intraclass correlation coefficients. Overall, the magnitude of the coefficients was found to be considerably below what would be accepted in published behavioral research. Failure to obtain acceptably high levels of reliability suggests that the Phonetic Level Evaluation may not yet be an accurate and objective speech assessment measure for hearing-impaired children.
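As a point of reference for the ICC computations these abstracts rely on, the sketch below estimates intraclass correlations from long-format rating data. The pingouin dependency is an assumption, and the subjects, raters, and scores are invented, not the study's samples:

```python
# ICC estimation from long-format ratings (pingouin assumed installed).
import pandas as pd
import pingouin as pg

df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "rater":   ["A", "B", "C"] * 4,
    "score":   [4, 5, 4, 2, 3, 2, 5, 5, 4, 3, 3, 3],
})

icc = pg.intraclass_corr(data=df, targets="subject",
                         raters="rater", ratings="score")
# ICC2 (two-way random effects, single rater) is a common choice when
# raters are treated as a random sample of possible raters.
print(icc.loc[icc["Type"] == "ICC2", ["Type", "ICC", "CI95%"]])
```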


2018, Vol 25 (3), pp. 286-290
Author(s): Elif Bilgic, Madoka Takao, Pepa Kaneva, Satoshi Endo, Toshitatsu Takao, ...

Background. Needs assessment identified a gap in simulation training for laparoscopic suturing skills. This study collected validity evidence for an advanced laparoscopic suturing task using an Endo Stitch™ device. Methods. Experienced surgeons (ES) and novice surgeons (NS) performed continuous suturing after watching an instructional video. Performance was scored on time and accuracy and with the Global Operative Assessment of Laparoscopic Surgery (GOALS). Data are shown as medians [25th-75th percentiles] (ES vs NS). Interrater reliability was calculated using intraclass correlation coefficients (with confidence intervals). Results. Seventeen participants were enrolled. Experienced surgeons had significantly higher task scores (980 [964-999] vs 666 [391-711], P = .0035) and GOALS scores (25 [24-25] vs 14 [12-17], P = .0029). Interrater reliability coefficients for time and accuracy were 1.0 and 0.9 (0.74-0.96), respectively. All experienced surgeons agreed that the task was relevant to practice. Conclusion. This study provides validity evidence for the task as a measure of laparoscopic suturing skill using an automated suturing device. It could help trainees acquire the skills they need to better prepare for clinical learning.
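The descriptive and comparative statistics reported here (medians with 25th-75th percentiles, compared across two independent groups) can be reproduced with standard tools; the sketch below uses fabricated placeholder scores, not the study's data:

```python
# Medians [25th-75th percentiles] and a nonparametric two-group test.
import numpy as np
from scipy.stats import mannwhitneyu

experienced = np.array([980, 964, 999, 975, 990, 970])  # placeholder scores
novice = np.array([666, 391, 711, 540, 620, 450, 700])  # placeholder scores

def med_iqr(x: np.ndarray) -> str:
    q25, q50, q75 = np.percentile(x, [25, 50, 75])
    return f"{q50:.0f} [{q25:.0f}-{q75:.0f}]"

print("ES:", med_iqr(experienced), " NS:", med_iqr(novice))
stat, p = mannwhitneyu(experienced, novice, alternative="two-sided")
print(f"Mann-Whitney U = {stat:.1f}, p = {p:.4f}")
```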


1997, Vol 17 (4), pp. 280-287
Author(s): Margaret Wallen, Mary-Ann Bonney, Lyn Lennox

The Handwriting Speed Test (HST), a standardized, norm-referenced test, was developed to provide an objective evaluation of the handwriting speed of school students from approximately 8 to 18 years of age. Part of the test development involved an examination of interrater reliability. Two raters scored 165 (13%) of the total 1292 handwriting samples. Using intraclass correlation coefficients, interrater reliability was found to be excellent (ICC = 1.00, P < 0.0001). The process of examining interrater reliability resulted in modifications to the scoring criteria of the test. Excellent interrater reliability provides support for the HST as a valuable clinical and research tool.


2019, Vol 5 (2), pp. 294-323
Author(s): Charles Nagle

Abstract Researchers have increasingly turned to Amazon Mechanical Turk (AMT) to crowdsource speech data, predominantly in English. Although AMT and similar platforms are well positioned to enhance the state of the art in L2 research, it is unclear if crowdsourced L2 speech ratings are reliable, particularly in languages other than English. The present study describes the development and deployment of an AMT task to crowdsource comprehensibility, fluency, and accentedness ratings for L2 Spanish speech samples. Fifty-four AMT workers who were native Spanish speakers from 11 countries participated in the ratings. Intraclass correlation coefficients were used to estimate group-level interrater reliability, and Rasch analyses were undertaken to examine individual differences in rater severity and fit. Excellent reliability was observed for the comprehensibility and fluency ratings, but indices were slightly lower for accentedness, leading to recommendations to improve the task for future data collection.
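One reason crowdsourced ratings can be reliable at the group level even when individual raters are noisy is that averaging across raters boosts reliability. The Spearman-Brown formula below makes that relationship concrete; it is a general-purpose illustration, not the Rasch or ICC analysis performed in the study:

```python
# Reliability of a k-rater average from a single-rater ICC
# (Spearman-Brown prophecy formula).
def spearman_brown(single_rater_icc: float, k: int) -> float:
    return k * single_rater_icc / (1 + (k - 1) * single_rater_icc)

# Even modest single-rater reliability becomes strong group-level
# reliability once many crowdsourced ratings are averaged.
for icc in (0.2, 0.4):
    print(icc, [round(spearman_brown(icc, k), 2) for k in (5, 10, 50)])
```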


2002, Vol 96 (5), pp. 1129-1139
Author(s): Jason Slagle, Matthew B. Weinger, My-Than T. Dinh, Vanessa V. Brumer, Kevin Williams

Background Task analysis may be useful for assessing how anesthesiologists alter their behavior in response to different clinical situations. In this study, the authors examined the intraobserver and interobserver reliability of an established task analysis methodology. Methods During 20 routine anesthetic procedures, a trained observer sat in the operating room and categorized the anesthetist's activities in real time into 38 task categories. Two weeks later, the same observer performed task analysis from videotapes obtained intraoperatively. A different observer performed task analysis from the videotapes on two separate occasions. Data were analyzed for percent of time spent on each task category, average task duration, and number of task occurrences. Rater reliability and agreement were assessed using intraclass correlation coefficients. Results Intrarater reliability was generally good for categorization of percent time on task and task occurrence (mean intraclass correlation coefficients of 0.84-0.97). There was comparably high concordance between real-time and video analyses. Interrater reliability was generally good for percent time and task occurrence measurements. However, the interrater reliability of the task duration metric was unsatisfactory, primarily because of the technique used to capture multitasking. Conclusions A task analysis technique used in anesthesia research for several decades showed good intrarater reliability. Off-line analysis of videotapes is a viable alternative to real-time data collection. Acceptable interrater reliability requires the use of strict task definitions, sophisticated software, and rigorous observer training. New techniques must be developed to more accurately capture multitasking. Substantial effort is required to conduct task analyses that will have sufficient reliability for purposes of research or clinical evaluation.
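Turning a time-stamped task log of the kind this methodology produces into percent-time-on-task and occurrence counts is a simple aggregation. The sketch below uses invented task names and timestamps, not the study's 38-category scheme:

```python
# Percent time on task and occurrence counts from (task, start, end) events.
from collections import defaultdict

events = [                        # seconds; invented example records
    ("record keeping",       0,  45),
    ("observing monitors",  45, 120),
    ("drug administration", 120, 150),
    ("observing monitors",  150, 300),
]

durations = defaultdict(float)
counts = defaultdict(int)
for task, start, end in events:
    durations[task] += end - start
    counts[task] += 1

total = sum(durations.values())
for task, secs in durations.items():
    print(f"{task:20s} {100 * secs / total:5.1f}%  ({counts[task]}x)")
```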


2000, Vol 80 (2), pp. 168-178
Author(s): Suh-Fang Jeng, Kuo-Inn Tsou Yau, Li-Chiou Chen, Shu-Fang Hsiao

Abstract Background and Purpose. The goal of this study was to examine the reliability and validity of measurements obtained with the Alberta Infant Motor Scale (AIMS) for evaluation of preterm infants in Taiwan. Subjects. Two independent groups of preterm infants were used to investigate the reliability (n=45) and validity (n=41) of the AIMS. Methods. In the reliability study, the AIMS was administered to the infants by a physical therapist, and infant performance was videotaped. The performance was then rescored by the same therapist and by 2 other therapists to examine the intrarater and interrater reliability. In the validity study, the AIMS and the Bayley Motor Scale were administered to the infants at 6 and 12 months of age to examine criterion-related validity. Results. Intraclass correlation coefficients (ICCs) for intrarater and interrater reliability of measurements obtained with the AIMS were high (ICC=.97–.99). The AIMS scores correlated with the Bayley Motor Scale scores at 6 and 12 months (r=.78 and .90), although the AIMS scores at 6 months were only moderately predictive of the motor function at 12 months (r=.56). Conclusion and Discussion. The results suggest that measurements obtained with the AIMS have acceptable reliability and concurrent validity but limited predictive value for evaluating preterm Taiwanese infants.
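The distinction between concurrent and predictive validity drawn here is simply a matter of which measurements are correlated: scores from the same session versus scores separated in time. A sketch with simulated stand-in scores:

```python
# Concurrent vs predictive validity as correlations at different lags.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
aims_6mo = rng.normal(50, 10, size=41)               # simulated AIMS scores
bayley_6mo = aims_6mo + rng.normal(0, 6, size=41)    # same-age criterion
bayley_12mo = aims_6mo + rng.normal(0, 12, size=41)  # later outcome

r_conc, _ = pearsonr(aims_6mo, bayley_6mo)
r_pred, _ = pearsonr(aims_6mo, bayley_12mo)
print(f"concurrent r = {r_conc:.2f}, predictive r = {r_pred:.2f}")
```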


2012, Vol 92 (9), pp. 1197-1207
Author(s): Parminder K. Padgett, Jesse V. Jacobs, Susan L. Kasser

Background The Balance Evaluation Systems Test (BESTest) and Mini-BESTest are clinical examinations of balance impairment, but the tests are lengthy and the Mini-BESTest is theoretically inconsistent with the BESTest. Objective The purpose of this study was to generate an alternative version of the BESTest that is valid, reliable, time efficient, and founded upon the same theoretical underpinnings as the original test. Design This was a cross-sectional study. Methods Three raters evaluated 20 people with and without a neurological diagnosis. Test items with the highest item-section correlations defined the new Brief-BESTest. The validity of the BESTest, the Mini-BESTest, and the new Brief-BESTest to identify people with or without a neurological diagnosis was compared. Interrater reliability of the test versions was evaluated by intraclass correlation coefficients. Validity was further investigated by determining the ability of each version of the examination to identify the fall status of a second cohort of 26 people with and without multiple sclerosis. Results Items of hip abductor strength, functional reach, one-leg stance, lateral push-and-release, standing on foam with eyes closed, and the Timed “Up & Go” Test defined the Brief-BESTest. Intraclass correlation coefficients for all examination versions were greater than .98. The accuracy of identifying people from the first cohort with or without a neurological diagnosis was 78% for the BESTest versus 72% for the Mini-BESTest or Brief-BESTest. The sensitivity to fallers from the second cohort was 100% for the Brief-BESTest, 71% for the Mini-BESTest, and 86% for the BESTest, and all versions exhibited specificity of 95% to 100% to identify nonfallers. Limitations Further testing is needed to improve the generalizability of findings. Conclusions Although preliminary, the Brief-BESTest demonstrated reliability comparable to that of the Mini-BESTest and potentially superior sensitivity while requiring half the items of the Mini-BESTest and representing all theoretically based sections of the original BESTest.
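The sensitivity and specificity percentages compared across test versions reduce to simple ratios over the faller/nonfaller counts. A sketch with hypothetical counts shaped like the 26-person second cohort (the exact cell counts are assumptions):

```python
# Sensitivity and specificity from a 2x2 classification of fallers.
def sens_spec(tp: int, fn: int, tn: int, fp: int) -> tuple[float, float]:
    sensitivity = tp / (tp + fn)   # fallers correctly identified
    specificity = tn / (tn + fp)   # nonfallers correctly identified
    return sensitivity, specificity

# e.g., flagging 6 of 7 fallers and clearing 18 of 19 nonfallers
sens, spec = sens_spec(tp=6, fn=1, tn=18, fp=1)
print(f"sensitivity = {sens:.0%}, specificity = {spec:.0%}")
```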


2004, Vol 84 (10), pp. 906-918
Author(s): Diane M Wrisley, Gregory F Marchetti, Diane K Kuharsky, Susan L Whitney

Background and Purpose. The Functional Gait Assessment (FGA) is a 10-item gait assessment based on the Dynamic Gait Index. The purpose of this study was to evaluate the reliability, internal consistency, and validity of data obtained with the FGA when used with people with vestibular disorders. Subjects. Seven physical therapists from various practice settings, 3 physical therapist students, and 6 patients with vestibular disorders volunteered to participate. Methods. All raters were given 10 minutes to review the instructions, the test items, and the grading criteria for the FGA. The 10 raters concurrently rated the performance of the 6 patients on the FGA. Patients completed the FGA twice, with an hour's rest between sessions. Reliability of total FGA scores was assessed using intraclass correlation coefficients (2,1). Internal consistency of the FGA was assessed using the Cronbach alpha and confirmatory factor analysis. Concurrent validity was assessed using the correlation of the FGA scores with balance and gait measurements. Results. Intraclass correlation coefficients of .86 and .74 were found for interrater and intrarater reliability of the total FGA scores. Internal consistency of the FGA scores was .79. Spearman rank order correlation coefficients of the FGA scores with balance measurements ranged from .11 to .67. Discussion and Conclusion. The FGA demonstrates what we believe is acceptable reliability, internal consistency, and concurrent validity with other balance measures used for patients with vestibular disorders.
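The abstract names ICC (2,1) in the Shrout-Fleiss taxonomy: a two-way random-effects, absolute-agreement, single-rater coefficient. The sketch below computes it from ANOVA mean squares, using the classic worked example from Shrout and Fleiss (1979) rather than the study's data:

```python
# ICC(2,1) from two-way ANOVA mean squares.
import numpy as np

def icc_2_1(x: np.ndarray) -> float:
    """x: subjects (rows) x raters (columns)."""
    n, k = x.shape
    grand = x.mean()
    ss_rows = k * ((x.mean(axis=1) - grand) ** 2).sum()
    ss_cols = n * ((x.mean(axis=0) - grand) ** 2).sum()
    ss_err = ((x - grand) ** 2).sum() - ss_rows - ss_cols
    msr = ss_rows / (n - 1)              # between-subjects mean square
    msc = ss_cols / (k - 1)              # between-raters mean square
    mse = ss_err / ((n - 1) * (k - 1))   # residual mean square
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

ratings = np.array([[9, 2, 5, 8], [6, 1, 3, 2], [8, 4, 6, 8],
                    [7, 1, 2, 6], [10, 5, 6, 9], [6, 2, 4, 7]], float)
print(f"ICC(2,1) = {icc_2_1(ratings):.2f}")   # 0.29 for this dataset
```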


2019, Vol 29 (1), pp. 32295
Author(s): Margareth Rodrigues Salerno, Fábio Herrmann, Leticia Manoel Debon, Matheus Dorigatti Soldatelli, Gabriele Carra Forte, ...

AIMS: To validate the Brazilian version of the Fresno test of competence in Evidence-Based Medicine. METHODS: This was a cross-sectional validation study. Phase 1: translation of the Fresno instrument. Phase 2: validation of the translated version, which was tested in 70 undergraduate medical students. The psychometric properties evaluated were validity, internal consistency, and sensitivity to change. RESULTS: Overall, validity was adequate; most items showed a moderate to strong, significant correlation with the total score, and there was a large and significant difference between the groups with and without previous contact with Evidence-Based Medicine (median, 55 [IQR 25-75, 45.2-61.7] vs. median, 18.5 [IQR 25-75, 6.0-29.7]; p < 0.001). Internal consistency was also adequate (Cronbach α = 0.718), and sensitivity to change showed a considerable and significant difference between pretest and posttest (median, 18.5 [IQR 25-75, 6.0-29.7] vs. median, 44 [IQR 25-75, 34.0-60.0]; p < 0.001). CONCLUSIONS: The Brazilian version of the Fresno test showed satisfactory psychometric properties and can now be used to assess knowledge and skills in Evidence-Based Medicine among Brazilian medical students.
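"Sensitivity to change" here is a paired pre/post comparison on the same students; with skewed scores summarized as medians and interquartile ranges, a Wilcoxon signed-rank test is the usual choice. The sketch below uses simulated scores, not the study's data:

```python
# Paired pre/post comparison with a Wilcoxon signed-rank test.
import numpy as np
from scipy.stats import wilcoxon

rng = np.random.default_rng(2)
pre = rng.normal(18, 8, size=70)           # simulated pretest scores
post = pre + rng.normal(26, 10, size=70)   # improvement after teaching

stat, p = wilcoxon(pre, post)
print(f"median pre = {np.median(pre):.1f}, post = {np.median(post):.1f}, "
      f"Wilcoxon p = {p:.3g}")
```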


2021, pp. 1-23
Author(s): Kara Vasil, Jessica Lewis, Christin Ray, Jodi Baxter, Claire Bernstein, ...

Purpose The Cochlear Implant Skills Review (CISR) was developed as a measure of cochlear implant (CI) users' skills and knowledge regarding device use. This study aimed to determine intra- and interrater reliability and agreement and establish construct validity for the CISR. Method In this study, the CISR was developed and administered to a cohort of 30 adult CI users. Participants included new CI users with less than 1 year of CI experience and experienced CI users with greater than 1 year of CI experience. The CISR administration required participants to demonstrate skills using the various features of their CI processors. Intra- and interrater reliability were assessed using intraclass correlation coefficients, agreement was assessed using Cohen's kappa, and construct validity was assessed by relating CISR performance to duration of CI use. Results Overall reliability for the entire instrument was 92.7%. Inter- and intrarater agreement were generally substantial or higher. Duration of CI use was a significant predictor of CISR performance. Conclusions The CISR is a reliable and valid assessment measure of device skills and knowledge for adult CI users. Clinicians can use this tool to evaluate areas of needed instruction and counseling and to assess users' skills over time.
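Cohen's kappa, used here for agreement, corrects raw percent agreement for the agreement expected by chance. A sketch with invented pass/fail labels (sklearn is an assumed dependency):

```python
# Chance-corrected agreement between two raters (sklearn assumed installed).
from sklearn.metrics import cohen_kappa_score

rater_1 = ["pass", "pass", "fail", "pass", "fail", "pass", "pass", "fail"]
rater_2 = ["pass", "pass", "fail", "fail", "fail", "pass", "pass", "fail"]

kappa = cohen_kappa_score(rater_1, rater_2)
print(f"Cohen's kappa = {kappa:.2f}")  # 0.75: "substantial" (Landis & Koch)
```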

