Development of a Model for the Acquisition and Assessment of Advanced Laparoscopic Suturing Skills Using an Automated Device

2018, Vol 25(3), pp. 286-290
Author(s): Elif Bilgic, Madoka Takao, Pepa Kaneva, Satoshi Endo, Toshitatsu Takao, ...

Background. A needs assessment identified a gap in the laparoscopic suturing skills targeted in simulation. This study collected validity evidence for an advanced laparoscopic suturing task using an Endo Stitch™ device. Methods. Experienced surgeons (ES) and novice surgeons (NS) performed continuous suturing after watching an instructional video. Scores were based on time and accuracy and on the Global Operative Assessment of Laparoscopic Surgery (GOALS). Data are shown as medians [25th-75th percentiles] (ES vs NS). Interrater reliability was calculated using intraclass correlation coefficients (with confidence intervals). Results. Seventeen participants were enrolled. Experienced surgeons had significantly greater task scores (980 [964-999] vs 666 [391-711], P = .0035) and GOALS scores (25 [24-25] vs 14 [12-17], P = .0029). Interrater reliability was 1.0 for time and 0.9 (0.74-0.96) for accuracy. All experienced surgeons agreed that the task was relevant to practice. Conclusion. This study provides validity evidence for the task as a measure of laparoscopic suturing skill with an automated suturing device. It could help trainees acquire the skills they need to better prepare for clinical learning.
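
For readers who want to reproduce this kind of analysis, the single-measure, two-way random-effects ICC(2,1) of Shrout and Fleiss (1979) can be computed directly from the ANOVA mean squares. The sketch below is illustrative only; the rater scores are hypothetical, not data from this study.

    import numpy as np

    def icc_2_1(ratings):
        # Two-way random-effects, single-measure ICC(2,1), Shrout & Fleiss (1979).
        # ratings: (n subjects x k raters) array with no missing cells.
        ratings = np.asarray(ratings, dtype=float)
        n, k = ratings.shape
        grand = ratings.mean()
        ss_total = ((ratings - grand) ** 2).sum()
        ss_subj = k * ((ratings.mean(axis=1) - grand) ** 2).sum()
        ss_rater = n * ((ratings.mean(axis=0) - grand) ** 2).sum()
        ss_error = ss_total - ss_subj - ss_rater
        bms = ss_subj / (n - 1)                # between-subjects mean square
        jms = ss_rater / (k - 1)               # between-raters mean square
        ems = ss_error / ((n - 1) * (k - 1))   # residual mean square
        return (bms - ems) / (bms + (k - 1) * ems + k * (jms - ems) / n)

    # Hypothetical task scores from two raters for five performances
    scores = np.array([[980, 975], [666, 660], [711, 708], [391, 402], [964, 969]])
    print(round(icc_2_1(scores), 3))

Confidence intervals like the 0.74-0.96 interval reported above are conventionally derived from the F distribution of these same mean squares.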

1991, Vol 34(5), pp. 989-999
Author(s): Stephanie Shaw, Truman E. Coggins

This study examines whether observers reliably categorize selected speech production behaviors in hearing-impaired children. A group of experienced speech-language pathologists was trained to score the elicited imitations of 5 profoundly and 5 severely hearing-impaired subjects using the Phonetic Level Evaluation (Ling, 1976). Interrater reliability was calculated using intraclass correlation coefficients. Overall, the magnitude of the coefficients was found to be considerably below what would be accepted in published behavioral research. Failure to obtain acceptably high levels of reliability suggests that the Phonetic Level Evaluation may not yet be an accurate and objective speech assessment measure for hearing-impaired children.


1997, Vol 17(4), pp. 280-287
Author(s): Margaret Wallen, Mary-Ann Bonney, Lyn Lennox

The Handwriting Speed Test (HST), a standardized, norm-referenced test, was developed to provide an objective evaluation of the handwriting speed of school students from approximately 8 to 18 years of age. Part of the test development involved an examination of interrater reliability. Two raters scored 165 (13%) of the total 1292 handwriting samples. Using intraclass correlation coefficients, the interrater reliability was found to be excellent (ICC=1.00, P<0.0001). The process of examining interrater reliability resulted in modification to the scoring criteria of the test. Excellent interrater reliability provides support for the HST as a valuable clinical and research tool.
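
As a point of reference, the P value reported alongside an ICC typically comes from an F test of the null hypothesis that the ICC is zero. In the Shrout and Fleiss framework for a two-way design with n samples and k raters, using the between-subjects and residual mean squares (BMS and EMS, as in the ICC sketch above), the standard test is (a sketch of the conventional procedure, not necessarily the HST authors' exact computation):

    \[
      F = \frac{\mathrm{BMS}}{\mathrm{EMS}},
      \qquad df_1 = n - 1,
      \qquad df_2 = (n - 1)(k - 1)
    \]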


2013, Vol 5(2), pp. 252-256
Author(s): Hans B. Kersten, John G. Frohna, Erin L. Giudice

Abstract Background Competence in evidence-based medicine (EBM) is an important clinical skill. Pediatrics residents are expected to acquire competence in EBM during their education, yet few validated tools exist to assess residents' EBM skills. Objective We sought to develop a reliable tool to evaluate residents' EBM skills in the critical appraisal of a research article, the development of a written EBM critically appraised topic (CAT) synopsis, and the presentation of the findings to colleagues. Methods Instrument development used a modified Delphi technique. We defined the skills to be assessed while reviewing (1) a written CAT synopsis and (2) a resident's EBM presentation. We defined skill levels for each item using the Dreyfus and Dreyfus model of skill development and created behavioral anchors using a frame-of-reference training technique to describe performance at each skill level. We evaluated the assessment instrument's psychometric properties, including internal consistency and interrater reliability. Results The EBM Critically Appraised Topic Presentation Evaluation Tool (EBM C-PET) is composed of 14 items that assess residents' EBM and global presentation skills. Resident presentations (N = 27) and the corresponding written CAT synopses were evaluated using the EBM C-PET. The EBM C-PET had excellent internal consistency (Cronbach α = 0.94). Interrater reliability was assessed with intraclass correlation coefficients, which ranged from 0.31 to 0.74 for individual items; the average across the 14 items was 0.67. Conclusions We identified essential components of an assessment tool for an EBM CAT synopsis and presentation with excellent internal consistency and a good level of interrater reliability across 3 different institutions. The EBM C-PET is a reliable tool for documenting resident competence in higher-level EBM skills.
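
Cronbach's α, the internal-consistency index reported here, has a simple closed form: with k items, α = k/(k−1) · (1 − Σ item variances / variance of the total score). A minimal sketch follows; the 27 × 14 shape mirrors the EBM C-PET data set, but the values are made up.

    import numpy as np

    def cronbach_alpha(items):
        # items: (n respondents x k items) array of item scores
        items = np.asarray(items, dtype=float)
        k = items.shape[1]
        item_vars = items.var(axis=0, ddof=1).sum()
        total_var = items.sum(axis=1).var(ddof=1)
        return k / (k - 1) * (1 - item_vars / total_var)

    # Hypothetical ratings: 27 presentations x 14 items, built to be correlated
    rng = np.random.default_rng(0)
    base = rng.normal(size=(27, 1))
    fake = base + rng.normal(scale=0.5, size=(27, 14))
    print(round(cronbach_alpha(fake), 2))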


2019, Vol 5(2), pp. 294-323
Author(s): Charles Nagle

Abstract Researchers have increasingly turned to Amazon Mechanical Turk (AMT) to crowdsource speech data, predominantly in English. Although AMT and similar platforms are well positioned to enhance the state of the art in L2 research, it is unclear if crowdsourced L2 speech ratings are reliable, particularly in languages other than English. The present study describes the development and deployment of an AMT task to crowdsource comprehensibility, fluency, and accentedness ratings for L2 Spanish speech samples. Fifty-four AMT workers who were native Spanish speakers from 11 countries participated in the ratings. Intraclass correlation coefficients were used to estimate group-level interrater reliability, and Rasch analyses were undertaken to examine individual differences in rater severity and fit. Excellent reliability was observed for the comprehensibility and fluency ratings, but indices were slightly lower for accentedness, leading to recommendations to improve the task for future data collection.
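
The distinction between single-rater and group-level reliability matters for crowdsourcing: averaging over k raters boosts reliability according to the Spearman-Brown relation (a standard psychometric result, assuming interchangeable raters), so even modest single-rater agreement can yield excellent group-level indices when many AMT workers rate each sample. With ICC(2,1) as the single-rater value:

    \[
      \mathrm{ICC}(2,k) = \frac{k\,\mathrm{ICC}(2,1)}{1 + (k - 1)\,\mathrm{ICC}(2,1)}
    \]

For example, a single-rater ICC of .30 averaged over k = 20 raters gives 20(.30)/(1 + 19(.30)) ≈ .90.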


2002, Vol 96(5), pp. 1129-1139
Author(s): Jason Slagle, Matthew B. Weinger, My-Than T. Dinh, Vanessa V. Brumer, Kevin Williams

Background Task analysis may be useful for assessing how anesthesiologists alter their behavior in response to different clinical situations. In this study, the authors examined the intraobserver and interobserver reliability of an established task analysis methodology. Methods During 20 routine anesthetic procedures, a trained observer sat in the operating room and categorized the anesthetist's activities in real time into 38 task categories. Two weeks later, the same observer performed task analysis from videotapes obtained intraoperatively. A different observer performed task analysis from the videotapes on two separate occasions. Data were analyzed for percent of time spent on each task category, average task duration, and number of task occurrences. Rater reliability and agreement were assessed using intraclass correlation coefficients. Results Intrarater reliability was generally good for categorization of percent time on task and task occurrence (mean intraclass correlation coefficients of 0.84-0.97). There was a comparably high concordance between real-time and video analyses. Interrater reliability was generally good for percent time and task occurrence measurements. However, the interrater reliability of the task duration metric was unsatisfactory, primarily because of the technique used to capture multitasking. Conclusions A task analysis technique used in anesthesia research for several decades showed good intrarater reliability. Off-line analysis of videotapes is a viable alternative to real-time data collection. Acceptable interrater reliability requires the use of strict task definitions, sophisticated software, and rigorous observer training. New techniques must be developed to more accurately capture multitasking. Substantial effort is required to conduct task analyses that will have sufficient reliability for purposes of research or clinical evaluation.
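
The three metrics analyzed here (percent time per task, average task duration, occurrence counts) reduce to simple aggregation over timestamped task intervals. The sketch below uses hypothetical task names and times; note that it assumes non-overlapping intervals, which is exactly where the multitasking problem the authors describe arises.

    from collections import defaultdict

    def summarize_tasks(events, total_time):
        # events: list of (task, start, end) intervals from one observed case.
        # Assumes intervals do not overlap; concurrent tasks (multitasking)
        # would need a different representation.
        time_on = defaultdict(float)
        counts = defaultdict(int)
        for task, start, end in events:
            time_on[task] += end - start
            counts[task] += 1
        return {
            task: {
                "pct_time": 100.0 * time_on[task] / total_time,
                "mean_duration": time_on[task] / counts[task],
                "occurrences": counts[task],
            }
            for task in time_on
        }

    # Hypothetical log: (task category, start in seconds, end in seconds)
    log = [("record_keeping", 0, 45), ("observe_monitor", 45, 80), ("record_keeping", 80, 95)]
    print(summarize_tasks(log, total_time=95))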


2000, Vol 80(2), pp. 168-178
Author(s): Suh-Fang Jeng, Kuo-Inn Tsou Yau, Li-Chiou Chen, Shu-Fang Hsiao

Abstract Background and Purpose. The goal of this study was to examine the reliability and validity of measurements obtained with the Alberta Infant Motor Scale (AIMS) for evaluation of preterm infants in Taiwan. Subjects. Two independent groups of preterm infants were used to investigate the reliability (n=45) and validity (n=41) of the AIMS. Methods. In the reliability study, the AIMS was administered to the infants by a physical therapist, and infant performance was videotaped. The performance was then rescored by the same therapist and by 2 other therapists to examine intrarater and interrater reliability. In the validity study, the AIMS and the Bayley Motor Scale were administered to the infants at 6 and 12 months of age to examine criterion-related validity. Results. Intraclass correlation coefficients (ICCs) for intrarater and interrater reliability of measurements obtained with the AIMS were high (ICC=.97–.99). The AIMS scores correlated with the Bayley Motor Scale scores at 6 and 12 months (r=.78 and .90), although the AIMS scores at 6 months were only moderately predictive of motor function at 12 months (r=.56). Conclusion and Discussion. The results suggest that measurements obtained with the AIMS have acceptable reliability and concurrent validity but limited predictive value for evaluating preterm Taiwanese infants.


2012, Vol 92(9), pp. 1197-1207
Author(s): Parminder K. Padgett, Jesse V. Jacobs, Susan L. Kasser

Background The Balance Evaluation Systems Test (BESTest) and Mini-BESTest are clinical examinations of balance impairment, but the tests are lengthy and the Mini-BESTest is theoretically inconsistent with the BESTest. Objective The purpose of this study was to generate an alternative version of the BESTest that is valid, reliable, time efficient, and founded upon the same theoretical underpinnings as the original test. Design This was a cross-sectional study. Methods Three raters evaluated 20 people with and without a neurological diagnosis. Test items with the highest item-section correlations defined the new Brief-BESTest. The validity of the BESTest, the Mini-BESTest, and the new Brief-BESTest to identify people with or without a neurological diagnosis was compared. Interrater reliability of the test versions was evaluated by intraclass correlation coefficients. Validity was further investigated by determining the ability of each version of the examination to identify the fall status of a second cohort of 26 people with and without multiple sclerosis. Results Items of hip abductor strength, functional reach, one-leg stance, lateral push-and-release, standing on foam with eyes closed, and the Timed “Up & Go” Test defined the Brief-BESTest. Intraclass correlation coefficients for all examination versions were greater than .98. The accuracy of identifying people from the first cohort with or without a neurological diagnosis was 78% for the BESTest versus 72% for the Mini-BESTest or Brief-BESTest. The sensitivity to fallers from the second cohort was 100% for the Brief-BESTest, 71% for the Mini-BESTest, and 86% for the BESTest, and all versions exhibited specificity of 95% to 100% to identify nonfallers. Limitations Further testing is needed to improve the generalizability of findings. Conclusions Although preliminary, the Brief-BESTest demonstrated reliability comparable to that of the Mini-BESTest and potentially superior sensitivity while requiring half the items of the Mini-BESTest and representing all theoretically based sections of the original BESTest.
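
The fall-status validation rests on the standard definitions sensitivity = TP/(TP + FN) and specificity = TN/(TN + FP). A minimal sketch, using hypothetical labels rather than the study's cohort data:

    def sensitivity_specificity(predicted, actual):
        # predicted, actual: equal-length sequences of booleans (True = faller)
        pairs = list(zip(predicted, actual))
        tp = sum(p and a for p, a in pairs)          # true positives
        fn = sum(not p and a for p, a in pairs)      # missed fallers
        tn = sum(not p and not a for p, a in pairs)  # true negatives
        fp = sum(p and not a for p, a in pairs)      # false alarms
        return tp / (tp + fn), tn / (tn + fp)

    # Hypothetical screen results for 10 people, 4 of whom actually fell
    pred = [True, True, True, True, False, False, False, False, False, True]
    truth = [True, True, True, True, False, False, False, False, False, False]
    print(sensitivity_specificity(pred, truth))  # (1.0, 0.833...)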


2021, pp. 1-23
Author(s): Kara Vasil, Jessica Lewis, Christin Ray, Jodi Baxter, Claire Bernstein, ...

Purpose The Cochlear Implant Skills Review (CISR) was developed as a measure of cochlear implant (CI) users' skills and knowledge regarding device use. This study aimed to determine intra- and interrater reliability and agreement and establish construct validity for the CISR. Method In this study, the CISR was developed and administered to a cohort of 30 adult CI users. Participants included new CI users with less than 1 year of CI experience and experienced CI users with greater than 1 year of CI experience. The CISR administration required participants to demonstrate skills using the various features of their CI processors. Intra- and interrater reliability were assessed using intraclass correlation coefficients, agreement was assessed using Cohen's kappa, and construct validity was assessed by relating CISR performance to duration of CI use. Results Overall reliability for the entire instrument was 92.7%. Inter- and intrarater agreement were generally substantial or higher. Duration of CI use was a significant predictor of CISR performance. Conclusions The CISR is a reliable and valid assessment measure of device skills and knowledge for adult CI users. Clinicians can use this tool to evaluate areas of needed instruction and counseling and to assess users' skills over time.
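
Cohen's kappa, used here for agreement, corrects raw percent agreement for chance: κ = (p_o − p_e)/(1 − p_e), where p_e comes from the raters' marginal category frequencies. A minimal sketch with hypothetical pass (P) / fail (F) judgments; values of .61-.80 fall in the "substantial" band of the Landis and Koch convention the authors appear to use.

    from collections import Counter

    def cohens_kappa(r1, r2):
        # r1, r2: equal-length sequences of categorical ratings from two raters
        n = len(r1)
        p_o = sum(a == b for a, b in zip(r1, r2)) / n  # observed agreement
        c1, c2 = Counter(r1), Counter(r2)
        p_e = sum(c1[cat] * c2[cat] for cat in c1.keys() | c2.keys()) / (n * n)
        return (p_o - p_e) / (1 - p_e)

    # Hypothetical judgments on ten skill items
    print(round(cohens_kappa("PPFPPPFPFP", "PPFPPFFPFP"), 2))  # ~0.78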


Author(s): James C. Borders, Jordanna S. Sevitz, Jaime Bauer Malandraki, Georgia A. Malandraki, Michelle S. Troche

Purpose The COVID-19 pandemic has drastically increased the use of telehealth. Prior studies of telehealth clinical swallowing evaluations provide positive evidence for telemanagement of swallowing. However, the reliability of these measures in clinical practice, as opposed to well-controlled research conditions, remains unknown. This study aimed to investigate the reliability of outcome measures derived from clinical swallowing tele-evaluations in real-world clinical practice (e.g., variability in devices and Internet connectivity, lack of in-person clinician assistance, or remote patient/caregiver training). Method Seven raters asynchronously judged clinical swallowing tele-evaluations of 12 patients with movement disorders. Outcomes included the Timed Water Swallow Test (TWST), Test of Masticating and Swallowing Solids (TOMASS), and common observations of oral intake. Statistical analyses examined inter- and intrarater reliability, and qualitative analyses explored patient- and clinician-specific factors affecting reliability. Results Forty-four trials were included for reliability analyses. All rater dyads demonstrated "good" to "excellent" interrater reliability for measures of the TWST (intraclass correlation coefficients [ICCs] ≥ .93) and observations of oral intake (≥ 77% agreement). The majority of TOMASS outcomes demonstrated "good" to "excellent" interrater reliability (ICCs ≥ .84), with the exception of the number of bites (ICCs = .43–.99) and swallows (ICCs = .21–.85). Immediate and delayed intrarater reliability were "excellent" for most raters across all tasks, ranging between ICCs of .63 and 1.00. Exploratory factors potentially impacting reliability included infrequent instances of suboptimal video quality, reduced camera stability, camera distance, and obstruction of the patient's mouth during tasks. Conclusions Subjective observations of oral intake and objective measures taken from the TWST and the TOMASS can be reliably measured via telehealth in clinical practice. Our results provide support for the feasibility and reliability of telehealth for outpatient clinical swallowing evaluations during COVID-19 and beyond. Supplemental Material https://doi.org/10.23641/asha.13661378


1998, Vol 6(4), pp. 317-326
Author(s): Johanne Desrosiers, François Prince, Annie Rochette, Michel Raîche

The objectives of this study were to standardize measurement procedures and study the test-retest and interrater reliability of the belt-resisted method for measuring the lower extremity isometric strength of three muscle groups. The strength of 33 healthy, elderly, community-dwelling subjects was evaluated with a hand-held dynamometer using the belt-resisted method. Isometric strength testing of three muscle groups (hip flexors, knee extensors, and ankle dorsiflexors) was performed on two separate occasions, 1 week apart, by the same tester to determine test-retest reliability. The test results of two different examiners testing on different days were used to determine interrater reliability. Test-retest reliability was higher than interrater reliability. Test-retest reliability coefficients of the three muscle groups were high (.89–.95). For interrater reliability, intraclass correlation coefficients varied from .64 to .92, depending on the muscle group and side. For the two kinds of reliability, intraclass correlation coefficients increased from proximal to distal. The method for the hip muscle group should be modified to increase the reliability of the measure.

