Objective Assessment of Microsurgery Competency—In Search of a Validated Tool

2019 · Vol 52 (02) · pp. 216-221
Author(s): Sheeja Rajan, Ranjith Sathyan, L. S. Sreelesh, Anu Anto Kallerey, Aarathy Antharjanam, ...

Abstract Microsurgical skill acquisition is an integral component of training in plastic surgery. Current microsurgical training is based on the subjective Halstedian model. An ideal microsurgery assessment tool should be able to deconstruct all the subskills of microsurgery and assess them objectively and reliably. For our study, to analyze the feasibility, reliability, and validity of microsurgery skill assessment, a video-based objective structured assessment of technical skill tool was chosen. Two blinded experts evaluated 40 videos of six residents performing microsurgical anastomosis for arteriovenous fistula surgery. The generic Reznick's global rating score (GRS) and the University of Western Ontario microsurgical skills acquisition/assessment (UWOMSA) instrument were used as checklists. Correlation coefficients of 0.75 to 0.80 (UWOMSA) and 0.71 to 0.77 (GRS) for interrater and intrarater reliability showed that the assessment tools were reliable. Convergent validity of the UWOMSA tool with the prevalidated GRS tool showed good agreement. The mean improvement of scores with years of residency was measured with analysis of variance. Both UWOMSA (p-value: 0.034) and GRS (p-value: 0.037) demonstrated significant improvement in scores from postgraduate year 1 (PGY1) to PGY2 and a less marked improvement from PGY2 to PGY3. We conclude that objective assessment of microsurgical skills in an actual clinical setting is feasible. Tools like the UWOMSA are valid and reliable for microsurgery assessment and provide feedback to chart the progression of learning. Acceptance and validation of such objective assessments will help to improve training and bring uniformity to microsurgery education.
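
As a rough illustration of the analyses reported above, the following is a minimal Python sketch (hypothetical data, not the study's) of interrater reliability as a correlation between two raters' scores and a one-way ANOVA comparing scores across residency years.

```python
# Minimal sketch with hypothetical scores; group sizes, means, and variances are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical UWOMSA totals for 40 videos scored by two blinded raters
rater_a = rng.normal(30, 4, 40)
rater_b = rater_a + rng.normal(0, 2, 40)   # second rater broadly agrees with the first

interrater_r, _ = stats.pearsonr(rater_a, rater_b)
print(f"Interrater correlation: {interrater_r:.2f}")

# Hypothetical scores grouped by postgraduate year
pgy1 = rng.normal(25, 3, 14)
pgy2 = rng.normal(30, 3, 13)
pgy3 = rng.normal(31, 3, 13)

f_stat, p_value = stats.f_oneway(pgy1, pgy2, pgy3)
print(f"ANOVA across PGY1-PGY3: F = {f_stat:.2f}, p = {p_value:.3f}")
```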

Author(s): Khamis Elessi, Shireen Abed, Tayseer Jamal Afifi, Rawan Utt, Mahmood Elblbessy, ...

Background: Neonates frequently experience pain as a result of diagnostic or therapeutic interventions or as a result of a disease process. Neonates cannot verbalize their pain experience and depend on others to recognize, assess and manage their pain. Neonates may suffer immediate or long-term consequences of unrelieved pain. Accurate assessment of pain is essential to provide adequate management. Observational scales, which include physiological and behavioral responses to pain, are available to aid consistent pain management. Pain assessment is considered the fifth vital sign. Objectives: The aims of the present study were (1) to compare two commonly cited neonatal pain assessment tools, the Neonatal Pain, Agitation and Sedation Scale (N-PASS) and the modified Pain Assessment Tool (mPAT), with regard to their psychometric qualities, (2) to explore clinicians' intuitive ratings by relating them to the tools' items and (3) to ensure that neonates receive adequate pain control. Methods: Two coders applied both pain assessment tools to 850 neonates undergoing a painful or a stressful procedure. Each neonate was assessed before, during and after the procedure. The evaluation before and after the procedure was done using the N-PASS, while the pain score during the procedure was assessed with the mPAT. Analyses of variance and regression analyses were used to investigate whether the tools could discriminate between the procedures and whether the tools' items were predictors of pain severity. Results: Internal consistency, reliability and validity were high for both assessment tools. The N-PASS discriminated between painful and stressful situations better than the mPAT. There was no relation between the age of the neonate and the pain score. Moreover, statistically significant associations were found between the mPAT score and the post-procedural assessment score, as well as between the pre- and post-procedural assessment scores. Conclusion: Both assessment tools performed equally well regarding physiologic parameters. However, the N-PASS makes it possible to assess pain during sedation. It was noticed that gaps exist between practitioner knowledge and attitudes regarding neonatal pain.
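
As a rough illustration of the internal consistency figures reported above, a minimal Python sketch of Cronbach's alpha computed over hypothetical item scores; the number of items, sample, and values are assumptions, not the study's data.

```python
# Minimal sketch: Cronbach's alpha for a set of correlated scale items (hypothetical data).
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: 2-D array with rows = neonates and columns = scale items."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

rng = np.random.default_rng(1)
latent_pain = rng.normal(0, 1, (850, 1))              # one underlying pain level per neonate
items = latent_pain + rng.normal(0, 0.5, (850, 5))    # five correlated item scores
print(f"Cronbach's alpha: {cronbach_alpha(items):.2f}")
```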


2016 · Vol 156 (1) · pp. 61-69
Author(s): Rishabh Sethia, Thomas F. Kerwin, Gregory J. Wiet

Objective The aim of this report is to review the current literature on assessment of mastoidectomy performance, to identify the assessment tools currently available in the literature, and to summarize the evidence for their validity. Data Sources The MEDLINE database was accessed via PubMed. Review Methods Inclusion criteria consisted of English-language published articles that reported use of a mastoidectomy performance assessment tool. Studies ranged from 2007 to November 2015 and were divided into 2 groups: intraoperative assessments and those performed with simulation (cadaveric laboratory or virtual reality). Studies that contained specific reliability analyses were also highlighted. For each publication, validity evidence data were analyzed and interpreted according to conceptual definitions provided in a recent systematic review on the modern framework of validity evidence. Conclusions Twenty-three studies were identified that met our inclusion criteria for review, including 4 intraoperative objective assessment studies, 5 cadaveric studies, 10 virtual reality simulation studies, and 4 that used both cadaveric assessment and virtual reality. Implications for Practice A review of the literature revealed a wide variety of mastoidectomy assessment tools and varying levels of reliability and validity evidence. The assessment tool developed at Johns Hopkins possesses the most validity evidence of those reviewed. However, a number of agreed-on specific metrics could be integrated into a standardized assessment instrument to be used nationally. A universally agreed-on assessment tool will provide a means for developing standardized benchmarks for performing mastoid surgery.


2021
Author(s): Huiqi Song, Jing Jing Wang, Patrick WC Lau

Abstract Background: The assessment of preschoolers' motor skills is essential for understanding young children's motor development and for evaluating the effects of interventions promoting children's sport activities. The purpose of this study was to review the motor skill assessment tools used in Chinese preschool-aged children, compare them in the international context, and provide guidance for finding an appropriate motor skill assessment tool in China. Methods: A comprehensive literature search was carried out in the WANFANG, CNKI, VIP, ERIC, EMBASE, MEDLINE, Ovid PsycINFO, SPORTDiscus and BIOSIS Previews databases. Relevant articles published between January 2000 and May 2020 were retrieved. Studies that described the discriminative and evaluative measures of motor skills among the population aged 3-6 years in China were included. Results: A total of 17 studies were included in this review, describing 7 tools: 4 self-developed tools and 3 international tools used in China. The TGMD-2 appeared in a large proportion of studies; however, the international tools used in China were incomplete in terms of translation, verification of reliability and validity, item selection, and implementation. Regarding the self-constructed tools, the CDCC was the most utilized, but it was mainly applied in intellectual development assessment. In the comparison between Chinese self-constructed and international tools, the construction of the CDCC and the Gross Motor Development Assessment Scale followed relatively complete development steps. The test content, validity and reliability, implementation instructions, and generalizability of the self-constructed tools are still lacking. Conclusions: Both international and self-developed motor skill assessment tools have rarely been applied in China, and the available tools lack sufficient validation and appropriate adjustment. Cultural differences in motor development between Chinese and Western populations should be considered when constructing a localized Chinese motor skill assessment tool (MSAT).


2019 · Vol 11 (4) · pp. 422-429
Author(s): Jason A. Lord, Danny J. Zuege, Maria Palacios Mackay, Amanda Roze des Ordons, Jocelyn Lockyer

ABSTRACT Background Determining procedural competence requires psychometrically sound assessment tools. A variety of instruments are available to determine procedural performance for central venous catheter (CVC) insertion, but it is not clear which ones should be used in the context of competency-based medical education. Objective We compared several commonly used instruments to determine which should be preferentially used to assess competence in CVC insertion. Methods Junior residents completing their first intensive care unit rotation between July 31, 2006, and March 9, 2007, were video-recorded performing CVC insertion on task trainer mannequins. Between June 1, 2016, and September 30, 2016, 3 experienced raters judged procedural competence on the historical video recordings of resident performance using 4 separate tools, including an itemized checklist, Objective Structured Assessment of Technical Skills (OSATS), a critical error assessment tool, and the Ottawa Surgical Competency Operating Room Evaluation (O-SCORE). Generalizability theory (G-theory) was used to compare the performance characteristics among the tools. A decision study predicted the optimal testing environment using the tools. Results At the time of the original recording, 127 residents rotated through intensive care units at the University of Calgary, Alberta, Canada. Seventy-seven of them (61%) met inclusion criteria, and 55 of those residents (71%) agreed to participate. Results from the generalizability study (G-study) demonstrated that scores from O-SCORE and OSATS were the most dependable. Dependability could be maintained for O-SCORE and OSATS with 2 raters. Conclusions Our results suggest that global rating scales, such as the OSATS or the O-SCORE tools, should be preferentially utilized for assessment of competence in CVC insertion.
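
The following is a minimal Python sketch of the kind of generalizability and decision study described above, assuming a fully crossed residents-by-raters design; the scores, variance components, and rater counts are hypothetical, not the study's results.

```python
# Minimal G-theory sketch: estimate variance components from a fully crossed
# persons x raters score matrix, then run a small decision study over rater numbers.
import numpy as np

rng = np.random.default_rng(2)
n_p, n_r = 55, 3                                     # residents x raters (illustrative)
scores = (10
          + rng.normal(0, 2.0, (n_p, 1))             # person (true-score) effect
          + rng.normal(0, 0.5, (1, n_r))             # rater severity effect
          + rng.normal(0, 1.0, (n_p, n_r)))          # residual / interaction

grand = scores.mean()
ss_p = n_r * ((scores.mean(axis=1) - grand) ** 2).sum()
ss_r = n_p * ((scores.mean(axis=0) - grand) ** 2).sum()
ss_pr = ((scores - grand) ** 2).sum() - ss_p - ss_r

ms_p = ss_p / (n_p - 1)
ms_r = ss_r / (n_r - 1)
ms_pr = ss_pr / ((n_p - 1) * (n_r - 1))

var_pr = ms_pr                                       # residual variance component
var_p = (ms_p - ms_pr) / n_r                         # person variance component
var_r = (ms_r - ms_pr) / n_p                         # rater variance component

# Decision study: dependability (Phi) coefficient as a function of the number of raters
for n in (1, 2, 3, 4):
    phi = var_p / (var_p + (var_r + var_pr) / n)
    print(f"raters = {n}: Phi = {phi:.2f}")
```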


2014 · Vol 120 (1) · pp. 196-203
Author(s): Maya Jalbout Hastie, Jessica L. Spellman, Parwane P. Pagano, Jonathan Hastie, Brian J. Egan

Abstract Since its description in 1974, the Objective Structured Clinical Examination (OSCE) has gained popularity as an objective assessment tool of medical students, residents, and trainees. With the development of the anesthesiology residents’ milestones and the preparation for the Next Accreditation System, there is an increased interest in OSCE as an evaluation tool of the six core competencies and the corresponding milestones proposed by the Accreditation Council for Graduate Medical Education. In this article the authors review the history of OSCE and its current application in medical education and in different medical and surgical specialties. They also review the use of OSCE by anesthesiology programs and certification boards in the United States and internationally. In addition, they discuss the psychometrics of test design and implementation with emphasis on reliability and validity measures as they relate to OSCE.


2017 · Vol 158 (1) · pp. 54-61
Author(s): Érika Mercier, Ségolène Chagnon-Monarque, François Lavigne, Tareck Ayad

Objectives The primary goal is to index the validated methods used to assess surgical competency in otorhinolaryngology–head and neck surgery (ORL-HNS) residents. Secondary goals include assessment of the reliability and validity of these tools, as well as documentation of the specific ORL-HNS procedures involved. Data Sources MEDBASE, OVID, Medline, CINAHL, and EBM, as well as the printed references available through the Université de Montréal library. Review Methods The PRISMA method was used to review digital and printed databases. Publications were reviewed by 2 independent reviewers, and selected articles were fully analyzed to classify evaluation methods and categorize them by procedure and subspecialty of ORL-HNS involved. Reliability and validity were assessed and scored for each assessment tool. Results Through the review of 30 studies, 5 evaluation methods were described and validated to assess the surgical competency of ORL-HNS residents. The evaluation method most often described was the combined Global Rating Scale and Task-Specific Checklist tool. Reliability and validity for this tool were overall high; however, considerable data were unavailable. Eleven distinct surgical procedures were studied, encompassing many subspecialties of ORL-HNS: facial plastics, general ear-nose-throat, laryngology, otology, pediatrics, and rhinology. Conclusions Although assessment tools have been developed for an array of surgical procedures, involving most ORL-HNS subspecialties, the use of combined checklists has been repeatedly validated in the literature and shown to be easily applicable in practice. It has been applied to many ORL-HNS procedures but not to oncologic surgery to date.


2016 · Vol 49 (3) · pp. 255-282
Author(s): Wouter Poortinga, Tatiana Calve, Nikki Jones, Simon Lannon, Tabitha Rees, ...

Various studies have shown that neighborhood quality is linked to neighborhood attachment and satisfaction. However, most have relied upon residents’ own perceptions rather than independent observations of the neighborhood environment. This study examines the reliability and validity of the revised Residential Environment Assessment Tool (REAT 2.0), an audit instrument covering both public and private spaces of the neighborhood environment. The research shows that REAT 2.0 is a reliable, easy-to-use instrument and that most underlying constructs can be validated against residents’ own neighborhood perceptions. The convergent validity of the instrument, which was tested against digital map data, can be improved for a number of miscellaneous urban form items. The research further found that neighborhood attachment was significantly associated with the overall REAT 2.0 score. This association can mainly be attributed to the property-level neighborhood quality and natural elements components. The research demonstrates the importance of private spaces in the outlook of the neighborhood environment.


2021 · Vol 21 (1)
Author(s): Louise Inkeri Hennings, Jette Led Sørensen, Jane Hybscmann, Jeanett Strandbygaard

Abstract Background Standardised assessment is key to structured surgical training. Currently, there is no consensus on which surgical assessment tool to use in live gynaecologic surgery. The purpose of this review is to identify assessment tools measuring technical skills in gynaecologic surgery and to evaluate the measurement characteristics of each tool. Method We used scoping review methodology and searched PubMed, Medline, Embase and Cochrane. Inclusion criteria were studies that analysed assessment tools in live gynaecologic surgery. Kane's validity argument was applied to evaluate the assessment tools in the included studies. Results Eight of the 544 studies identified fulfilled the inclusion criteria. The assessment tools were categorised as global rating scales, combined global and procedure rating scales, procedure-specific rating scales, or a non-procedure-specific error assessment tool. Conclusion This scoping review presents the different tools currently available for observational assessment of technical skills in intraoperative gynaecologic surgery. It can serve as a guide for surgical educators who want to apply a scale or a specific tool in surgical assessment.


2019 · Vol 4 (2) · pp. 34-40
Author(s): Edsel O Coronado

This research was conducted to examine the tools, strategies, and problems encountered in assessing student learning by pre-service teachers in science during their on- and off-campus clinical experience. An explanatory sequential mixed-methods design was used in the study. Three instruments were used: the Assessment Checklist for Student Teachers in Science, Focus Group Discussion (FGD) Questions, and In-depth Interview Questions. Seventeen pre-service teachers from one teacher education institution participated. Findings of the study, obtained using the Kruskal-Wallis one-way analysis of variance and thematic analysis with the phenomenological reduction method, revealed the assessment tools used most and least frequently, the assessment strategies employed, and the problems encountered by pre-service teachers in science in assessing student learning. The findings also revealed a significant difference in the use of rubrics (p value = 0.045), the least frequently used assessment tool, when pre-service teachers in science were grouped according to specialization.
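
As a rough illustration of the statistical test named above, a minimal Python sketch of a Kruskal-Wallis one-way analysis of variance on hypothetical rubric-usage ratings; the specialization groups and ratings are assumptions, not the study's data.

```python
# Minimal sketch: Kruskal-Wallis test comparing rubric-usage ratings across
# three hypothetical specialization groups (1 = never used ... 5 = always used).
from scipy import stats

biology   = [2, 1, 2, 3, 1, 2]
chemistry = [3, 4, 3, 2, 4]
physics   = [1, 1, 2, 1, 2, 1]

h_stat, p_value = stats.kruskal(biology, chemistry, physics)
print(f"Kruskal-Wallis: H = {h_stat:.2f}, p = {p_value:.3f}")
```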

