Changes in scoring of Direct Observation of Procedural Skills (DOPS) forms and the impact on competence assessment

Endoscopy, 2018, Vol 50 (08), pp. 770-778. Author(s): Keith Siau, Paul Dunckley, Roland Valori, Mark Feeney, Neil Hawkes, et al.

Abstract

Background: Direct Observation of Procedural Skills (DOPS) is an established competence assessment tool in endoscopy. In July 2016, the DOPS scoring format changed from a performance-based scale to a supervision-based scale. We aimed to evaluate the impact of this change on the distribution of scores in novice trainees and on competence assessment.

Methods: We performed a prospective, multicenter (n = 276), observational study of formative DOPS assessments in endoscopy trainees with ≤ 100 lifetime procedures. DOPS were submitted in the 6 months before July 2016 (old scale) and the 6 months after (new scale) for gastroscopy (n = 2998), sigmoidoscopy (n = 1310), colonoscopy (n = 3280), and polypectomy (n = 631). Scores for old and new DOPS were aligned to a 4-point scale and compared.

Results: 8219 DOPS (43 % new and 57 % old) submitted for 1300 trainees were analyzed. Compared with the old DOPS, use of the new DOPS was associated with greater utilization of the lowest score (2.4 % vs. 0.9 %; P < 0.001), a broader range of scores, and a reduction in competent scores (60.8 % vs. 86.9 %; P < 0.001). The reduction in competent scores was evident on subgroup analysis across all procedure types (P < 0.001) and for each quartile of endoscopy experience. The new DOPS was superior in characterizing the endoscopy learning curve, demonstrating progression of competent scores across quartiles of procedural experience.

Conclusions: In two cohorts of trainees matched for experience, endoscopy assessors applied a greater range of scores using the new, supervision-based DOPS scale. Our study provides construct validity evidence in support of the new scale format.
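The headline result above is a comparison of two proportions of competent scores (60.8 % vs. 86.9 %). Below is a minimal sketch of how such a comparison can be tested, using a chi-square test on a 2×2 contingency table. The abstract does not state which test the authors used, and the counts here are back-calculated from the reported totals and percentages, so they are approximations for illustration only.

```python
# Sketch: chi-square test of competent vs. non-competent scores under the
# old and new DOPS scales. Counts are approximated from the abstract's
# reported totals and percentages, not taken from the study's raw data.
from scipy.stats import chi2_contingency

total = 8219
n_new = round(0.43 * total)   # new-scale DOPS (43 %)
n_old = total - n_new         # old-scale DOPS (57 %)

competent_new = round(0.608 * n_new)  # 60.8 % competent (new scale)
competent_old = round(0.869 * n_old)  # 86.9 % competent (old scale)

table = [
    [competent_new, n_new - competent_new],  # new scale: competent / not
    [competent_old, n_old - competent_old],  # old scale: competent / not
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2 = {chi2:.1f}, dof = {dof}, p = {p:.2e}")  # p well below 0.001
```

With counts of this size, the difference between the two proportions is highly significant, consistent with the reported P < 0.001.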

Endoscopy, 2018, Vol 50 (08), pp. C9-C9. Author(s): Keith Siau, Paul Dunckley, Roland Valori, Mark Feeney, Neil Hawkes, et al.

2021, Vol 4 (Supplement_1), pp. 71-73. Author(s): R Khan, E Zheng, S B Wani, M A Scaffidi, T Jeyalingam, et al.

Abstract

Background: An increasing focus on quality and safety in colonoscopy has led to broader implementation of competency-based educational systems that enable documentation of trainees' achievement of the knowledge, skills, and attitudes needed for independent practice. The meaningful assessment of competence in colonoscopy is critical to this process. While there are many published tools that assess competence in performing colonoscopy, the underlying validity evidence varies widely. Tools with strong evidence of validity are required to support feedback provision, optimize learner capabilities, and document competence.

Aims: We aimed to evaluate the strength of the validity evidence supporting available colonoscopy direct observation assessment tools, using the unified framework of validity.

Methods: We systematically searched five databases for studies investigating colonoscopy direct observation assessment tools from inception until April 8, 2020. We extracted data outlining validity evidence from the five sources (content, response process, internal structure, relations to other variables, and consequences) and graded the degree of evidence, with a maximum score of 15. We assessed educational utility using an Accreditation Council for Graduate Medical Education framework and methodological quality using the Medical Education Research Study Quality Instrument (MERSQI).

Results: From 10,841 records, we identified 27 studies representing 13 assessment tools (10 adult, 2 pediatric, 1 both). All tools assessed technical skills, while 10 also assessed cognitive and integrative skills. Validity evidence scores ranged from 1 to 15. The Assessment of Competency in Endoscopy (ACE) tool, the Direct Observation of Procedural Skills (DOPS) tool, and the Gastrointestinal Endoscopy Competency Assessment Tool (GiECAT) had the strongest validity evidence, with scores of 13, 15, and 14, respectively. Most tools were easy to use and interpret and required minimal resources. MERSQI scores ranged from 9.5 to 11.5 (maximum score 14.5).

Conclusions: The ACE, DOPS, and GiECAT have strong validity evidence compared with other assessments. Future studies should identify barriers to widespread implementation and report on the use of these tools for credentialing.

Funding Agencies: None
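The 15-point validity score above aggregates evidence across the five sources of the unified framework. The abstract does not state the per-source grading, but a 0-3 grade per source is one scheme consistent with the stated maximum of 15 (5 × 3); the sketch below tallies a score under that assumption, with illustrative grades that are not values from the review.

```python
# Sketch: tallying a validity-evidence score across the five sources of the
# unified framework. Assumes each source is graded 0-3 (so 5 sources give a
# maximum of 15); the example grades are hypothetical.
SOURCES = ("content", "response process", "internal structure",
           "relations to other variables", "consequences")

def validity_score(grades: dict) -> int:
    """Sum per-source grades (each 0-3) into a 0-15 validity score."""
    assert set(grades) == set(SOURCES), "grade every source exactly once"
    assert all(0 <= g <= 3 for g in grades.values()), "grades run 0-3"
    return sum(grades.values())

example_tool = dict(zip(SOURCES, (3, 2, 3, 3, 2)))  # hypothetical tool
print(validity_score(example_tool))  # 13
```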


2018, Vol 67 (6), pp. e111-e116. Author(s): Keith Siau, Rachel Levi, Lucy Howarth, Raphael Broughton, Mark Feeney, et al.

2019, Vol 115 (2), pp. 234-243. Author(s): Keith Siau, James Crossley, Paul Dunckley, Gavin Johnson, Mark Feeney, et al.

Anaesthesia, 2014, Vol 69 (6), pp. 604-612. Author(s): M. J. Watson, D. M. Wong, R. Kluger, A. Chuan, M. D. Herrick, et al.

2020, Vol 10 (9), p. 47. Author(s): Martha Mbewe, Natalia Mbewe, Catherine M. Ngoma

Direct observation of procedural skills is an evidence-based assessment tool used for assessing competence in the practical procedures that nursing students undertake during clinical placement. Knowing how satisfied students are with their assessments is important, as it helps both faculty and students achieve educational goals. However, the factors that may influence student satisfaction with this method of assessment were not known in the School of Nursing Sciences at the University of Zambia. The purpose of this study was to investigate the factors influencing student satisfaction with direct observation of procedural skills and to obtain students' views on this assessment tool. A cross-sectional quantitative survey was conducted with 92 conveniently sampled final-year undergraduate nursing students. Data were collected using a validated self-reported questionnaire and analysed using IBM SPSS Version 20. Fisher's exact tests were used to determine associations between student satisfaction and the independent variables; a p-value of ≤ .05 was considered statistically significant. The major findings revealed that a poor clinical environment (98.9%) and faculty non-availability (98%) negatively influenced student satisfaction. Clinical experience and feedback were also significantly associated with satisfaction (p ≤ .05). In conclusion, the major factors influencing student satisfaction were a clinical environment that was not conducive to assessment, as it lacked many essential medical-surgical supplies, and the non-availability of faculty in the clinical area to provide feedback, guidance, and supervision. Improving the clinical environment and ensuring faculty availability to provide timely, constructive feedback may help students achieve their learning objectives. Direct observation of procedural skills remains an appropriate tool for assessing students' clinical competence; however, further research into affordable, less stressful methods of clinical evaluation to complement this tool is needed in the school of nursing sciences.
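The study above tests each binary factor against satisfaction with Fisher's exact test. A minimal sketch of that analysis follows; the 2×2 counts are hypothetical placeholders, not the study's data.

```python
# Sketch: Fisher's exact test of association between a binary factor
# (e.g. whether feedback was received) and student satisfaction.
# The counts below are illustrative, not from the study.
from scipy.stats import fisher_exact

#                 satisfied  not satisfied
table = [[30, 15],   # feedback received
         [10, 37]]   # no feedback received
odds_ratio, p = fisher_exact(table)
print(f"odds ratio = {odds_ratio:.2f}, p = {p:.4f}")
if p <= 0.05:
    print("Association is significant at the study's p <= .05 threshold.")
```

Fisher's exact test is a natural choice here because, with 92 respondents split across categories, some cells of the 2×2 tables can be small enough to make the chi-square approximation unreliable.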


2020, Vol 3 (Supplement_1), pp. 149-150. Author(s): U Khan, A N Barkun, E I Benchimol, M Salim, J J Telford, et al.

Abstract

Background: Previous studies have demonstrated that many graduating trainees may not have all of the skills needed to practice endoscopic retrograde cholangiopancreatography (ERCP) independently, safely, and effectively. As part of competency-based curriculum development, it is essential to provide formative feedback to trainees and to ensure they acquire the knowledge and skills for independent practice.

Aims: To assess the performance of advanced endoscopy trainees across Canada using the Canadian Direct Observation of Procedural Skills (CanDOPS) ERCP assessment tool. Procedural items evaluated include both technical skills (cannulation, sphincterotomy, stone extraction, tissue sampling, and stent placement) and non-technical skills (leadership, communication and teamwork, judgment and decision making).

Methods: We conducted a prospective, national, multi-centre study. Advanced endoscopy trainees with at least two years of gastroenterology training or five years of general surgery training in North America and minimal ERCP experience (fewer than 100 procedures) were invited to participate. The CanDOPS tool was used to assess every fifth ERCP performed by trainees over a 12-month fellowship training period. ERCPs were evaluated by experienced staff endoscopists at each study site under standard clinical protocol. Cumulative sum (CUSUM) analyses were used to generate learning curves.

Results: Eleven trainees from five Canadian sites participated in the study, and a total of 261 ERCP evaluations were completed. The median number of evaluations was 49 (IQR 31–76) per site and 15 (IQR 11–45) per trainee. The median number of cases trainees had performed prior to their ERCP training was 50 (IQR 25–400). There was significant improvement in almost all scores over time, including selective cannulation, sphincterotomy, biliary stenting, and all non-technical skills (P < 0.01). CUSUM analyses using acceptable and unacceptable failure rates of 20 % and 50 % demonstrated that trainees achieved competency for most measures in the final month of training; competency in tissue sampling was not achieved within the one-year training period.

Conclusions: This is the first ERCP performance evaluation tool that examines multiple technical and non-technical aspects of the procedure. Although trainees' ERCP skills improve over the training period, there is notable variability in time to competency across the skills measured by the CanDOPS tool. Larger prospective studies are required to determine whether competency is achieved under more stringent definitions of ERCP competency and to identify factors associated with reaching competency.

Funding Agencies: None
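The CUSUM learning curves above use the standard sequential scheme: the running score falls by a fixed amount after each success and rises after each failure, with competence signaled when the curve crosses a lower decision boundary derived from the acceptable and unacceptable failure rates. A minimal sketch follows, using the abstract's 20 % / 50 % failure rates; the type I/II error rates and the outcome sequence are illustrative assumptions, not parameters reported in the study.

```python
# Sketch: CUSUM learning curve with acceptable failure rate p0 = 0.20 and
# unacceptable failure rate p1 = 0.50, per the abstract. alpha/beta and the
# outcome sequence below are hypothetical.
import math

p0, p1 = 0.20, 0.50          # acceptable / unacceptable failure rates
alpha = beta = 0.10          # assumed type I / type II error rates

P = math.log(p1 / p0)
Q = math.log((1 - p0) / (1 - p1))
s = Q / (P + Q)              # amount subtracted after each success
h = math.log((1 - beta) / alpha) / (P + Q)  # decision boundaries at +/- h

def cusum(outcomes):
    """Running CUSUM score for a sequence of outcomes (True = failure)."""
    score, curve = 0.0, []
    for failed in outcomes:
        score += (1 - s) if failed else -s
        curve.append(score)
    return curve

# Hypothetical trainee: early failures, then a run of successes.
outcomes = [True, True, False, True] + [False] * 10
for n, c in enumerate(cusum(outcomes), 1):
    flag = "  <- crosses lower boundary: competent" if c <= -h else ""
    print(f"case {n:2d}: CUSUM = {c:+.2f}{flag}")
```

Because each success subtracts only a fraction of what each failure adds, a trainee needs a sustained run of successes to cross the competence boundary, which is why time to competency varies across skills as the abstract reports.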

