A97 TOOLS FOR DIRECT OBSERVATION AND ASSESSMENT OF COLONOSCOPY: A SYSTEMATIC REVIEW OF VALIDITY EVIDENCE

2021 ◽  
Vol 4 (Supplement_1) ◽  
pp. 71-73
Author(s):  
R Khan ◽  
E Zheng ◽  
S B Wani ◽  
M A Scaffidi ◽  
T Jeyalingam ◽  
...  

Abstract Background An increasing focus on quality and safety in colonoscopy has led to broader implementation of competency-based educational systems that enable documentation of trainees’ achievement of the knowledge, skills, and attitudes needed for independent practice. The meaningful assessment of competence in colonoscopy is critical to this process. While there are many published tools that assess competence in performing colonoscopy, there is a wide range of underlying validity evidence. Tools with strong evidence of validity are required to support feedback provision, optimize learner capabilities, and document competence. Aims We aimed to evaluate the strength of validity evidence that supports available colonoscopy direct observation assessment tools using the unified framework of validity. Methods We systematically searched five databases for studies investigating colonoscopy direct observation assessment tools from inception until April 8, 2020. We extracted data outlining validity evidence from the five sources (content, response process, internal structure, relations to other variables, and consequences) and graded the degree of evidence, with a maximum score of 15. We assessed educational utility using an Accreditation Council for Graduate Medical Education framework and methodological quality using the Medical Education Research Quality Instrument (MERSQI). Results From 10,841 records, we identified 27 studies representing 13 assessment tools (10 adult, 2 pediatric, 1 both). All tools assessed technical skills, while 10 assessed cognitive and integrative skills. Validity evidence scores ranged from 1–15. The Assessment of Competency in Endoscopy (ACE) tool, the Direct Observation of Procedural Skills (DOPS) tool, and the Gastrointestinal Endoscopy Competency Assessment Tool (GiECAT) had the strongest validity evidence, with scores of 13, 15, and 14, respectively. Most tools were easy to use and interpret and required minimal resources. MERSQI scores ranged from 9.5–11.5 (maximum score 14.5). Conclusions The ACE, DOPS, and GiECAT have strong validity evidence compared to other assessments. Future studies should identify barriers to widespread implementation and report on the use of these tools for credentialing purposes. Funding Agencies None

Endoscopy ◽  
2021 ◽  
Author(s):  
Rishad Khan ◽  
Eric Zheng ◽  
Sachin Wani ◽  
Michael A Scaffidi ◽  
Thurarshen Jeyalingam ◽  
...  

Background: Assessment tools are essential for endoscopy training; they are required to support feedback provision, optimize learner capabilities, and document competence. We aimed to evaluate the strength of validity evidence that supports available colonoscopy direct observation assessment tools using the unified framework of validity. Methods: We systematically searched five databases for studies investigating colonoscopy direct observation assessment tools from inception until April 8, 2020. We extracted data outlining validity evidence from the five sources (content, response process, internal structure, relations to other variables, and consequences) and graded the degree of evidence, with a maximum score of 15. We assessed educational utility using an Accreditation Council for Graduate Medical Education framework and methodological quality using the Medical Education Research Quality Instrument (MERSQI). Results: From 10,841 records, we identified 27 studies representing 13 assessment tools (10 adult, 2 pediatric, 1 both). All tools assessed technical skills, while 10 assessed cognitive and integrative skills. Validity evidence scores ranged from 1–15. The Assessment of Competency in Endoscopy (ACE) tool, the Direct Observation of Procedural Skills (DOPS) tool, and the Gastrointestinal Endoscopy Competency Assessment Tool (GiECAT) had the strongest validity evidence, with scores of 13, 15, and 14, respectively. Most tools were easy to use and interpret and required minimal resources. MERSQI scores ranged from 9.5–11.5 (maximum score 14.5). Conclusions: The ACE, DOPS, and GiECAT have strong validity evidence compared to other assessments. Future studies should identify barriers to widespread implementation and report on the use of these tools in credentialing examinations.
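
Both versions of this abstract describe grading validity evidence from five sources to a maximum total of 15. As a rough, hedged illustration of how such a tally might be implemented, the Python sketch below assumes each source is graded 0–3 (an assumption; the abstracts report only the total maximum), and the example grades are invented rather than taken from the study.

```python
# Hypothetical sketch of tallying validity evidence per tool, assuming each
# of the five sources is graded 0-3 (an assumption; only the maximum total
# of 15 is stated in the abstract).

SOURCES = ("content", "response_process", "internal_structure",
           "relations_to_other_variables", "consequences")

def total_validity_score(grades: dict) -> int:
    """Sum per-source grades, checking each is within the assumed 0-3 range."""
    total = 0
    for source in SOURCES:
        grade = grades.get(source, 0)
        if not 0 <= grade <= 3:
            raise ValueError(f"grade for {source} out of range: {grade}")
        total += grade
    return total

# Illustrative (invented) grades -- not the study's actual data.
example = {"content": 3, "response_process": 2, "internal_structure": 3,
           "relations_to_other_variables": 3, "consequences": 2}
print(total_validity_score(example))  # 13
```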


2017 ◽  
Vol 9 (4) ◽  
pp. 473-478 ◽  
Author(s):  
Glenn Rosenbluth ◽  
Natalie J. Burman ◽  
Sumant R. Ranji ◽  
Christy K. Boscardin

ABSTRACT Background Improving the quality of health care and education has become a mandate at all levels within the medical profession. While several published quality improvement (QI) assessment tools exist, all have limitations in addressing the range of QI projects undertaken by learners in undergraduate medical education, graduate medical education, and continuing medical education. Objective We developed and validated a tool to assess QI projects with learner engagement across the educational continuum. Methods After reviewing existing tools, we interviewed local faculty who taught QI to understand how learners were engaged and what these faculty wanted in an ideal assessment tool. We then developed a list of competencies associated with QI, established items linked to these competencies, revised the items using an iterative process, and collected validity evidence for the tool. Results The resulting Multi-Domain Assessment of Quality Improvement Projects (MAQIP) rating tool contains 9 items, with criteria that may be completely fulfilled, partially fulfilled, or not fulfilled. Interrater reliability was 0.77. Untrained local faculty were able to use the tool with minimal guidance. Conclusions The MAQIP is a 9-item, user-friendly tool that can be used to assess QI projects at various stages and to provide formative and summative feedback to learners at all levels.
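
The abstract reports an interrater reliability of 0.77 without naming the statistic used. As one hedged illustration, the sketch below computes an unweighted Cohen's kappa for two raters scoring the nine MAQIP items on the reported three-level criteria; the ratings are invented for demonstration and the study may have used a different reliability measure.

```python
# A minimal sketch of one way interrater reliability could be estimated for a
# 9-item tool scored "not", "partial", or "complete" by two raters. Cohen's
# kappa is shown purely as an illustration; the abstract does not specify
# which statistic was used.

from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Unweighted Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / n**2
    return (observed - expected) / (1 - expected)

# Invented example ratings for the 9 MAQIP items (not study data).
a = ["complete", "partial", "complete", "not", "partial", "complete", "complete", "partial", "not"]
b = ["complete", "partial", "complete", "partial", "partial", "complete", "complete", "complete", "not"]
print(round(cohens_kappa(a, b), 2))
```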


2021 ◽  
Vol 7 ◽  
Author(s):  
Jamal Al-Qawasmi ◽  
Muhammad Saeed ◽  
Omar S. Asfour ◽  
Adel S. Aldosary

Urban quality of life (QOL) is a complex and multidimensional concept. A wide range of urban QOL assessment tools has been developed worldwide to measure and monitor the quality of urban life, taking into account the particular conditions of cities and regions and the needs of their residents. This study aims to develop an urban QOL assessment tool appropriate for the context of Saudi Arabia (SA). For this purpose, the study developed and used a structured approach consisting of an in-depth analysis of 21 urban QOL assessment tools in use worldwide, combined with focus group analysis and feedback from a panel of experts. The results revealed a lack of consensus among the existing tools regarding the usage of QOL indicators and domains, and that the majority of the tools lack proper coverage of QOL subdomains. The results also show wide variation in the number of indicators used and that most of the examined tools use objective, measurable indicators. This study identified 67 indicators distributed across 13 domains that constitute the core criteria of the proposed QOL assessment tool. The selected indicators and domains cover all the attributes of urban QOL and were evaluated by experts as important criteria for assessing and measuring QOL. Moreover, the results demonstrate the advantage of the developed framework and comprehensive list of criteria (CLC) as a structured and efficient approach to designing better QOL assessment tools.


2020 ◽  
Vol 12 (4) ◽  
pp. 447-454
Author(s):  
Cristina E. Welch ◽  
Melissa M. Carbajal ◽  
Shelley Kumar ◽  
Satid Thammasitboon

ABSTRACT Background Recent studies showed that psychological safety is important to resident perception of the work environment, and improved psychological safety improves resident satisfaction survey scores. However, there is no evidence in the medical education literature specifically addressing relationships between psychological safety and learning behaviors or its impact on learning outcomes. Objective We developed and gathered validity evidence for a group learning environment assessment tool using Edmondson's Teaming Theory and Webb's Depth of Knowledge model as a theoretical framework. Methods In 2018, investigators developed the preliminary tool. The authors administered the resulting survey to neonatology faculty and trainees at Baylor College of Medicine morning report sessions and collected validity evidence (content, response process, and internal structure) to describe the instrument's psychometric properties. Results Between December 2018 and July 2019, 450 surveys were administered, and 393 completed surveys were collected (87% response rate). Exploratory factor analysis and confirmatory factor analysis testing the 3-factor measurement model of the 15-item tool showed acceptable fit of the hypothesized model, with standardized root mean square residual (SRMR) = 0.034, root mean square error of approximation (RMSEA) = 0.088, and comparative fit index (CFI) = 0.987. Standardized path coefficients ranged from 0.66 to 0.97. Almost all absolute standardized residual correlations were less than 0.10. Cronbach's alpha scores showed internal consistency of the constructs. There was a high correlation among the constructs. Conclusions Validity evidence suggests the developed group learning assessment tool is a reliable instrument to assess psychological safety, learning behaviors, and learning outcomes during group learning sessions such as morning report.
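
For readers unfamiliar with the internal consistency measure cited above, the following sketch shows the standard Cronbach's alpha calculation on a small, invented respondents-by-items matrix of Likert responses; it is illustrative only and does not reproduce the study's data or software.

```python
# A hedged sketch of computing internal consistency (Cronbach's alpha) for
# one construct of a survey like the 15-item tool described above. The item
# responses below are invented for illustration only.

import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of numeric (e.g., Likert) responses."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of the summed scale
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Five hypothetical respondents answering five Likert items (1-5).
responses = np.array([
    [4, 5, 4, 4, 5],
    [3, 3, 4, 3, 3],
    [5, 5, 5, 4, 5],
    [2, 3, 2, 3, 2],
    [4, 4, 4, 5, 4],
])
print(round(cronbach_alpha(responses), 2))
```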


2017 ◽  
Vol 4 (1) ◽  
pp. 8-15
Author(s):  
Fahmida Zabin ◽  
Soofia Khatoon ◽  
Md Humayun Kabir Talukder

Assessment plays a major role in the process of medical education, and the clinical examination plays a key role in the assessment of students' competence to practice medicine. The conventional method of assessment does not include the assessment of clinical psychomotor skills, which students are learning throughout their clinical postings; the judgement of students' performances is purely subjective, and the same performance may be graded differently by different examiners. A major advance in this area has been the formulation of the objective structured clinical/practical examination (OSCE/OSPE), which has been implemented successfully. The objectives of this study were to evaluate the OSPE in the FCPS Part II examination in Obstetrics and Gynaecology conducted by the Bangladesh College of Physicians and Surgeons (BCPS), in terms of the quality of the assessment tools as well as the opinions of students and examiners regarding the OSPE. A cross-sectional observational study was conducted among 150 students appearing in the FCPS Part II examination in Obstetrics and Gynaecology of the BCPS; 30 examiners were also included in the study. The study instrument consisted of two sets of self-administered questionnaires, one for the students and one for the examiners. Attitudes of students and examiners were collected using a Likert-type grading scale. Stations were analyzed against criteria of content coverage, skills assessed, clarity of language, dominant domain assessed, and time allocated for the task. Analysis of the results of three sessions showed that the pass percentage varied from ten to fourteen percent. The overall opinion regarding the OSPE showed that 54 percent of the students strongly agreed that this assessment system is comprehensive, and about thirty-seven percent and forty percent of students agreed that it can assess a wide range of knowledge and clinical competencies, respectively. However, students felt that it was a strongly anxiety-producing experience, and concerns were expressed regarding the ambiguity of some questions and the inadequacy of time for the expected tasks at some stations. The majority of the students felt they were well oriented about the examination and that the required tasks were consistent with the curriculum they had been taught. Ninety percent of examiners strongly agreed that it is a comprehensive system of assessment, and the majority strongly agreed that it is a useful method of identifying gaps and weaknesses in competencies. About seventy percent agreed that the checklists were well prepared, but forty percent felt that there was some ambiguity in the language and instructions. Evaluation of OSPE quality showed that coverage of content was adequate and that the psychomotor domain was the domain most often assessed. In conclusion, both students and examiners agreed that the assessment was comprehensive and that it was an objective and fair process. Overall opinions suggested that the OSPE was restrictive, non-discriminative, and simplistic. The very nature of the OSPE made it different from the traditional method; it was comprehensive and valid, covering a wide range of content including practical skills. Bangladesh Journal of Medical Education Vol.4(1) 2013: 8-15


2020 ◽  
Vol 17 ◽  
Author(s):  
Anthony Clement Smith ◽  
Ann Framp ◽  
Patrea Andersen

Introduction With the recent introduction of registration for paramedics, and an absence of assessment tools that align undergraduate paramedic student practice to competency standards, this pilot study undertook to develop and evaluate a competency assessment tool designed to provide a standardised approach to student competency assessment. This paper reports the first part of a two-part enquiry evaluating the efficacy of the Australasian Paramedic Competency Assessment Tool (APCAT) in assessing the practice competency of undergraduate paramedic students. Methods With a focus on gathering professional opinion to evaluate the usability of the tool and inform its development, a mixed-methods approach including a survey and open-ended questions was used to gather data from paramedic educators and on-road assessors in Australia and New Zealand. Data were analysed using descriptive statistics and content analysis. Results The outcome of the evaluation was positive: 81% agreed or strongly agreed that the tool was user-friendly; 71% believed that expectations of student performance and the grading system were clear; 70% found that the year-level descriptors reflected practice expectations; and 66% believed that the resource manual provided adequate guidance. Conclusion The APCAT is simple and aligns student practice expectations with competency standards. Results indicate support for a consistent approach to the assessment of undergraduate paramedic student competence. Further research will be undertaken to determine the efficacy of using this tool to assess students in the clinical setting.


2017 ◽  
Vol 8 (1) ◽  
pp. e106-122 ◽  
Author(s):  
Isabelle N Colmers-Gray ◽  
Kieran Walsh ◽  
Teresa M Chan

Background: Competency-based medical education is becoming the new standard for residency programs, including Emergency Medicine (EM). To inform programmatic restructuring, guide resources, and identify gaps in publication, we reviewed the published literature on the types and frequency of resident assessment. Methods: We searched MEDLINE, EMBASE, PsycInfo and ERIC from January 2005 to June 2014. MeSH terms included “assessment,” “residency,” and “emergency medicine.” We included studies on EM residents reporting either of two primary outcomes: 1) assessment type and 2) assessment frequency per resident. Two reviewers screened abstracts, reviewed full-text studies, and abstracted data. Reporting of assessment-related costs was a secondary outcome. Results: The search returned 879 articles; 137 articles were full-text reviewed; 73 met inclusion criteria. Half of the studies (54.8%) were pilot projects and one-quarter (26.0%) described fully implemented assessment tools/programs. Assessment tools (n=111) comprised 12 categories, most commonly: simulation-based assessments (28.8%), written exams (28.8%), and direct observation (26.0%). Median assessment frequency (n=39 studies) was twice per month/rotation (range: daily to once in residency). No studies thoroughly reported costs. Conclusion: EM resident assessment commonly uses simulation or direct observation, done once per rotation. Implemented assessment systems and assessment-associated costs are poorly reported. Moving forward, routine publication will facilitate the transition to competency-based medical education.


Endoscopy ◽  
2018 ◽  
Vol 50 (08) ◽  
pp. 770-778 ◽  
Author(s):  
Keith Siau ◽  
Paul Dunckley ◽  
Roland Valori ◽  
Mark Feeney ◽  
Neil Hawkes ◽  
...  

Abstract Background Direct Observation of Procedural Skills (DOPS) is an established competence assessment tool in endoscopy. In July 2016, the DOPS scoring format changed from a performance-based scale to a supervision-based scale. We aimed to evaluate the impact of changes to the DOPS scale format on the distribution of scores in novice trainees and on competence assessment. Methods We performed a prospective, multicenter (n = 276), observational study of formative DOPS assessments in endoscopy trainees with ≤ 100 lifetime procedures. DOPS were submitted in the 6 months before July 2016 (old scale) and after (new scale) for gastroscopy (n = 2998), sigmoidoscopy (n = 1310), colonoscopy (n = 3280), and polypectomy (n = 631). Scores for old and new DOPS were aligned to a 4-point scale and compared. Results 8219 DOPS (43 % new and 57 % old) submitted for 1300 trainees were analyzed. Compared with the old DOPS, use of the new DOPS was associated with greater utilization of the lowest score (2.4 % vs. 0.9 %; P < 0.001), a broader range of scores, and a reduction in competent scores (60.8 % vs. 86.9 %; P < 0.001). The reduction in competent scores was evident on subgroup analysis across all procedure types (P < 0.001) and for each quartile of endoscopy experience. The new DOPS was superior in characterizing the endoscopy learning curve by demonstrating progression of competent scores across quartiles of procedural experience. Conclusions Endoscopy assessors applied a greater range of scores using the new DOPS scale, based on degree of supervision, in two cohorts of trainees matched for experience. Our study provides construct validity evidence in support of the new scale format.
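
As a hedged illustration of how the reported difference in competent scores between scale formats (60.8 % vs. 86.9 %, P < 0.001) could be tested, the sketch below reconstructs approximate counts from the published percentages and totals and applies a chi-squared test of independence; the study's actual statistical method is not stated in the abstract, so this choice is an assumption.

```python
# Hedged sketch: chi-squared test on counts reconstructed (approximately)
# from the reported totals and percentages -- not the study's raw data or
# necessarily its analysis method.

from scipy.stats import chi2_contingency

total = 8219
n_new = round(total * 0.43)           # ~43 % of DOPS used the new scale
n_old = total - n_new
competent_new = round(n_new * 0.608)  # 60.8 % competent scores (new scale)
competent_old = round(n_old * 0.869)  # 86.9 % competent scores (old scale)

table = [
    [competent_new, n_new - competent_new],
    [competent_old, n_old - competent_old],
]
chi2, p, dof, _ = chi2_contingency(table)
print(f"chi2={chi2:.1f}, p={p:.2e}")
```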


2012 ◽  
Vol 36 (1) ◽  
pp. 75 ◽  
Author(s):  
Meredith J. McIntyre ◽  
Alison M. Patrick ◽  
Linda K. Jones ◽  
Michelle Newton ◽  
Helen McLachlan ◽  
...  

To address workforce shortages, the Australian Government funded additional nursing and midwifery places in 2009 pre-registration courses. An existing deficit in midwifery clinical placements, combined with the need to secure additional clinical placements, contributed to a serious shortfall. In response, a unique collaboration between Midwifery Academics of Victoria (MIDAC), rural and metropolitan maternity managers (RMM and MMM) groups and the Department of Health (DOH) Victoria was generated in order to overcome the difficulties experienced by maternity services in meeting the increased need. This group identified as problematic the large number of different clinical assessment tools required to be completed by midwives supervising students. It was agreed that the development of a Common Assessment Tool (CAT) for use in clinical assessment across all pre-registration midwifery courses in Victoria had the potential to reduce the workload associated with student assessments and, in doing so, release additional placements within each service. The CAT was developed in 2009 and implemented in 2010. The unique collaboration involved in the development of the CAT is a blueprint for future projects. The collaboration on this project provided a range of benefits and challenges, as well as unique opportunities for further collaborations involving industry, government, regulators and the tertiary sector. What is known about this topic? In response to current and predicted workforce shortages, the Australian Government funded additional midwifery places in pre-registration midwifery courses in 2009, creating the need for additional midwifery student clinical placements. Victorian midwifery service providers experienced difficulty in supplying the additional placements requested, due to complex influences constraining clinical placement opportunities; one of these was the array of assessment tools being used by students on clinical placements. What does this paper add? A collaborative partnership between MIDAC, the RMM and MMM groups, and the DOH identified a range of problems affecting the ability of midwifery services to increase clinical placements. The workload burden attached to the wide range of clinical assessment tools required to be completed by the supervising midwife for each placement was identified as the most urgent problem requiring resolution. The collaborative partnership approach facilitated the development of a CAT capable of meeting the needs of all key stakeholders. What are the implications for managers and policy makers? Using a collaborative partnership workshop approach, a clear project focus was developed in which all participants understood the required outcome. This collaboration occurred at multiple levels, with support from the DOH, and was key to the success of the project. The approach strengthens problem solving in situations complicated by competing influences, a common occurrence in health service delivery, and where unilateral approaches have not succeeded or are unlikely to succeed.


BMJ Open ◽  
2020 ◽  
Vol 10 (2) ◽  
pp. e034468 ◽  
Author(s):  
Nicholas Holt ◽  
Kirsty Crowe ◽  
Daniel Lynagh ◽  
Zoe Hutcheson

Background Poor communication between healthcare professionals is recognised as accounting for a significant proportion of adverse patient outcomes. In the UK, the General Medical Council emphasises effective handover (handoff) as an essential outcome for medical graduates. Despite this, a significant proportion of medical schools do not teach the skill. Objectives This study had two aims: (1) demonstrate a need for formal handover training through assessing the pre-existing knowledge, skills and attitudes of medical students and (2) study the effectiveness of a pilot educational handover workshop on improving confidence and competence in structured handover skills. Design Students underwent an Objective Structured Clinical Examination style handover competency assessment before and after attending a handover workshop underpinned by educational theory. Participants also completed questionnaires before and after the workshop. The tool used to measure competency was developed through a modified Delphi process. Setting Medical education departments within National Health Service (NHS) Lanarkshire hospitals. Participants Forty-two undergraduate medical students rotating through their medical and surgical placements within NHS Lanarkshire enrolled in the study. Forty-one students completed all aspects. Main outcome measures Paired questionnaires, preworkshop and postworkshop, ascertained prior teaching and confidence in handover skills. The questionnaires also elicited the students’ views on the importance of handover and the potential effects on patient safety. The assessment tool measured competency over 12 domains. Results Eighty-three per cent of participants reported no previous handover teaching. There was a significant improvement (p<0.0001) in confidence in delivering handovers after attending the workshop. Student performance in the handover competency assessment showed a significant improvement (p<0.05) in 10 out of the 12 measured handover competency domains. Conclusions A simple, robust and reproducible intervention, underpinned by medical education theory, can significantly improve competence and confidence in medical handover. Further research is required to assess long-term outcomes as students transition from undergraduate to postgraduate training.
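
The abstract reports significant pre/post improvements without naming the paired test used. The sketch below shows one plausible analysis, a Wilcoxon signed-rank test on per-student scores for a single competency domain; the scores are invented for illustration and the study may have analysed its data differently.

```python
# Hedged sketch of a paired pre/post comparison like the one described above.
# The scores are invented; a Wilcoxon signed-rank test is shown as one option,
# not the study's stated method.

from scipy.stats import wilcoxon

# Hypothetical competency scores for one domain, one value per student.
pre_workshop  = [2, 3, 2, 1, 3, 2, 2, 3, 1, 2, 2, 3]
post_workshop = [4, 4, 3, 3, 4, 3, 4, 4, 2, 3, 4, 4]

statistic, p_value = wilcoxon(pre_workshop, post_workshop)
print(f"Wilcoxon statistic={statistic}, p={p_value:.4f}")
```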

