Assessing professional competence in optometry – a review of the development and validity of the written component of the competency in optometry examination (COE)

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
S. Backhouse ◽  
N. G. Chiavaroli ◽  
K. L. Schmid ◽  
T. McKenzie ◽  
A. L. Cochrane ◽  
...  

Abstract

Background: Credentialing assessment for overseas-educated optometrists seeking registration in Australia and New Zealand is administered by the Optometry Council of Australia and New Zealand. The aim was to review the validation and outcomes of the written components of this examination to demonstrate that credentialing meets entry-level competency standards.

Methods: The Competency in Optometry Examination consists of two written and two clinical parts. Part 1 of the written exam comprises multiple choice questions (MCQ) covering basic and clinical science, while Part 2 has 18 short answer questions (SAQ) examining diagnosis and management. Candidates must pass both written components to progress to the clinical exam. Validity was evaluated using Kane's framework for scoring (marking criteria, item analysis), generalization (blueprint), extrapolation (standard setting), and implications (outcomes, including pass rates). A competency-based blueprint, the Optometry Australia Entry-level Competency Standards for Optometry 2014, guided question selection, with the number of items weighted towards key competencies. A standard setting exercise, last conducted in 2017, was used to determine the minimum standard for both written exams. Item response theory (Rasch) was used to analyse exams, produce reliability metrics, apply consistent standards to the results, calibrate difficulty across exams, and score candidates.

Results: Data are reported on 12 administrations of the written examination since 2014. Of the 193 candidates who sat the exam over the study period, 133 (68.9%) passed and moved on to the practical component. Ninety-one (47.2%) passed both the MCQ and SAQ exams on their first attempt. The MCQ exam has displayed consistently high reliability (reliability index range 0.71 to 0.93, average 0.88) across all 12 administrations. Prior to September 2017 the SAQ had a set cutscore of 50%, and the difficulty of the exam was variable. Since the introduction of Rasch analysis to calibrate difficulty across exams, the reliability and power of the SAQ exam have been consistently high (separation index range 0.82 to 0.93, average 0.86).

Conclusions: The collective evidence supports the validity of the written components (MCQ and SAQ) used in credentialing the competency of overseas-educated optometrists in Australia and New Zealand.
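The abstract names Rasch (one-parameter IRT) analysis as the mechanism for calibrating difficulty across exams and scoring candidates. As a minimal sketch of what that model does, the illustrative Python below estimates candidate ability and item difficulty from a 0/1 response matrix; the joint maximum likelihood routine and the simulated data are assumptions made for illustration, not the council's actual procedure.

```python
import numpy as np

def rasch_prob(theta, b):
    """P(correct) for ability theta and item difficulty b under the Rasch model."""
    return 1.0 / (1.0 + np.exp(-(theta - b)))

def fit_rasch_jmle(responses, n_iter=500, lr=0.01):
    """Tiny joint maximum likelihood estimator for a 0/1 response matrix
    (rows = candidates, columns = items). Illustrative only."""
    n_persons, n_items = responses.shape
    theta = np.zeros(n_persons)   # candidate abilities (logits)
    b = np.zeros(n_items)         # item difficulties (logits)
    for _ in range(n_iter):
        p = rasch_prob(theta[:, None], b[None, :])
        resid = responses - p                 # observed minus expected
        theta += lr * resid.sum(axis=1)       # gradient ascent on the log-likelihood
        b -= lr * resid.sum(axis=0)
        b -= b.mean()                         # anchor the scale (mean difficulty = 0)
    return theta, b

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    true_theta = rng.normal(0, 1, size=60)    # hypothetical cohort of 60 candidates
    true_b = rng.normal(0, 1, size=40)        # hypothetical 40-item paper
    responses = (rng.random((60, 40)) <
                 rasch_prob(true_theta[:, None], true_b[None, :])).astype(float)
    theta_hat, b_hat = fit_rasch_jmle(responses)
    print("estimated difficulty of item 0:", round(b_hat[0], 2))
```

In practice a credentialing body would use calibrated item banks and validated IRT software; the point of the sketch is only the shape of the model that links ability and difficulty to the probability of a correct response.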

2021 ◽  
Vol 21 (1) ◽  
Author(s):  
David Hope ◽  
David Kluth ◽  
Matthew Homer ◽  
Avril Dewar ◽  
Richard Fuller ◽  
...  

Abstract

Background: Because assessment systems differ across UK medical schools, making meaningful cross-school comparisons of undergraduate students' performance in knowledge tests is difficult. Ahead of the introduction of a national licensing assessment in the UK, we evaluate schools' performance on a shared pool of "common content" knowledge test items to compare candidates at different schools and evaluate whether they would pass under different standard setting regimes. Such information can help develop a cross-school consensus on standard setting shared content.

Methods: We undertook a cross-sectional study in the academic sessions 2016-17 and 2017-18. Sixty "best of five" multiple choice common-content items were delivered each year, with five used in both years. In 2016-17, 30 (of 31 eligible) medical schools undertook a mean of 52.6 items with 7,177 participants. In 2017-18 the same 30 medical schools undertook a mean of 52.8 items with 7,165 participants, giving a full sample of 14,342 medical students sitting common content prior to graduation. Using mean scores, we compared performance across items, carried out a like-for-like comparison of schools that used the same set of items, and then modelled the impact of different passing standards on these schools.

Results: Schools varied substantially in candidate total score, with large effects (Cohen's d around 1). A passing standard under which 5% of candidates at high-scoring schools would fail left low-scoring schools with fail rates of up to 40%, whereas a passing standard under which 5% of candidates at low-scoring schools would fail would see virtually no candidates from high-scoring schools fail.

Conclusions: Candidates at different schools exhibited significant differences in scores in two separate sittings. Performance varied enough that standards producing realistic fail rates at one medical school may produce substantially different pass rates at other medical schools, despite identical content and candidates being governed by the same regulator. Regardless of which hypothetical standards are "correct" as judged by experts, large institutional differences in pass rates must be explored and understood by medical educators before shared standards are applied. The study results can assist cross-school groups in developing a consensus on standard setting for future licensing assessments.
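As a rough numerical illustration of the abstract's central finding, the sketch below applies a single cutscore to two hypothetical score distributions about one standard deviation apart (Cohen's d ≈ 1); all numbers are invented for the example, not taken from the study.

```python
import math

def fail_rate(cutscore, mean, sd):
    """Proportion of a normally distributed cohort scoring below the cutscore."""
    z = (cutscore - mean) / sd
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Hypothetical score distributions (percent correct) roughly one SD apart.
high_school = {"mean": 70.0, "sd": 8.0}
low_school = {"mean": 62.0, "sd": 8.0}

# Set the cutscore so that about 5% of candidates at the high-scoring school fail ...
cut = high_school["mean"] - 1.645 * high_school["sd"]
print(f"cutscore = {cut:.1f}")
print(f"fail rate, high-scoring school: {fail_rate(cut, **high_school):.1%}")
# ... and see what the same standard implies at the low-scoring school.
print(f"fail rate, low-scoring school:  {fail_rate(cut, **low_school):.1%}")
```

With these invented parameters the same cutscore fails roughly 5% of one cohort and around a quarter of the other, which is the pattern the study reports at larger scale.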


2018 ◽  
Vol 12 (2) ◽  
pp. 88 ◽  
Author(s):  
Flavio Tangianu ◽  
Antonino Mazzone ◽  
Franco Berti ◽  
Giuliano Pinna ◽  
Irene Bortolotti ◽  
...  

There are many feasible tools for the assessment of clinical practice, but there is wide consensus that the simultaneous use of several different methods is strategic for a comprehensive overall judgement of clinical competence. Multiple-choice questions (MCQs) are a well-established, reliable method of assessing knowledge. Constructing effective MCQ tests and items requires scrupulous care in the design, review and validation stages, and creating high-quality multiple-choice questions demands deep experience, knowledge and a large amount of time. Here, after reviewing their construction, strengths and limitations, we discuss their completeness as a means of assessing professional competence.


2020 ◽  
Author(s):  
J Wailling ◽  
Brian Robinson ◽  
M Coombs

Aim: This study explored how doctors, nurses and managers working in a New Zealand tertiary hospital understand patient safety.

Background: Although health care systems have implemented proven safety strategies from high-reliability organisations, such as aviation and nuclear power, these have not been uniformly adopted by health care professionals, and concerns have been raised about clinician engagement.

Design: Instrumental, embedded case study design using qualitative methods.

Methods: The study used purposeful sampling, and data were collected using focus groups and semi-structured interviews with doctors (n = 31), registered nurses (n = 19) and senior organisational managers (n = 3) in a New Zealand tertiary hospital.

Results: Safety was described as a core organisational value. Clinicians appreciated proactive safety approaches characterized by anticipation and vigilance, in which they expertly recognized and adapted to safety risks. Managers trusted evidence-based safety rules and approaches that recorded, categorized and measured safety.

Conclusion and Implications for Nursing Management: It is important that nurse managers hold a more refined understanding of safety. Organisations are more likely to support safe patient care if cultural complexity is accounted for. Recognizing how different occupational groups perceive and respond to safety, rather than attempting to reinforce a uniform set of safety actions and responsibilities, is more likely to build a shared understanding of safety, foster trust and nurture safety culture.


Author(s):  
Netravathi B. Angadi ◽  
Amitha Nagabhushana ◽  
Nayana K. Hashilkar

Background: Multiple choice questions (MCQs) are a common method of assessment of medical students. The quality of MCQs is determined by three parameters: difficulty index (DIF I), discrimination index (DI), and distractor efficiency (DE). Item analysis is a valuable yet relatively simple procedure, performed after the examination, that provides information regarding the reliability and validity of a test item. The objective of this study was to perform an item analysis of MCQs to test their validity parameters.

Methods: 50 items consisting of 150 distractors were selected from the formative exams. A correct response to an item was awarded one mark, with no negative marking for an incorrect response. Each item was analysed for the three parameters DIF I, DI, and DE.

Results: A total of 50 items consisting of 150 distractors were analysed. The DIF I of 31 (62%) items was in the acceptable range (DIF I = 30-70%), and 30 items had 'good to excellent' discrimination (DI > 0.25). 10 (20%) items were too easy and 9 (18%) items were too difficult (DIF I < 30%). There were 4 items with 6 non-functional distractors (NFDs), while the remaining 46 items did not have any NFDs.

Conclusions: Item analysis is a valuable tool as it helps us retain the valuable MCQs and discard or modify the items which are not useful. It also helps increase our skills in test construction and identifies the specific areas of course content which need greater emphasis or clarity.
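The three parameters named above can be computed directly from dichotomous response data. The following is a minimal sketch under common item-analysis conventions (top/bottom 27% groups for DI, a 5% selection threshold for a functional distractor); the thresholds and toy data are assumptions, not values taken from the study.

```python
import numpy as np

def item_analysis(scores, chosen_options, key, n_options=4, group_frac=0.27):
    """Classical item analysis for one MCQ item.

    scores         : (n,) total test scores, used to rank candidates
    chosen_options : (n,) option index each candidate selected for this item
    key            : index of the correct option
    Returns difficulty index (DIF I, % correct), discrimination index (DI),
    and distractor efficiency (DE, % of distractors that are functional).
    """
    correct = (chosen_options == key).astype(float)

    # Difficulty index: percentage of candidates answering the item correctly.
    dif_i = 100.0 * correct.mean()

    # Discrimination index: upper-group minus lower-group proportion correct,
    # using the top and bottom 27% of candidates ranked by total score.
    n = len(scores)
    k = max(1, int(round(group_frac * n)))
    order = np.argsort(scores)
    lower, upper = order[:k], order[-k:]
    di = correct[upper].mean() - correct[lower].mean()

    # Distractor efficiency: a distractor is "functional" if chosen by >= 5%
    # of candidates; DE is the share of distractors that are functional.
    distractors = [o for o in range(n_options) if o != key]
    functional = sum((chosen_options == o).mean() >= 0.05 for o in distractors)
    de = 100.0 * functional / len(distractors)

    return dif_i, di, de

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    total_scores = rng.integers(10, 50, size=120)   # hypothetical cohort of 120
    options = rng.integers(0, 4, size=120)          # hypothetical option choices
    print(item_analysis(total_scores, options, key=2))
```

Items falling outside the acceptable ranges (e.g. DIF I below 30% or above 70%, or low DI) would then be flagged for revision or removal, which is the workflow the abstract describes.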


Author(s):  
Zilola Uralovna Kurbonova ◽  

The article describes the concepts of technological competence, the competency-based approach, competence, and the professional competence of preschool educators.


2016 ◽  
Vol 3 (1) ◽  
pp. 176 ◽  
Author(s):  
Marcia Pilgrim ◽  
Garry Hornby

The focus of this article is to discuss the issue of teacher preparation for special and inclusive education in the English-speaking Caribbean. The article suggests how teacher preparation for special and inclusive education in the Caribbean could be improved by the implementation of a competency-based, e-learning training program that was developed in New Zealand. The New Zealand training program is described and a brief summary of the findings of a study evaluating the effectiveness of the program is presented. Finally, the article highlights how the New Zealand program can be translated into the Caribbean context.


2019 ◽  
Vol 19 (1) ◽  
Author(s):  
Maxine Te ◽  
Felicity Blackstock ◽  
Caroline Fryer ◽  
Peter Gardner ◽  
Louise Geary ◽  
...  
