Performance of candidates disclosing dyslexia with other candidates in a UK medical licensing examination: cross-sectional study

2018 ◽  
Vol 94 (1110) ◽  
pp. 198-203 ◽  
Author(s):  
Zahid B Asghar ◽  
Aloysius Niroshan Siriwardena ◽  
Chris Elfes ◽  
Jo Richardson ◽  
James Larcombe ◽  
...  

Purpose of the study: The aim of this study was to compare the performance of candidates who declared an expert-confirmed diagnosis of dyslexia with that of all other candidates in the Applied Knowledge Test (AKT) of the Membership of the Royal College of General Practitioners licensing examination.
Study design: We used routinely collected data from candidates who took the AKT on one or more occasions between 2010 and 2015. Multivariate logistic regression was used to compare the performance of candidates who declared dyslexia with that of all other candidates, adjusting for candidate characteristics known to be associated with examination success, including age, sex, ethnicity, country of primary medical qualification, stage of training, number of attempts and time spent completing the test.
Results: The analysis included data from 14 examinations involving 14 801 candidates, of whom 2.6% (379/14 801) declared dyslexia. The pass rate for candidates who declared dyslexia was 83.6%, compared with 95.0% for other candidates. After adjusting for the covariates listed above, dyslexia was not significantly associated with pass rates in the AKT. Candidates declaring dyslexia after initially failing the AKT were more likely to have a primary medical qualification from outside the UK.
Conclusions: Performance was similar between AKT candidates disclosing dyslexia and other candidates once covariates associated with examination success were adjusted for. Candidates declaring dyslexia after initially failing the AKT were more likely to have a primary medical qualification from outside the UK.
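As a quick back-of-envelope check (not the authors' adjusted analysis), the unadjusted odds ratio of passing can be recovered from the two pass rates quoted in the abstract. The figures below come from the abstract; the study's actual result was obtained by multivariate logistic regression, which this sketch does not reproduce:

```python
def odds(p):
    """Convert a proportion to odds."""
    return p / (1 - p)

pass_dyslexia = 0.836   # pass rate, candidates declaring dyslexia
pass_other = 0.950      # pass rate, all other candidates

# Unadjusted odds ratio of passing (dyslexia vs. other candidates)
unadjusted_or = odds(pass_dyslexia) / odds(pass_other)
print(round(unadjusted_or, 2))  # → 0.27
```

The large unadjusted gap (odds ratio well below 1) makes the headline finding more striking: after covariate adjustment, the association was no longer significant.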

Author(s):  
Rachel B. Levine ◽  
Andrew P. Levy ◽  
Robert Lubin ◽  
Sarah Halevi ◽  
Rebeca Rios ◽  
...  

Purpose: United States (US) and Canadian citizens attending medical school abroad often wish to return to the US for residency, and therefore must pass US licensing exams. We describe a 2-day United States Medical Licensing Examination (USMLE) Step 2 Clinical Skills (CS) preparation course for students in the Technion American Medical School program (Haifa, Israel) between 2012 and 2016.
Methods: Students completed pre- and post-course questionnaires. The paired t-test was used to measure changes in students' perceptions of knowledge, preparation, confidence, and competence in CS from pre- to post-course. Analysis of variance was used to test for differences by gender or country of birth. We compared USMLE Step 2 CS pass rates between the 5 years prior to the course and the 5 years during which the course was offered.
Results: Ninety students took the course between 2012 and 2016. Course evaluations began in 2013. Seventy-three students agreed to participate in the evaluation, and 64 completed both the pre- and post-course surveys. Of the 64 students, 58% were US-born and 53% were male. Students reported statistically significant improvements in confidence and competence in all areas. No differences were found by gender or country of origin. The average pass rate for the 5 years prior to the course was 82%, and the average pass rate for the 5 years of the course was 89%.
Conclusion: A CS course delivered at an international medical school may help to close the gap between the pass rates of US and international medical graduates on a high-stakes licensing exam. More experience is needed to determine whether this model is replicable.
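The paired t-test used above compares each student's post-course rating with their own pre-course rating. A minimal sketch of the statistic, on entirely hypothetical 5-point confidence ratings (the study's raw data are not reported in the abstract):

```python
import math

def paired_t(pre, post):
    """Paired t statistic: mean of within-student differences
    divided by the standard error of those differences."""
    diffs = [b - a for a, b in zip(pre, post)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

# Hypothetical 5-point confidence ratings for six students
pre  = [2, 3, 2, 3, 2, 3]
post = [4, 4, 3, 5, 4, 4]
print(round(paired_t(pre, post), 2))  # → 6.71
```

Pairing removes between-student variability, which is why it is the natural test for pre/post questionnaire designs like this one.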


2021 ◽  
Author(s):  
David Hope ◽  
David Kluth ◽  
Matthew Homer ◽  
Avril Dewar ◽  
Richard Fuller ◽  
...  

Abstract
Background: Due to the diverse approaches to medical school assessment, making meaningful cross-school comparisons of knowledge is difficult. Ahead of the introduction of a national licensing assessment in the UK, we evaluated schools on "common content" items to compare candidates at different schools and to assess whether they would pass under different standard-setting regimes. Such information can help develop a cross-school consensus on standard setting for shared content.
Methods: We undertook a cross-sectional study in the academic sessions 2016-17 and 2017-18. Sixty "best of five" multiple choice items were delivered each year, with five used in both years. In 2016-17, 30 (of 31 eligible) medical schools undertook a mean of 52.6 items with 7,177 participants. In 2017-18 the same 30 medical schools undertook a mean of 52.8 items with 7,165 participants, giving a full sample of 14,342 medical students sitting common content prior to graduation. Using mean scores, we compared performance across items, carried out a "like-for-like" comparison of schools that used the same set of items, and then modelled the impact of different passing standards on these schools.
Results: Schools varied substantially in candidate total score, with large between-school effects (Cohen's d around 1). A passing standard that would see 5% of candidates at high-scoring schools fail left low-scoring schools with fail rates of up to 40%, whereas a passing standard that would see 5% of candidates at low-scoring schools fail would see virtually no candidates from high-scoring schools fail.
Conclusions: Candidates at different schools exhibited significant differences in scores in two separate sittings. Performance varied by enough that standard-setting approaches that produce realistic fail rates in one medical school may produce substantially different pass rates in other medical schools, despite identical content and candidates being governed by the same regulator. Regardless of which hypothetical standards are "correct" as judged by experts, large institutional gaps in pass rates must be explored and understood by medical educators before shared standards are applied. The study results can assist cross-school groups in developing a consensus on standard setting for future licensing assessment.
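The asymmetry in fail rates reported above can be illustrated with a simple normal model. This is an assumption-laden sketch (school score distributions treated as normal, the between-school gap fixed at exactly d = 1), not the authors' modelling:

```python
import math

def norm_cdf(z):
    """Standard normal CDF via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def inv_norm_cdf(p, lo=-10.0, hi=10.0):
    """Invert the standard normal CDF by bisection."""
    for _ in range(100):
        mid = (lo + hi) / 2
        if norm_cdf(mid) < p:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

d = 1.0  # assumed gap between high- and low-scoring schools, in SD units

# Standard chosen so 5% of the high-scoring school fails:
cut = inv_norm_cdf(0.05)           # z ≈ -1.645 in high-school units
fail_low = norm_cdf(cut + d)       # same cut, low school sits 1 SD lower
print(round(fail_low, 2))          # → 0.26

# Standard chosen so 5% of the low-scoring school fails:
fail_high = norm_cdf(cut - d)      # high school sits 1 SD above the cut
print(round(fail_high, 3))         # → 0.004
```

Even at d = 1 the same cut score produces a roughly 26% fail rate at the lower-scoring school and essentially none at the higher-scoring one; larger gaps push the former toward the 40% figure reported in the abstract.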


2020 ◽  
Vol 20 (1) ◽  
Author(s):  
Xinxin Han ◽  
Xiaotong Li ◽  
Liang Cheng ◽  
Zhuoqing Wu ◽  
Jiming Zhu

Abstract
Background: To evaluate the performance of China's new medical licensing examination (MLE) for rural general practice, which determines the number of qualified doctors who can provide primary care for China's rural residents, and to identify associated factors.
Methods: Data came from all 547 examinees of the 2017 MLE for rural general practice in Hainan province, China. Overall pass rates of the MLE and pass rates of the MLE Step 1 practical skills examination and Step 2 written examination were examined. Chi-square tests and multivariable logistic regression were used to identify examinee characteristics associated with passing Step 1 and Step 2, respectively.
Results: Of the 547 examinees, 68% passed Step 1, while only 23% of Step 1 passers passed Step 2, yielding a 15% (82 of 547) overall pass rate for the whole examination. Junior college medical graduates had 2.236 times the odds (95% CI, 1.127–4.435) of passing Step 1 compared with secondary school medical graduates. Other characteristics, including age, gender, forms of study and years of graduation, were also significantly associated with passing Step 1. In contrast, examinees' vocational school major and Step 1 score were the only two significant predictors of passing Step 2.
Conclusions: Our study reveals a low pass rate for China's new MLE for rural general practice in Hainan province, indicating relatively weak competency among graduates of China's alternative medical education. An effective long-term solution might be to improve examinees' clinical competency by mandating residency training for graduates of China's alternative medical education.
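The overall pass rate quoted above is simply the product of the two stepwise rates, since only Step 1 passers sit Step 2. A quick sanity check using only the figures from the abstract:

```python
# Figures taken directly from the abstract.
examinees = 547
overall_passers = 82                      # passed both Step 1 and Step 2

overall_rate = overall_passers / examinees
print(f"{overall_rate:.1%}")              # → 15.0%

# Consistent (to rounding) with the stepwise figures: 68% * 23% ≈ 15.6%
assert abs(0.68 * 0.23 - overall_rate) < 0.01
```

The multiplicative structure is why the overall rate is so low even though most examinees cleared Step 1: the written Step 2 exam acts as the binding constraint.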


2021 ◽  
Vol 21 (1) ◽  
Author(s):  
David Hope ◽  
David Kluth ◽  
Matthew Homer ◽  
Avril Dewar ◽  
Richard Fuller ◽  
...  

Abstract
Background: Due to differing assessment systems across UK medical schools, making meaningful cross-school comparisons of undergraduate students' performance in knowledge tests is difficult. Ahead of the introduction of a national licensing assessment in the UK, we evaluated schools' performance on a shared pool of "common content" knowledge test items to compare candidates at different schools and to assess whether they would pass under different standard-setting regimes. Such information can help develop a cross-school consensus on standard setting for shared content.
Methods: We undertook a cross-sectional study in the academic sessions 2016-17 and 2017-18. Sixty "best of five" multiple choice common-content items were delivered each year, with five used in both years. In 2016-17, 30 (of 31 eligible) medical schools undertook a mean of 52.6 items with 7,177 participants. In 2017-18 the same 30 medical schools undertook a mean of 52.8 items with 7,165 participants, creating a full sample of 14,342 medical students sitting common content prior to graduation. Using mean scores, we compared performance across items, carried out a "like-for-like" comparison of schools that used the same set of items, and then modelled the impact of different passing standards on these schools.
Results: Schools varied substantially in candidate total score, with large between-school effects (Cohen's d around 1). A passing standard that would see 5% of candidates at high-scoring schools fail left low-scoring schools with fail rates of up to 40%, whereas a passing standard that would see 5% of candidates at low-scoring schools fail would see virtually no candidates from high-scoring schools fail.
Conclusions: Candidates at different schools exhibited significant differences in scores in two separate sittings. Performance varied by enough that standards that produce realistic fail rates in one medical school may produce substantially different pass rates in other medical schools, despite identical content and candidates being governed by the same regulator. Regardless of which hypothetical standards are "correct" as judged by experts, large institutional differences in pass rates must be explored and understood by medical educators before shared standards are applied. The study results can assist cross-school groups in developing a consensus on standard setting for future licensing assessment.


2018 ◽  
Vol 5 (4) ◽  
pp. 84
Author(s):  
Katie Waine ◽  
Rachel S. Dean ◽  
Chris Hudson ◽  
Jonathan Huxley ◽  
Marnie L. Brennan

Clinical audit is a quality improvement tool used to assess and improve the clinical services provided to patients. This is the first study to investigate the extent to which clinical audit is understood and utilised in farm animal veterinary practice. A cross-sectional study collecting the experiences and attitudes of farm animal veterinary surgeons in the UK towards clinical audit was conducted using an online nationwide survey. The survey revealed that whilst just under three-quarters (n = 237/325; 73%) of responding veterinary surgeons had heard of clinical audit, nearly 50% (n = 148/301) had never been involved in a clinical audit of any species. Participants' knowledge of what a clinical audit is varied substantially, with many respondents reporting that they had received no training on clinical audit at the undergraduate or postgraduate level. Respondents who had participated in a clinical audit suggested that protected time away from clinical work was required for the process to be completed successfully. This novel study suggests that clinical audit is undertaken to some extent in farm animal practice and that practitioners perceive it can bring benefits, but more resources and support are needed for it to be implemented successfully on a wider scale.


BMJ Open ◽  
2016 ◽  
Vol 6 (8) ◽  
pp. e010551 ◽  
Author(s):  
Clare Quigley ◽  
Cristina Taut ◽  
Tamara Zigman ◽  
Louise Gallagher ◽  
Harry Campbell ◽  
...  

2018 ◽  
Vol 184 (5) ◽  
pp. 153-153 ◽  
Author(s):  
Gwen M Rees ◽  
David C Barrett ◽  
Henry Buller ◽  
Harriet L Mills ◽  
Kristen K Reyher

Prescription veterinary medicine (PVM) use in the UK is an area of increasing focus for the veterinary profession. While many studies measure antimicrobial use on dairy farms, none report the quantity of antimicrobials stored on farms, nor the ways in which they are stored. The majority of PVM treatments occur in the absence of the prescribing veterinarian, yet there is an identifiable knowledge gap surrounding PVM use and farmer decision making. To provide an evidence base for future work on PVM use, data were collected from 27 dairy farms in England and Wales in Autumn 2016. The number of different PVMs stored on farms ranged from 9 to 35, with antimicrobials being the most common therapeutic group stored. Injectable antimicrobials comprised the greatest weight of active ingredient found, while intramammary antimicrobials were the most frequent unit of medicine stored. Antimicrobials classed by the European Medicines Agency as critically important to human health were present on most farms, and the presence of expired medicines and medicines not licensed for use in dairy cattle was also common. The medicine resources available to farmers are likely to influence their treatment decisions; therefore, evidence of the PVM stored on farms can help inform understanding of medicine use.

