Calculated grades, predicted grades, forecasted grades and actual A-level grades: Reliability, correlations and predictive validity in medical school applicants, undergraduates, and postgraduates in a time of COVID-19

Author(s):  
Ian Christopher McManus ◽  
Katherine Woolf ◽  
Dave Harrison ◽  
Paul Tiffin ◽  
Lewis Paton ◽  
...  

Calculated A-level grades will replace actual, attained A-levels and other Key Stage 5 qualifications in 2020 in the UK as a result of the COVID-19 pandemic. This paper assesses the likely consequences for medical schools in particular, beginning with an overview of the research literature on predicted grades, and concluding that calculated grades are likely to correlate strongly with the predicted grades that schools currently provide on UCAS applications. A notable absence from the literature is evidence on whether predicted grades are better or worse than actual grades in predicting university outcomes. This paper provides such evidence on the reduced predictive validity of predicted A-level grades in comparison with actual A-level grades. The present study analyses the extensive data on predicted and actual grades available in UKMED (United Kingdom Medical Education Database), a large-scale administrative dataset containing longitudinal data from medical school application, through undergraduate and then postgraduate training. In particular, predicted A-level grades as well as actual A-level grades are available, along with undergraduate and postgraduate outcomes which can be used to assess the predictive validity of measures collected at selection. This study looks at two UKMED datasets. In the first dataset we compare actual and predicted A-level grades in 237,030 A-levels taken by medical school applicants between 2010 and 2018. 48.8% of predicted grades were accurate, grades were over-predicted in 44.7% of cases and under-predicted in 6.5% of cases. Some A-level subjects, General Studies in particular, showed a higher degree of over-prediction. Similar over-prediction was found for Extended Project Qualifications and for SQA Advanced Highers. The second dataset considered 22,150 18-year-old applicants to medical school between 2010 and 2014 who had both predicted and actual A-level grades.
12,600 of these students entered medical school and had final year outcomes available; in addition, postgraduate outcomes were available for 1,340 doctors. Undergraduate outcomes are predicted significantly better by actual, attained A-level grades than by predicted A-level grades, as is also the case for postgraduate outcomes. Modelling the effect of selecting only on calculated grades suggests that, because of the lower predictive validity of predicted grades, medical school cohorts for the 2020 entry year are likely to under-attain, with 13% more gaining the equivalent of the current lowest decile of performance and 16% fewer gaining the equivalent of the current top decile, effects which are then likely to follow through into postgraduate training. The problems of predicted/calculated grades can be ameliorated to some extent, although not entirely, by taking U(K)CAT, BMAT, and perhaps other measures into account to supplement calculated grades. Medical schools will probably also need to consider whether additional teaching and extra support are needed for entrants who are struggling or who might have missed out on important aspects of A-level teaching, so that standards are maintained.
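The modelled selection effect can be illustrated with a small simulation: if predicted grades measure the same underlying ability as attained grades but with more noise, admitting on the noisier measure lowers the selected cohort's expected outcomes. A minimal sketch, in which all distributions and noise levels are hypothetical rather than taken from UKMED:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Latent academic ability of applicants (all quantities hypothetical, standardised)
ability = rng.normal(size=n)

# Attained grades track ability closely; predicted grades carry more noise,
# mirroring the lower predictive validity of predicted grades reported above
actual = ability + rng.normal(scale=0.5, size=n)
predicted = ability + rng.normal(scale=1.0, size=n)

# A later (e.g. undergraduate) outcome depends on ability plus unrelated variation
outcome = ability + rng.normal(scale=0.8, size=n)

def select_top(scores, frac=0.1):
    """Admit the top `frac` of applicants ranked on `scores`."""
    return scores >= np.quantile(scores, 1 - frac)

cohort_actual = outcome[select_top(actual)]
cohort_predicted = outcome[select_top(predicted)]

print(f"mean outcome, selected on attained grades:  {cohort_actual.mean():.3f}")
print(f"mean outcome, selected on predicted grades: {cohort_predicted.mean():.3f}")
```

Under these assumptions the cohort admitted on predicted grades attains less, on average, than the cohort admitted on attained grades, which is the mechanism behind the decile shifts described above.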

BMJ Open ◽  
2021 ◽  
Vol 11 (12) ◽  
pp. e047354
Author(s):  
I C McManus ◽  
Katherine Woolf ◽  
David Harrison ◽  
Paul A Tiffin ◽  
Lewis W Paton ◽  
...  

Objectives: To compare in UK medical students the predictive validity of attained A-level grades and teacher-predicted A levels for undergraduate and postgraduate outcomes. Teacher-predicted A-level grades are a plausible proxy for the teacher-estimated grades that replaced UK examinations in 2020 as a result of the COVID-19 pandemic. The study also models the likely future consequences for UK medical schools of replacing public A-level examination grades with teacher-predicted grades. Design: Longitudinal observational study using UK Medical Education Database data. Setting: UK medical education and training. Participants: Dataset 1: 81 202 medical school applicants in 2010–2018 with predicted and attained A-level grades. Dataset 2: 22 150 18-year-old medical school applicants in 2010–2014 with predicted and attained A-level grades, of whom 12 600 had medical school assessment outcomes and 1340 had postgraduate outcomes available. Outcome measures: Undergraduate and postgraduate medical examination results in relation to attained and teacher-predicted A-level results. Results: Dataset 1: teacher-predicted grades were accurate for 48.8% of A levels, overpredicted in 44.7% of cases and underpredicted in 6.5% of cases. Dataset 2: undergraduate and postgraduate outcomes correlated significantly better with attained than with teacher-predicted A-level grades. Modelling suggests that using teacher-estimated grades instead of attained grades will mean that 2020 entrants are more likely to underattain compared with previous years, 13% more gaining the equivalent of the lowest performance decile and 16% fewer reaching the equivalent of the current top decile, with knock-on effects for postgraduate training. Conclusions: The replacement of attained A-level examination grades with teacher-estimated grades as a result of the COVID-19 pandemic may result in 2020 medical school entrants having somewhat lower academic performance compared with previous years.
Medical schools may need to consider additional teaching for entrants who are struggling or who might need extra support for missed aspects of A-level teaching.


2018 ◽  
Vol 94 (1113) ◽  
pp. 374-380 ◽  
Author(s):  
Agnes Ayton ◽  
Ali Ibrahim

Background: Eating disorders affect 1%–4% of the population and are associated with increased rates of mortality and multimorbidity. Following the avoidable deaths of three people, the parliamentary ombudsman called for a review of training for all junior doctors to improve patient safety. Objective: To review the teaching and assessment relating to eating disorders at all levels of medical training in the UK. Method: We surveyed all UK medical schools about their curricula, teaching and examinations related to eating disorders in 2017. Furthermore, we reviewed curricula and requirements for annual progression (Annual Review of Competence Progression (ARCP)) for all relevant postgraduate training programmes, including foundation training, general practice and 33 specialties. Main outcome measures: Inclusion of eating disorders in curricula, time dedicated to teaching, assessment methods and ARCP requirements. Results: The medical school response rate was 93%. The total time spent on eating disorder teaching in medical schools is less than 2 hours. Postgraduate training adds little more, with the exception of child and adolescent psychiatry. The majority of doctors are never assessed on their knowledge of eating disorders during their entire training, and only a few medical students and trainees have the opportunity to choose a specialist placement to develop their clinical skills. Conclusions: Eating disorder teaching is minimal during the 10–16 years of undergraduate and postgraduate medical training in the UK. Given the risk of mortality and multimorbidity associated with these disorders, this needs to be urgently reviewed to improve patient safety.


2019 ◽  
Vol 69 (suppl 1) ◽  
pp. bjgp19X703685
Author(s):  
Eliot Rees ◽  
David Harrison ◽  
Karen Mattick ◽  
Katherine Woolf

Background: The NHS is critically short of doctors. The sustainability of the UK medical workforce depends on medical schools producing more future GPs who are able and willing to care for under-served patient populations. The evidence for how medical schools should achieve this is scarce. We know medical schools vary in how they attract, select, and educate future doctors. We know some medical schools produce more GPs, but it is uncertain whether those schools recruit more students who are interested in general practice. Aim: This study seeks to explore how applicants' future speciality ambitions influence their choice of medical school. Method: One-to-one semi-structured interviews and focus groups were conducted with medical applicants and first-year medical students at eight medical schools around the UK. Interviews were audio-recorded and transcribed verbatim. Transcripts were analysed through thematic analysis by one researcher; a sample of 20% of transcripts was analysed by a second researcher. Results: Sixty-six individuals participated in 61 individual interviews and one focus group. Interviews lasted a mean of 54 minutes (range 22–113). Twelve expressed interest in general practice, 40 favoured other specialities, and 14 were unsure. Participants' priorities varied by speciality aspiration; those interested in general practice described favouring medical schools with early clinical experience and problem-based learning curricula, and were less concerned with cadaveric dissection and the prestige of the medical school. Conclusion: Many applicants consider future speciality ambitions before applying to medical school. Speciality aspiration appears to influence how applicants prioritise medical schools' attributes.


2021 ◽  
Author(s):  
David Hope ◽  
David Kluth ◽  
Matthew Homer ◽  
Avril Dewar ◽  
Richard Fuller ◽  
...  

Abstract Background: Due to the diverse approaches to medical school assessment, making meaningful cross-school comparisons of knowledge is difficult. Ahead of the introduction of national licensing assessment in the UK, we evaluated schools on "common content" to compare candidates at different schools and to assess whether they would pass under different standard-setting regimes. Such information can help develop a cross-school consensus on standard setting for shared content. Methods: We undertook a cross-sectional study in the academic sessions 2016–17 and 2017–18. Sixty "best of five" multiple choice items were delivered each year, with five used in both years. In 2016–17, 30 (of 31 eligible) medical schools undertook a mean of 52.6 items with 7,177 participants. In 2017–18 the same 30 medical schools undertook a mean of 52.8 items with 7,165 participants, for a full sample of 14,342 medical students sitting common content prior to graduation. Using mean scores, we compared performance across items and carried out a "like-for-like" comparison of schools that used the same set of items, then modelled the impact of different passing standards on these schools. Results: Schools varied substantially in candidate total score, with large between-school differences (Cohen's d around 1). A passing standard that would see 5% of candidates at high-scoring schools fail left low-scoring schools with fail rates of up to 40%, whereas a passing standard that would see 5% of candidates at low-scoring schools fail would see virtually no candidates from high-scoring schools fail. Conclusions: Candidates at different schools exhibited significant differences in scores in two separate sittings. Performance varied by enough that standard-setting approaches that produce realistic fail rates in one medical school may produce substantially different pass rates in other medical schools, despite identical content and the candidates being governed by the same regulator. Regardless of which hypothetical standards are "correct" as judged by experts, large institutional gaps in pass rates must be explored and understood by medical educators before shared standards are applied. The study results can assist cross-school groups in developing a consensus on standard setting for future licensing assessment.
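The consequence of applying one school's passing standard to another can be sketched numerically: with score distributions about one pooled standard deviation apart (Cohen's d around 1, as reported above), a cut score that fails 5% of the high-scoring school fails a much larger share of the low-scoring school. All numbers below are illustrative, not the study's data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical score distributions for a high- and a low-scoring school,
# roughly one pooled SD apart (Cohen's d ~ 1); none of these numbers are real
high = rng.normal(loc=70, scale=10, size=5_000)
low = rng.normal(loc=60, scale=10, size=5_000)

# Cut score chosen so that ~5% of candidates at the high-scoring school fail
cut = np.quantile(high, 0.05)

fail_high = np.mean(high < cut)
fail_low = np.mean(low < cut)

print(f"cut score: {cut:.1f}")
print(f"fail rate, high-scoring school: {fail_high:.1%}")
print(f"fail rate, low-scoring school:  {fail_low:.1%}")
```

Under these assumptions the low-scoring school's fail rate comes out several times higher than 5%, which is the asymmetry the study reports.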


2021 ◽  
Vol 108 (Supplement_7) ◽  
Author(s):  
Ricky Ellis ◽  
Duncan Scrimgeour ◽  
Jennifer Cleland ◽  
Amanda Lee ◽  
Peter Brennan

Abstract Aims: UK medical schools vary in their mission, curricula and pedagogy, but little is known of the effect of this on postgraduate examination performance. We explored differences in outcomes at the Membership of the Royal College of Surgeons examination (MRCS) between medical schools, course types, national ranking and candidate sociodemographic factors. Methods: A retrospective longitudinal study of all UK medical graduates who attempted MRCS Part A (n = 9730) and MRCS Part B (n = 4645) between 2007 and 2017, utilising the UK Medical Education Database (https://www.ukmed.ac.uk). We examined the relationship between medical school and success at first attempt of the MRCS using univariate analysis. Logistic regression modelling was used to identify independent predictors of MRCS success. Results: MRCS pass rates differed significantly between medical schools (p < 0.001). Russell Group graduates were more likely to pass MRCS Part A (odds ratio (OR) 1.79 [95% confidence interval (CI) 1.56–2.05]) and Part B (OR 1.24 [1.03–1.49]). Trainees from Standard-Entry 5-year programmes were more likely to pass MRCS at first attempt than those from extended (Gateway) courses (Part A: OR 3.72 [2.69–5.15]; Part B: OR 1.67 [1.02–2.76]). Non-graduates entering medical school were more likely to pass Part A (OR 1.40 [1.19–1.64]) and Part B (OR 1.66 [1.24–2.24]) than graduate entrants. Conclusion: Medical school, course type and sociodemographic factors are associated with success on the MRCS. This information will help to identify surgical trainees at risk of failing the MRCS so that schools of surgery can redistribute resources to those in need.
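Odds ratios of the kind reported above can be derived from a 2×2 pass/fail table. A minimal sketch with invented counts (not the study's data) showing how an OR and its Wald 95% CI are computed:

```python
import numpy as np

# Invented 2x2 pass/fail counts (rows: one graduate group vs another;
# columns: pass, fail) -- illustrative only, not the study's data
table = np.array([[1200, 400],
                  [1500, 900]])

odds = table[:, 0] / table[:, 1]   # odds of passing within each group
odds_ratio = odds[0] / odds[1]     # OR comparing the two groups

# Wald 95% CI, computed on the log-odds-ratio scale
se = np.sqrt((1.0 / table).sum())
ci = np.exp(np.log(odds_ratio) + np.array([-1.96, 1.96]) * se)

print(f"OR = {odds_ratio:.2f}, 95% CI [{ci[0]:.2f}, {ci[1]:.2f}]")
```

An OR above 1 with a CI excluding 1 indicates that the first group has significantly higher odds of passing, which is how results such as "OR 1.79 [1.56–2.05]" are read.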


BJR|Open ◽  
2021 ◽  
Author(s):  
Cindy Chew ◽  
Patrick J O'Dwyer ◽  
David Young

Objectives: The UK has a shortage of radiologists to meet the increasing demand for radiological examinations. To encourage more medical students to consider radiology as a career, increased exposure at undergraduate level has been advocated. The aim of this study was to evaluate whether formal radiology teaching hours at medical school had any association with the number of qualified radiologists joining the General Medical Council (GMC) Specialist Register. Methods: The total number of doctors joining the GMC Specialist Register as clinical radiologists, and of those with a primary medical qualification awarded in Scotland, was obtained from the GMC (2010–2020). Graduate numbers from all 4 Scottish medical schools (2000–2011) were also obtained. Hours of radiology teaching for medical schools in Scotland were obtained from the validated AToMS study. Results: Two hundred and twenty-three (6.6%) of 3347 radiologists added to the GMC Specialist Register between 2010 and 2020 received their primary medical qualification (PMQ) from Scottish universities. The number of radiologists from Scottish universities joining the GMC Specialist Register was 2.6% of the total number of Scottish medical graduates. There was no association between the number of hours (range 1–30) radiology was taught to medical students and the number who joined the Specialist Register as radiologists (p = 0.54, chi-square test for trend). Conclusion: Increased exposure to radiology teaching does not appear to influence medical students' decision to take up radiology as a career. While continued radiology exposure remains important, other strategies are required in both the short and long term to ensure radiology services are maintained without detriment to patients. Advances in knowledge: Increased hours of radiology teaching at medical school were not associated with increased numbers of radiologists joining the profession.


Author(s):  
David Metcalfe ◽  
Harveer Dev

SJTs are commonly used by organisations for personnel selection. They aim to provide realistic, but hypothetical, scenarios with possible answers that are either selected or ranked by the candidate. One such test will contribute half, or significantly more than half, of the score used to rank applicants to the UK Foundation Programme. The test involves a single paper lasting two hours and twenty minutes in which candidates answer 70 questions, which equates to approximately two minutes per question. Your responses to 60 questions will be included in your final score, while ten questions embedded throughout the test are pilot questions, which are being validated and do not count towards your final score. You will not be able to differentiate pilot from genuine test questions and should answer every question as if it 'counts'. In one SJT pilot, 96% of candidates finished the test within two hours, which provides some indication of the time pressure. It is important to answer all questions and not simply 'guess' those left at the end. Although the SJT is not negatively marked, random guesses are not allocated points: the scoring software will identify guesses by looking for unusual or sporadic answer patterns. The SJT will be held locally by individual medical schools under invigilated conditions, so your medical school should be in touch about specific local arrangements. Each SJT paper will include a selection of questions, each mapped to a specific professional attribute. Questions should be evenly distributed between attributes and between scenario types, i.e. 'patient', 'colleague', or 'personal'. The SJT will include two types of question: ● multiple choice questions (approximately one-third) ● ranking questions (approximately two-thirds). Multiple choice questions begin with a scenario and provide eight possible answers; three of these are correct and should be selected, while the remaining five are incorrect. The example in Box 2.1 provides an illustrative medical school scenario. For questions based around Foundation Programme scenarios, over 100 examples are provided for practice.


2014 ◽  
Vol 14 (1) ◽  
Author(s):  
Adrian Husbands ◽  
Alistair Mathieson ◽  
Jonathan Dowell ◽  
Jennifer Cleland ◽  
Rhoda MacKenzie

2020 ◽  
Vol 134 (6) ◽  
pp. 553-557
Author(s):  
A W Mayer ◽  
K A Smith ◽  
S Carrie ◽  
...  
Abstract Background: ENT presentations are prevalent in clinical practice but feature little in undergraduate curricula. Consequently, most medical graduates are not confident managing common ENT conditions. In 2014, the first evidence-based ENT undergraduate curriculum was published to guide medical schools. Objective: To assess the extent to which current UK medical school learning outcomes correlate with the syllabus of the ENT undergraduate curriculum. Method: Two students from each participating medical school independently reviewed all ENT-related curriculum documents to determine whether learning outcomes from the suggested curriculum were met. Results: Sixteen of 34 curricula were reviewed. Only a minority of medical schools delivered teaching on laryngectomy or tracheostomy, nasal packing or cautery, and ENT medications or surgical procedures. Conclusion: There is wide variability in ENT undergraduate education across UK medical schools. Careful consideration of which topics are prioritised, and the teaching modalities utilised, is essential. In addition, ENT learning opportunities for undergraduates outside of the medical school curriculum should be augmented.


BMJ Open ◽  
2022 ◽  
Vol 12 (1) ◽  
pp. e054616
Author(s):  
Ricky Ellis ◽  
Peter A Brennan ◽  
Duncan S G Scrimgeour ◽  
Amanda J Lee ◽  
Jennifer Cleland

Objectives: The knowledge, skills and behaviours required of new UK medical graduates are the same, but how these are achieved differs, given that medical schools vary in their mission, curricula and pedagogy. Medical school differences seem to influence performance on postgraduate assessments. To date, the relationship between medical schools, course types and performance at the Membership of the Royal Colleges of Surgeons examination (MRCS) has not been investigated. Understanding this relationship is vital to achieving alignment across undergraduate and postgraduate training, learning and assessment values. Design and participants: A retrospective longitudinal cohort study of UK medical graduates who attempted MRCS Part A (n=9730) and MRCS Part B (n=4645) between 2007 and 2017, using individual-level linked sociodemographic and prior academic attainment data from the UK Medical Education Database. Methods: We studied MRCS performance across all UK medical schools and examined relationships between potential predictors and MRCS performance using χ2 analysis. Multivariate logistic regression models identified independent predictors of MRCS success at first attempt. Results: MRCS pass rates differed significantly between individual medical schools (p<0.001), but not after adjusting for prior A-level performance. Candidates from courses other than those described as problem-based learning (PBL) were 53% more likely to pass MRCS Part A (OR 1.53 (95% CI 1.25 to 1.87)) and 54% more likely to pass Part B (OR 1.54 (1.05 to 2.25)) at first attempt after adjusting for prior academic performance. Attending a Standard-Entry 5-year medicine programme, having no prior degree and attending a Russell Group university were independent predictors of MRCS success in regression models (p<0.05). Conclusions: There are significant differences in MRCS performance between medical schools. However, this variation is largely due to individual factors such as academic ability, rather than medical school factors.
This study also highlights group level attainment differences that warrant further investigation to ensure equity within medical training.

