Using Student Assessment Data, Analyzing Instructional Practices, and Making Necessary Adjustments That Improve Student Outcomes

Author(s):  
Erica S. Lembke ◽  
R. Alex Smith ◽  
Cathy Newman Thomas ◽  
Kristen L. McMaster ◽  
Erica N. Mason

F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 450
Author(s):  
Ken Masters ◽  
Nadia Al-Wardy

Determining a Hofstee cut-off point in medical education student assessment is problematic: traditional methods can be time-consuming, inaccurate, and inflexible. To counter this, we developed a simple Android app that receives raw, unsorted student assessment data in .csv format, accepts input from multiple judges (aggregated by mean or median), calculates the Hofstee cut-off mathematically, and outputs the results along with other guiding information. The app also contains a detailed description of its functionality.
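The abstract describes the app's behavior but not its internal code. As a rough illustration only, here is a minimal Python sketch of the standard Hofstee compromise method the app implements: each judge supplies minimum/maximum acceptable cut scores and fail rates, the bounds are aggregated by mean or median, and the cut-off is taken where the observed cumulative failure curve crosses the diagonal of the judges' acceptable region. The CSV layout (one score per row) and the example judge values are assumptions, not details from the paper.

```python
import csv
import statistics

def fail_rate(scores, cut):
    """Percentage of students who would fail at a given cut score."""
    return 100.0 * sum(s < cut for s in scores) / len(scores)

def hofstee_cutoff(scores, bounds, aggregate=statistics.mean, step=0.1):
    """Hofstee compromise: find where the observed cumulative failure
    curve crosses the diagonal of the judges' acceptable region.

    bounds: one (k_min, k_max, f_min, f_max) tuple per judge, i.e.
    min/max acceptable cut score and min/max acceptable fail rate.
    """
    # Aggregate the judges' bounds by mean or median.
    k_min, k_max, f_min, f_max = (aggregate(vals) for vals in zip(*bounds))
    if k_max == k_min:
        return k_min  # degenerate case: judges agree on a single cut score

    cut = k_min
    while cut <= k_max:
        # The diagonal falls linearly from f_max at k_min to f_min at k_max.
        diagonal = f_max - (f_max - f_min) * (cut - k_min) / (k_max - k_min)
        if fail_rate(scores, cut) >= diagonal:
            return round(cut, 1)
        cut += step
    return None  # no crossing: judges' bounds are inconsistent with the data

# Hypothetical usage: raw, unsorted scores, one per CSV row.
with open("scores.csv", newline="") as f:
    scores = [float(row[0]) for row in csv.reader(f) if row]

judge_bounds = [(55, 70, 5, 20), (60, 75, 10, 25)]  # illustrative values
print(hofstee_cutoff(scores, judge_bounds, aggregate=statistics.median))
```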


2005 ◽  
Vol 86 (9) ◽  
pp. 700-706 ◽  
Author(s):  
Kathryn Parker Boudett ◽  
Richard J. Murnane ◽  
Elizabeth City ◽  
Liane Moody

2020 ◽  
Author(s):  
Stephanie Cupp ◽  
Paolo Moore ◽  
Norman Fortenberry

2019 ◽  
Vol 19 (1) ◽  
Author(s):  
Boaz Shulruf ◽  
Gary Velan ◽  
Lesley Forster ◽  
Anthony O’Sullivan ◽  
Peter Harris ◽  
...  

Abstract
Background: There is an ongoing debate about the impact of studying medicine at rural vs. metropolitan campuses on student assessment outcomes. The UNSW Medicine Rural Clinical School has five main campuses: Albury-Wodonga, Coffs Harbour, Griffith, Port Macquarie and Wagga Wagga. Historical data on student assessment outcomes at these campuses raised concerns about potential biases in the assessments undertaken, as well as about the availability and quality of learning resources. The current study aims to identify the extent to which the location of examination (rural versus metropolitan) affects student marks in OSCEs.
Methods: Assessment data for this study were drawn from 275 medical students who sat their final examinations in Years 3 and 6 of the undergraduate Medicine program at UNSW in 2018. The data consist of matched student assessment results from the Year 3 (Y3) MCQ examination and OSCE, and from the Year 6 (Y6) MCQ, OSCE and management viva examinations. The analysis used univariate analysis of variance and linear regression models to identify the impact of site of learning and site of examination on assessment outcomes.
Results: Neither site of learning nor site of examination had a significant impact on OSCE or management viva assessment outcomes once potential confounders were controlled.
Conclusion: It is suggested that some of the supposed disadvantages inherent to rural campuses are effectively mitigated by their advantages: more intensive interaction with patients and with the general and medical communities at those sites, as well as effective e-learning resources and moderation of assessment grades.
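The paper names its models but does not show them; purely as a hedged sketch of the kind of analysis described, a linear regression in Python with statsmodels might estimate site effects while controlling for prior performance. The column names below are assumptions for illustration, not the study's actual variables.

```python
import pandas as pd
import statsmodels.formula.api as smf

# Hypothetical columns: osce_mark (outcome), site_of_learning and
# site_of_exam (rural/metropolitan factors), and mcq_mark standing in
# as a confounder for prior ability.
df = pd.read_csv("assessment_results.csv")

# Linear model with categorical site effects and the MCQ mark controlled.
model = smf.ols(
    "osce_mark ~ C(site_of_learning) + C(site_of_exam) + mcq_mark",
    data=df,
).fit()
print(model.summary())  # site coefficients near zero would echo the finding
```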


2017 ◽  
Vol 16 (1) ◽  
pp. rm1 ◽  
Author(s):  
Marilyne Stains ◽  
Trisha Vickrey

The discipline-based education research (DBER) community has invested in the research and development of evidence-based instructional practices (EBIPs) for decades. Unfortunately, investigations of the impact of EBIPs on student outcomes typically do not characterize instructors' adherence to an EBIP, often assuming that implementation was as the developers intended. The validity of such findings is compromised, since positive or negative outcomes can be incorrectly attributed to an EBIP when other factors affecting implementation are present. This methodological flaw can be overcome by developing measures of the fidelity of implementation (FOI) of an intervention, a construct extensively studied in other fields, such as healthcare. Unfortunately, few frameworks for measuring FOI in educational settings exist, which likely contributes to the absence of FOI constructs from most impact studies of EBIPs in DBER. In this Essay, we leverage the FOI literature from other fields to propose an appropriate framework for FOI within the context of DBER. We describe how this framework enhances the validity of EBIP impact studies and provide methodological guidelines for integrating it into such studies. Finally, we demonstrate the application of our framework to peer instruction, a commonly researched EBIP within the DBER community.


2020 ◽  
Vol 14 ◽  
pp. 89-111
Author(s):  
Lisa Dyson

Secondary schools in New Zealand use assessment data for school self-evaluation, but little research has explored exactly how schools use these data. This case study of selected high schools explored the perspectives of teachers and school leaders whose schools had recently implemented a student assessment tracking and monitoring "traffic light" tool. Informed by a realist approach, the study involved a series of three focus groups followed by individual interviews with 13 educators at four secondary schools that had been identified as effective at school self-evaluation. The results highlight that data use processes led to changes in teachers' practice and contributed to structural changes in these schools. This study shows that data use can be a powerful force with the potential for good, but it also raises some concerns about the unintended consequences of using assessment data.
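The abstract does not describe how the "traffic light" tool classifies students. As a purely hypothetical illustration of such a tracker, a school might band each student's progress against an expected credit target; the thresholds and names below are invented for the sketch, not taken from the study.

```python
from typing import Literal

Light = Literal["red", "amber", "green"]

def traffic_light(credits_achieved: int, credits_expected: int) -> Light:
    """Hypothetical banding: green is on track, amber slightly behind,
    red at risk. Thresholds are illustrative assumptions."""
    if credits_expected <= 0:
        return "green"  # nothing expected yet, so no flag
    ratio = credits_achieved / credits_expected
    if ratio >= 1.0:
        return "green"
    if ratio >= 0.8:
        return "amber"
    return "red"

print(traffic_light(credits_achieved=14, credits_expected=20))  # "red"
```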

