Progress Testing
Recently Published Documents

TOTAL DOCUMENTS: 82 (five years: 17)
H-INDEX: 18 (five years: 1)

2021, Vol. Publish Ahead of Print
Author(s): Vincent Dion, Christina St-Onge, Ilona Bartman, Claire Touchie, Debra Pugh

2020, Vol. 7
Author(s): Lisa Herrmann, Christina Beitz-Radzio, Dora Bernigau, Stephan Birk, Jan P. Ehlers, ...

Author(s): Marita Fadhilah, Nurmila Sari, Sophie Dwiyanti, Erike A Suwarsono, Fika Ekayanti

Introduction: Progress testing (PT) reflects students' knowledge development and is a valuable indicator for curriculum evaluation. Since 2009, the Faculty of Medicine of Syarif Hidayatullah State Islamic University Jakarta (FMSH) has conducted PT annually as a formative assessment. In 2012, a curriculum reform was undertaken to revise the 2005 curriculum; until then, PT and curriculum evaluation had not been conducted concurrently. This study aims to evaluate PT and to assess whether PT performance is related to final module scores, as part of curriculum evaluation.

Method: The study reviews PT for two cohorts: 571 students in 2015 and 562 students in 2016. The PT covered 120 system-based topics. Final scores for the neuropsychiatry module under the old (2015) and new (2016) curricula were reviewed, since these scores were lower than those of other modules. Comparisons were made using ANOVA, and Pearson correlations were calculated to examine the relationship between PT and final module scores.

Results: PT scores differed significantly between grades (p < 0.001) and improved significantly from 2015 to 2016 (54.49 ± 7.43 vs. 55.07 ± 8.32; p < 0.001). The mean final score of the new neuropsychiatry module was 69.36 ± 3.78, compared with 70.92 ± 3.99 for the old one. Pearson correlation showed a weak correlation between final scores for the neuropsychiatry module and PT scores in 2015 (ρ = 0.191, p = 0.011).

Discussion: PT scores increased significantly. Although the final score of the new neuropsychiatry module was lower than that of the old one, there was heterogeneity in scores within the old module. The small number of neuropsychiatry items in the PT explains why the correlation between PT and final scores was weak. Despite this weak correlation, PT and final module scores appear useful as indicators for curriculum evaluation.
Further study is needed to analyze PT scores and modules across more cohorts.
International Journal of Human and Health Sciences Vol. 05 No. 01 January’21 Page: 62-68
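The study's key statistic, the Pearson product-moment correlation between PT scores and final module scores, can be illustrated with a minimal sketch. This is not the authors' code or data; the `pearson_r` function and the sample lists are hypothetical, shown only to make the formula behind a value like ρ = 0.191 concrete.

```python
from math import sqrt

def pearson_r(xs, ys):
    """Pearson product-moment correlation coefficient between two
    equal-length score lists: covariance divided by the product of
    the standard deviations (computed here via sums of squares)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical example: perfectly linearly related scores give r = 1.0.
pt_scores = [50.0, 54.0, 58.0, 62.0]
module_scores = [65.0, 69.0, 73.0, 77.0]
r = pearson_r(pt_scores, module_scores)
```

A coefficient near 0 (such as the study's 0.191) indicates a weak linear relationship, which the authors attribute to the small number of neuropsychiatry items in the PT.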


2020, Vol. 17 (3), pp. 195-211
Author(s): James Thompson, Donald Houston

The paramedic profession is rapidly evolving and has seen significant expansion in both the scope of practice and public expectations of the paramedic role in recent years. Increasing demands for greater knowledge and skills among paramedics have implications for the university programs tasked with their pre-employment training. The certification of paramedic student knowledge typically occurs incrementally across degree programs, with aggregate results used to determine student qualification. There are concerns about the learning sustainability of this approach: the narrow focus of assessment practices within siloed subjects often neglects the more holistic, integrated knowledge requirements of paramedicine. Programmatic assessment is becoming increasingly common in medical education, offering more comprehensive, longitudinal information about student knowledge, ability and progress, obtained across an entire program of study. A common instrument of programmatic assessment is the progress test, which evaluates student understanding against the full breadth of the discipline's expectations and is administered frequently across an entire curriculum, regardless of student year level. Our project explores the development, implementation and evaluation of modified progress-testing approaches within a single-semester capstone undergraduate paramedic topic. We describe the first reported approaches to interpreting the breadth of knowledge requirements for the discipline and preparing and validating this as a multiple-choice test instrument. We examined students at three points across the semester: twice with an identical MCQ test spaced 10 weeks apart, and finally with an oral assessment informed by each student's individual results on the second test. The changes in student performance between the two MCQ tests were evaluated, as were the results of the final oral assessment. We also analysed student feedback on their perceptions and experiences.
Mean correct responses increased by 65 per cent between tests 1 and 2, with substantial declines in the numbers of incorrect and "don't know" responses. Our results demonstrate a substantial increase in correct responses between the two tests, a high mean score in the viva, and broad agreement about the significant impact these approaches had on learning growth.
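The reported 65 per cent gain is a relative change in mean correct responses between the two sittings of the same MCQ test. A small sketch of that arithmetic, using hypothetical tallies rather than the study's actual data (the function names and figures are illustrative only):

```python
def response_summary(responses):
    """Tally the three answer categories used in a progress test with a
    'don't know' option. `responses` is a list of strings drawn from
    {"correct", "incorrect", "dont_know"}."""
    return {k: responses.count(k) for k in ("correct", "incorrect", "dont_know")}

def percent_change(before, after):
    """Relative change from `before` to `after`, in per cent."""
    return (after - before) / before * 100.0

# Hypothetical means: 20 correct on test 1 rising to 33 on test 2
# is a 65 per cent relative increase.
gain = percent_change(20.0, 33.0)
```

Tracking the "don't know" category separately is what lets the authors distinguish genuine knowledge growth from students switching to guessing.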


2020, Vol. 30 (2), pp. 943-953
Author(s): D. R. Rutgers, J. P. J. van Schaik, C. L. J. J. Kruitwagen, C. Haaring, W. van Lankeren, ...
