Rating of Physiotherapy Student Clinical Performance in a Paediatric Setting: Is It Possible to Gain Assessor Consistency?

Author(s):  
Tessa Fulton ◽  
Kerry Myatt ◽  
Garry W Kirwan ◽  
Courtney R Clark ◽  
Megan Dalton

Abstract Background: During workplace-based clinical placements, best practice in assessment dictates that students should expect consistency between assessors rating their performance. To assist clinical educators (CEs) to provide a consistent assessment of physiotherapy student performance, nine paediatric vignettes depicting various standards of student performance, as assessed by the Assessment of Physiotherapy Practice (APP), were developed. The project aimed to evaluate the consistency of physiotherapy educators assessing student competence in a paediatric setting using video vignettes alongside the APP. Methods: Thirty-six CEs, each with a minimum of three years' clinical experience and having supervised a student within the past year, were sent three videos at four-week intervals. Videos depicted the same clinical scenario; however, student performance varied with each video. Consistency among raters was assessed using percentage agreement to establish reliability. Results: The vignettes were assessed a combined total of 60 times. Across scenarios, percentage agreement at the "not adequate" level was 100%, and when ratings of adequate or better were combined, percentage agreement was >86%. The study demonstrated strong consensus when comparing "not adequate" with "adequate or better" student performance. Importantly, no student performance scripted as not adequate was passed by any assessor. Conclusions: Experienced educators demonstrate consistency in distinguishing a "not adequate" from an "adequate or better" performance when assessing a one-off student performance using the APP. These validated video vignettes will be a valuable training tool to improve educator consistency when assessing student performance in paediatric physiotherapy.
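As an aside on the statistic reported here: percentage agreement at a dichotomised cut is simply the share of assessor ratings falling on the same side of the "not adequate" / "adequate or better" divide as the scripted standard. A minimal sketch in Python, using hypothetical ratings rather than any data from the study, is:

```python
# Minimal sketch (Python): percentage agreement on a dichotomised rating.
# The ratings below are hypothetical; the study's APP rating data are not reproduced here.

def dichotomise(rating: str) -> str:
    """Collapse a rating into 'not adequate' vs 'adequate or better'."""
    return "not adequate" if rating == "not adequate" else "adequate or better"

def percentage_agreement(ratings: list[str], scripted: str) -> float:
    """Share of assessors whose dichotomised rating matches the scripted standard."""
    matches = sum(dichotomise(r) == dichotomise(scripted) for r in ratings)
    return 100.0 * matches / len(ratings)

# Example: one vignette scripted as 'adequate', rated by five assessors.
assessor_ratings = ["adequate", "good", "adequate", "not adequate", "good"]
print(percentage_agreement(assessor_ratings, scripted="adequate"))  # 80.0
```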

2019 ◽  
Vol 4 (S1) ◽  
Author(s):  
Neil Tuttle ◽  
Sean A. Horan

Abstract Background: Simulation-based learning (SBL) activities are increasingly used to replace or supplement clinical placements for physiotherapy students. There is limited literature evaluating SBL activities that replace on-campus teaching, and to our knowledge, no studies evaluate the role of SBL in counteracting the negative impact of a delay between content teaching and clinical placements. The aims of this study were to (i) determine the effect on clinical placement performance of replacing 1 week of content teaching with an SBL activity and (ii) determine whether a delay between content teaching and clinical placement affected clinical placement performance. Methods: This study is a retrospective cohort study. Participants included students in the first two clinical placements of a graduate-entry, masters-level program. Six hundred twenty-nine student placements were analysed: 285 clinical placements where students undertook a 20-hour SBL activity immediately prior to clinical placement were compared with 344 placements where students received traditional content. Of the placements where students received the SBL, 147 occurred immediately following content teaching and 138 had a delay of at least 5 weeks. Performance on clinical placement was assessed using the Assessment of Physiotherapy Practice (APP). Results: There was a significant main effect of SBL, with higher APP marks for the experimental group (3.12/4, SD = 0.25 vs 3.01/4, SD = 0.22), and post hoc analysis indicated marks were significantly higher for all seven areas of assessment. Students whose placements immediately followed content teaching performed better on mid-placement APP marks in two areas of assessment (analysis and planning, and intervention) compared with students for whom there was a delay. There were no statistically significant differences in relation to delay for end-of-placement APP marks. Conclusion: Replacing 1 week of classroom teaching with a targeted SBL activity immediately before placement significantly improved student performance on that clinical placement. A negative impact of delay was found on mid-placement, but not end-of-placement, APP marks. Findings of improved performance when replacing a week of content teaching with a targeted SBL activity, and of poorer performance on mid-placement marks with a delay between content teaching and clinical placement, may have implications for curriculum design.
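The abstract reports only group means and standard deviations for the main effect, and it does not state which statistical test was used. Purely as an illustration of how group means like those reported could be compared from summary statistics alone (the choice of Welch's t-test here is an assumption, not the study's method):

```python
# Illustrative sketch (Python/SciPy): comparing mean APP marks between the SBL and
# traditional-content groups using only the summary statistics reported in the abstract.
# The abstract does not specify this test; Welch's t-test is used purely for illustration.
from scipy.stats import ttest_ind_from_stats

result = ttest_ind_from_stats(
    mean1=3.12, std1=0.25, nobs1=285,   # placements preceded by the SBL activity
    mean2=3.01, std2=0.22, nobs2=344,   # placements with traditional content
    equal_var=False,                    # Welch's correction for unequal variances
)
print(result.statistic, result.pvalue)
```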


BMJ Open ◽  
2020 ◽  
Vol 10 (6) ◽  
pp. e034945
Author(s):  
Christine Ossenberg ◽  
Marion Mitchell ◽  
Amanda Henderson

Introduction: Current perspectives present feedback as a dynamic, dialogic process. It is widely accepted that feedback can have an impact on workplace performance; however, how dialogic feedback is enacted with the learner in authentic healthcare settings is less apparent. This paper seeks to describe the design and development of an implementation study to promote the learner voice in the feedback process and improve feedback encounters between learners and learning partners in healthcare settings. Methods and analysis: A quasi-experimental study design will be used to evaluate whether implementation of a work-based intervention to improve feedback impacts student performance during clinical placements in healthcare settings. Student performance will be measured at three time points: baseline (pre), mid-placement (post-test 1) and end-placement (post-test 2), in keeping with the standard assessment processes of the participating university. The intervention is underpinned by Normalisation Process Theory and involves a layered design that targets learners and learning partners using best-practice education strategies. Data regarding participants' engagement with feedback during clinical placements and participants' level of adoption of the intervention will be collected at the completion of the clinical placement period. Ethics and dissemination: This study has ethics approval from both the Griffith University and Metro South Health Human Research Ethics Committees. Dissemination of results will be local, national and international through forums, seminars, conferences and publications.


Author(s):  
Andy Bell ◽  
Jennifer Kelly ◽  
Peter Lewis

Abstract: Purpose: Over the past two decades, the discipline of paramedicine has seen exponential growth as it moved from a work-based training model to that of an autonomous profession grounded in academia. With limited evidence-based literature examining assessment in paramedicine, this paper aims to describe student and academic views on the preference for the OSCE as an assessment modality, the sufficiency of pre-OSCE instruction, and whether OSCE performance is perceived as an indicator of clinical performance. Design/Methods: A voluntary, anonymous survey was conducted to examine the perception of the reliability and validity of the Objective Structured Clinical Examination (OSCE) as an assessment tool by students sitting the examination and the academics who facilitate the assessment. Findings: The results of this study revealed that the more confident students are in the reliability and validity of the assessment, the more likely they are to perceive the assessment as an effective measure of their clinical performance. The perception of reliability and validity differs when acted upon by additional variables, with the level of anxiety associated with the assessment and the adequacy of performance feedback cited as major influences. Research Implications: The findings from this study indicate the need for further discipline-specific research into assessment methodologies in paramedicine to determine best-practice models for high-quality assessment. Practical Implications: The development of evidence-based best-practice guidelines for the assessment of student paramedics should be of the utmost importance to a young, developing profession such as paramedicine. Originality/Value: There is very little research in the discipline-specific area of assessment for paramedicine, and discipline-specific education research is essential for professional growth. Limitations: The principal researcher was a faculty member of one of the institutions surveyed; however, all data were non-identifiable at the time of collection. Keywords: Paramedic; paramedicine; objective structured clinical examination; OSCE; education; assessment.


2021 ◽  
pp. e20200018
Author(s):  
Sarah Wojkowski ◽  
Kathleen E. Norman ◽  
Paul Stratford ◽  
Brenda Mori

Purpose: This research examines 1 year of cross-sectional, Canada-wide ratings from clinical instructors using the Canadian Physiotherapy Assessment of Clinical Performance (ACP) and analyzes the profiles of physiotherapy students' performance ratings over the course of their entry-to-practice clinical placements. Method: Canadian physiotherapy programmes that use the ACP were invited to submit anonymized, cross-sectional data for placements completed during 2018. Descriptive analyses and summary statistics were completed. Mixed-effects modelling was used to create typical performance profiles for each evaluation criterion in the ACP. Stepwise ordered logistic regression was also completed. Results: Ten programmes contributed data on 3,290 placements. Profiles were generated for each ACP evaluative item by means of mixed-effects modelling; three profiles are presented. In all cases, the predicted typical performance by the end of 24 months of study was approximately the rating corresponding to entry level. Subtle differences among profiles were identified, including the rate at which a student may be predicted to receive a rating of "entry level." Conclusions: This analysis identified that, in 2018, the majority of Canadian physiotherapy students were successful on clinical placements and typically achieved a rating of "entry level" on ACP items at the end of 24 months.
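The abstract does not give the model specification, but a mixed-effects model of the kind described (an item rating predicted from time in the programme, with a random effect per student) might be sketched as follows. All column names and the data file below are hypothetical; this is not the study's actual model.

```python
# Minimal sketch (Python/statsmodels): a mixed-effects model predicting an ACP item
# rating from months of study, with a random intercept per student. Column names
# (rating, months_of_study, student_id) and the CSV file are hypothetical.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("acp_placements.csv")   # hypothetical anonymised placement data

model = smf.mixedlm(
    "rating ~ months_of_study",          # fixed effect: time in the programme
    data=df,
    groups=df["student_id"],             # random intercept for each student
)
fit = model.fit()
print(fit.summary())
```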


2018 ◽  
Vol 25 (4) ◽  
pp. 266-271 ◽  
Author(s):  
R. Lee Tyson ◽  
Susan Brammer ◽  
Diana McIntosh

BACKGROUND: This article summarizes the experiences of a Midwest college of nursing when telepsychiatry was introduced for psychiatric-mental health post-master's nurse practitioner students to use in a clinical internship. AIMS: Implications for nurse practitioner educators will be identified, and recommendations for future research will be explored. METHOD: Described are (1) the policies and procedures the institution considered, (2) the challenges encountered by faculty and students, and (3) strategies, and the limitations of those strategies, for defining best practice, determining what didactic content should be taught, and structuring clinical placements. RESULTS: Implications for nurse practitioner educators, practice, and research are identified. CONCLUSIONS: It is clear that telepsychiatry has an important role in the clinical education of psychiatric-mental health nurse practitioners. It is working well as a clinical internship option. The college of nursing is continuing to examine and address issues and looks forward to enhancing the telepsychiatry experience for students in the future.


2012 ◽  
Vol 92 (3) ◽  
pp. 416-428 ◽  
Author(s):  
Kathryn E. Roach ◽  
Jody S. Frost ◽  
Nora J. Francis ◽  
Scott Giles ◽  
Jon T. Nordrum ◽  
...  

Background: Based on changes in core physical therapy documents and problems with the earlier version, the Physical Therapist Clinical Performance Instrument (PT CPI): Version 1997 was revised to create the PT CPI: Version 2006. Objective: The purpose of this study was to validate the PT CPI: Version 2006 for use with physical therapist students as a measure of clinical performance. Design: This was a combined cross-sectional and prospective study. Methods: A convenience sample of physical therapist students from the United States and Canada participated in this study. The PT CPI: Version 2006 was used to collect CPI item-level data from the clinical instructor about student performance at the midterm and final evaluation periods of the clinical internship. Midterm evaluation data were collected from 196 students, and final evaluation data were collected from 171 students. The students who participated in the study had a mean age of 24.8 years (SD=2.3, range=21–41). Sixty-seven percent of the participants were from programs in the United States, and 33% were from Canada. Results: The PT CPI: Version 2006 demonstrated good internal consistency, and factor analysis with varimax rotation produced a 3-factor solution explaining 94% of the variance. Construct validity was supported by differences in CPI item scores between students on early compared with final clinical experiences. Validity was also supported by significant score changes from midterm to final evaluations for students on both early and final internships and by fair to moderate correlations between prior clinical experience and remaining course work. Limitations: This study did not examine rater reliability. Conclusion: The results support the PT CPI: Version 2006 as a valid measure of physical therapist student clinical performance.
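For readers unfamiliar with the two analyses named in this abstract, a brief illustrative sketch follows: Cronbach's alpha for internal consistency (computed directly from its formula) and an exploratory factor analysis with varimax rotation (via the factor_analyzer package). The item-level data frame and its source file are hypothetical, not the study's data.

```python
# Illustrative sketch (Python): internal consistency and varimax-rotated factor analysis
# on hypothetical CPI item-level scores; not the study's data or exact procedure.
import pandas as pd
from factor_analyzer import FactorAnalyzer

def cronbach_alpha(items: pd.DataFrame) -> float:
    """Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)."""
    k = items.shape[1]
    item_var = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_var / total_var)

cpi = pd.read_csv("cpi_item_scores.csv")      # hypothetical item-level scores, one row per student
print("alpha:", cronbach_alpha(cpi))

fa = FactorAnalyzer(n_factors=3, rotation="varimax")   # 3-factor solution, varimax rotation
fa.fit(cpi)
print(fa.loadings_)                                    # item loadings on the three factors
```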

