Summative Feedback
Recently Published Documents


TOTAL DOCUMENTS: 34 (FIVE YEARS: 12)
H-INDEX: 5 (FIVE YEARS: 1)

Author(s):  
Mireilla Bikanga Ada

Abstract: This paper reports an evaluation of a mobile web application, “MyFeedBack”, that can deliver both feedback and marks on assignments to students from their lecturer. It enables them to use any device, anywhere, at any time, to check on and receive their feedback. It keeps the feedback private to the individual student. It enables and successfully fosters dialogue about the feedback between the students and the educator. Feedback and marks were already being delivered using the institution’s learning environment/management system, “Moodle”. The study used a sequential explanatory mixed-method approach. Two hundred and thirty-nine (239) participants reported on their experiences of receiving feedback and were divided among several groups: (a) feedback delivered in “Moodle”, (b) formative feedback in “MyFeedBack”, and (c) summative feedback in “MyFeedBack”. Overall, results showed significantly more positive attitudes towards “MyFeedBack” than towards “Moodle”, with the summative assessment subgroup being more positive than the formative subgroup. There was an unprecedented increase in communication and feedback dialogue between the lecturer and the students. Qualitative results enriched and complemented these findings. The paper provides guidelines for an enabling technology for assessment feedback. These offer insight into the extent to which any of the new apps and functionalities that have become available since this study are likely to be viewed favourably by learners and to help achieve the desired pedagogical outcomes. The guidelines are: (1) be accessible from any device, making feedback available anywhere, at any time; (2) display feedback first (before the grade/mark); (3) enable personalisation of group feedback by the teacher; (4) provide privacy for each student; (5) facilitate dialogue and communication about the feedback; and (6) include a monitoring feature. Three goals already put forward in the literature, namely (1) making the feedback feel more personal, (2) achieving a quicker turnaround by making this easier for teachers, and (3) prompting more dialogue between educators and students, are advanced by this study, which shows how they can be supported by software and that, when they are achieved, users strongly approve of them.
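As a concrete illustration of guidelines (2), (4) and (5), the sketch below models a single student’s feedback record that shows the written feedback before the mark, keeps the record scoped to one student, and supports a dialogue thread. It is a minimal, hypothetical Python sketch; the class and method names are invented and do not come from the MyFeedBack implementation.

```python
# Hypothetical sketch of a feedback record following the guidelines above:
# feedback is shown before the mark, the record is private to one student,
# and a dialogue thread is supported. Names are illustrative, not MyFeedBack's.
from dataclasses import dataclass, field
from typing import List, Optional


@dataclass
class FeedbackRecord:
    student_id: str                      # guideline 4: record is scoped to one student
    assignment_id: str
    feedback_text: str                   # guideline 2: feedback is the primary payload
    mark: Optional[float] = None
    mark_revealed: bool = False          # mark stays hidden until feedback is opened
    dialogue: List[str] = field(default_factory=list)   # guideline 5: threaded comments

    def open_feedback(self) -> str:
        """Student sees the written feedback first (guideline 2)."""
        self.mark_revealed = True        # mark unlocks only after feedback is displayed
        return self.feedback_text

    def reveal_mark(self) -> Optional[float]:
        """Return the mark only once the feedback has been viewed."""
        return self.mark if self.mark_revealed else None

    def add_comment(self, author: str, text: str) -> None:
        """Guideline 5: both student and lecturer can continue the dialogue."""
        self.dialogue.append(f"{author}: {text}")


record = FeedbackRecord("s123", "essay-1", "Clear argument; expand the evaluation section.", mark=68.0)
print(record.reveal_mark())      # None -- mark hidden until feedback is read
print(record.open_feedback())    # feedback shown first
print(record.reveal_mark())      # 68.0
record.add_comment("student", "Could you point me to an example of a stronger evaluation?")
```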


2021 ◽  
pp. bmjqs-2020-012105
Author(s):  
Jennifer S Myers ◽  
Jeanne M Kin ◽  
John E Billi ◽  
Kathleen G Burke ◽  
Richard Van Harrison

Purpose: A3 problem solving is part of the Lean management approach to quality improvement (QI). However, few tools are available to assess A3 problem-solving skills. The authors sought to develop an assessment tool for problem-solving A3s with an accompanying self-instruction package and to test agreement in assessments made by individuals who teach A3 problem solving. Methods: After reviewing the relevant literature, the authors developed an A3 assessment tool and self-instruction package over five improvement cycles. Lean experts and individuals from two institutions with QI proficiency and experience teaching QI provided iterative feedback on the materials. Tests of inter-rater agreement were conducted in cycles 3, 4 and 5. The final assessment tool was tested in a study involving 12 raters assessing 23 items on six A3s that were modified to enable testing a range of scores. Results: The intraclass correlation coefficient (ICC) for overall assessment of an A3 (each rater’s mean on the 23 items per A3, compared across 12 raters and 6 A3s) was 0.89 (95% CI 0.75 to 0.98), indicating excellent reliability. For the 20 items with appreciable variation in scores across A3s, ICCs ranged from 0.41 to 0.97, indicating fair to excellent reliability. Raters from the two institutions scored items similarly (mean ratings of 2.10 and 2.13, p=0.57). Physicians provided marginally higher ratings than QI professionals (mean ratings of 2.17 and 2.00, p=0.003). Raters took an average of 1.5 hours to complete the self-instruction package and 2.0 hours to rate the six A3s. Conclusion: This study provides evidence of the reliability of a tool to assess healthcare QI project proposals that use the A3 problem-solving approach. The tool also demonstrated evidence of measurement, content and construct validity. QI educators and practitioners can use the free online materials to assess learners’ A3s, provide formative and summative feedback on QI project proposals and enhance their teaching.
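For readers unfamiliar with the statistic, the sketch below computes a Shrout–Fleiss ICC(2,1) (two-way random effects, absolute agreement, single rater) from a targets-by-raters score matrix. The paper does not state which ICC form was used, so this is an assumed illustration of the reliability measure, not a reproduction of the study’s analysis; the toy data are random.

```python
# Illustration of a two-way random-effects, absolute-agreement ICC (Shrout &
# Fleiss ICC(2,1)). Which ICC form the study used is not stated in the abstract,
# so treat this as a generic example of the statistic, with made-up data.
import numpy as np


def icc_2_1(scores: np.ndarray) -> float:
    """scores: (n_targets, k_raters) matrix, e.g. one mean score per A3 per rater."""
    n, k = scores.shape
    grand = scores.mean()
    row_means = scores.mean(axis=1)           # per-target (per-A3) means
    col_means = scores.mean(axis=0)           # per-rater means

    ss_rows = k * ((row_means - grand) ** 2).sum()
    ss_cols = n * ((col_means - grand) ** 2).sum()
    ss_total = ((scores - grand) ** 2).sum()
    ss_err = ss_total - ss_rows - ss_cols

    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))

    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)


# Toy example: 6 A3s rated by 12 raters (random data, not the study's scores).
rng = np.random.default_rng(0)
true_quality = rng.normal(2.0, 0.5, size=(6, 1))            # each A3's underlying quality
ratings = true_quality + rng.normal(0, 0.15, size=(6, 12))   # each rater adds noise
print(round(icc_2_1(ratings), 2))
```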


Author(s):  
Nico Willert

Over the past decade, video-game and gamification elements have been used in different fields of research. However, contextualized usage of these elements is still underrepresented in current research. For that reason, this study tries to identify contextualized game elements in e-learning environments for computer science education. A systematic literature review examines how game elements are currently used to provide feedback in computer science education. The relevant papers were identified through a combination of search terms and analyzed according to a defined scope that focuses on formative and summative feedback. In short, most feedback in computer science education that is not given directly by an instructor is implemented through automated code tests. These are supported by techniques that monitor students’ performance and their progress towards the set goal. Game and gamification elements play a subordinate role in providing feedback and are often used only to enhance the monitoring process.
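To make the finding concrete, here is a minimal sketch of feedback delivered through an automated code test: each assertion carries a formative message that the student sees when it fails. The exercise (a mean function with a deliberate floor-division bug) and the messages are invented for illustration and are not drawn from the reviewed papers.

```python
# Minimal illustration of feedback via automated code tests: each failing
# assertion surfaces a formative message. The exercise and messages are invented.
import unittest


def mean(values):
    """Student-submitted solution under test (deliberately buggy: floor division)."""
    return sum(values) // len(values)


class MeanFeedbackTests(unittest.TestCase):
    def test_whole_number_result(self):
        self.assertEqual(mean([2, 4, 6]), 4,
                         "Start with inputs whose mean is a whole number.")

    def test_fractional_result(self):
        # Fails for the buggy solution; the message is the formative feedback.
        self.assertEqual(mean([1, 2]), 1.5,
                         "The mean is not always a whole number -- check your division operator.")


if __name__ == "__main__":
    unittest.main(verbosity=2)
```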


2020 ◽  
Author(s):  
Benjamin De Witte ◽  
Charles Barnouin ◽  
Richard Moreau ◽  
Arnaud Leleve ◽  
Xavier Martin ◽  
...  

Abstract Background: There is general agreement on the importance of acquiring laparoscopic skills outside the operating room. Over the past two decades, simulation-based training and simulators have been used more extensively in surgeons’ training. Nevertheless, learning through simulation-based systems is hindered by several flaws. High-fidelity simulators are cost-prohibitive, which limits training opportunities, and their use elicits a high cognitive load. Low-fidelity simulators lack haptic, direct and summative feedback. Our goal was to develop a new low-fidelity simulator that integrates effective learning features, including a new assessment variable, while limiting the associated costs, and to assess its primary validity. Methods: We engineered a low-fidelity simulator for teaching basic laparoscopic skills, taking into account psychomotor skills, direct and summative feedback, and key engineering features (haptic feedback and complementary assessment variables). Afterwards, 77 participants across 4 surgical skill levels (17 experts, 12 intermediates, 28 inexperienced interns and 20 novices) tested the simulator. We checked content validity using a 10-point Likert scale and assessed the simulator’s discriminative power by comparing the 4 groups’ performance over two sessions, using 3 variables: time, number of errors (collisions) and affine velocity. Results: The mean content-validation score was 7.57/10. The statistical analysis showed performance differences on the selected variables among the groups (p<0.001). Conclusion: We developed an affordable and validated simulator for testing and learning basic laparoscopic skills. The results exhibit three levels of performance on the selected variables: experts and intermediates outperformed the inexperienced interns, who in turn outperformed the novices. The results also show that the embedded evaluation variables are complementary and provide realistic results. The inclusion of a new assessment variable alongside haptic, direct and summative feedback is innovative for low-fidelity simulators. Limitations and conditions for implementing the simulator in surgical curricula are discussed.
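The abstract names three assessment variables (time, number of collisions, and affine velocity) but does not give their formulas. The sketch below is one plausible reconstruction in Python: it uses elapsed time, rising edges of a per-sample collision flag, and the planar equi-affine speed (x'y'' - y'x'')^(1/3) as an assumed definition of affine velocity. All of these choices, and the toy trajectory, are illustrative assumptions rather than the authors’ implementation.

```python
# Sketch of the three assessment variables named in the abstract. Definitions
# are assumed, not taken from the paper: planar equi-affine speed is used for
# "affine velocity", and collisions are counted as rising edges of a flag signal.
import numpy as np


def completion_time(t: np.ndarray) -> float:
    return float(t[-1] - t[0])


def collision_count(collision_flags: np.ndarray) -> int:
    """Count rising edges in a boolean per-sample collision signal."""
    flags = collision_flags.astype(int)
    return int(np.sum(np.diff(flags) == 1) + flags[0])


def mean_affine_velocity(x: np.ndarray, y: np.ndarray, t: np.ndarray) -> float:
    """Mean planar equi-affine speed of the instrument-tip path (assumed definition)."""
    dx, dy = np.gradient(x, t), np.gradient(y, t)
    ddx, ddy = np.gradient(dx, t), np.gradient(dy, t)
    det = dx * ddy - dy * ddx
    return float(np.mean(np.cbrt(np.abs(det))))


# Toy trajectory (not simulator data): an elliptical tip path sampled at 100 Hz.
t = np.linspace(0.0, 10.0, 1001)
x, y = 3.0 * np.cos(t), 2.0 * np.sin(t)
collisions = np.zeros_like(t, dtype=bool)
collisions[200:210] = True                      # one simulated collision episode
print(completion_time(t), collision_count(collisions), round(mean_affine_velocity(x, y, t), 3))
```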


Author(s):  
Melissa Fanshawe ◽  
Nicole Delaney ◽  
Alwyn Powell

In higher education learning environments, educators need both to use supportive strategies that motivate students throughout a course and to increase students’ capacity to self-regulate their learning. Using instantaneous tools to deliver formative or summative feedback through digital technology has been shown to lead to higher achievement and retention rates. This chapter shows how digital badges can provide instantaneous feedback that helps students feel a sense of belonging in the online community and develop self-regulation skills. Instantaneous feedback tools can be used to provide teacher presence throughout higher education courses and so increase student engagement, retention, and achievement.
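As a sketch of what instantaneous badge feedback can look like, the snippet below issues badges the moment an activity record satisfies a criterion, without waiting for instructor marking. Badge names, criteria and thresholds are invented for illustration and are not taken from the chapter.

```python
# Minimal sketch of instantaneous badge feedback: a badge is issued as soon as
# a criterion is met. Badge names, criteria and thresholds are invented.
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Callable, Dict, List


@dataclass
class Badge:
    name: str
    criterion: Callable[[Dict[str, float]], bool]   # evaluated on the student's activity record


BADGES = [
    Badge("Early Engager", lambda a: a.get("forum_posts", 0) >= 3),
    Badge("Quiz Champion", lambda a: a.get("quiz_score", 0) >= 80),
    Badge("Self-Regulator", lambda a: a.get("plans_submitted", 0) >= 1),
]


def award_badges(activity: Dict[str, float]) -> List[str]:
    """Return feedback messages for every badge earned by this activity snapshot."""
    now = datetime.now(timezone.utc).isoformat(timespec="seconds")
    return [f"{now}: badge '{b.name}' awarded" for b in BADGES if b.criterion(activity)]


print(award_badges({"forum_posts": 4, "quiz_score": 85}))
```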


Author(s):  
Zineb Djoub

To support students in making effective use of feedback to improve their learning, this chapter provides practical tips and strategies for teachers to stimulate their students’ interest in feedback, help them appreciate its significant role, and get them involved in interpreting, reflecting on and acting upon feedback comments. The author addresses both summative and formative feedback. For summative feedback, the concern is to encourage students to interpret grades/marks, reflect upon them and transform them into plans and actions, using reflective worksheets and other post-exam tasks in class designed by the author. Feedback within self-, peer and group assessment approaches is also addressed in this chapter. Other kinds of reflective worksheets are suggested for reflecting on the student learning process, whether as part of a student portfolio or journal or set separately, alongside the use of technology such as class blogs to enhance such reflection.


2019 ◽  
Vol 54 (4-5) ◽  
pp. 266-274
Author(s):  
Emilee J Delbridge ◽  
Tanya Wilson ◽  
James D McGregor ◽  
Jared S Ankerman

Literature within residency education states that directly observing resident–patient visits, with the goal of providing formative and summative feedback to learners, helps resident skill development. However, limited literature exists on what is most effective to observe and evaluate. Furthermore, learners’ perspectives are not always taken into consideration in the development and implementation of direct observation or video review of resident–patient encounters. This article provides an overview of some of the current literature relevant to family medicine training and describes some of the changes in one residency’s use of recorded encounters. Suggestions are provided on future steps for family medicine residencies to use video review effectively.

