Online Course Evaluations Response Rates

2013 ◽  
Vol 6 (3) ◽  
pp. 333-338 ◽  
Author(s):  
Faruk Guder ◽  
Mary Malliaris

This paper studies the reasons for low response rates in online course evaluations. Survey data were collected from students to identify factors that might affect participation in the course evaluation process. When course evaluations were opened to the student body, an email announcement was sent to all students, followed by a reminder email a week later. Our study showed that participation rates increased not only when emails were sent but also when faculty used in-class time to emphasize the importance of completing the evaluations.

2016 ◽  
Vol 43 (4) ◽  
pp. 276-284 ◽  
Author(s):  
Eric D. Sundstrom ◽  
Erin E. Hardin ◽  
Matthew J. Shaffer

2017 ◽  
Vol 47 (2) ◽  
pp. 106-120
Author(s):  
Jovan F Groen ◽  
Yves Herry

At the University of Ottawa, one of Ontario's largest universities, course evaluations involve about 6,000 course sections and over 43,000 students every year. The existing paper-based format requires over 1,000,000 sheets of paper, 20,000 envelopes, and the support of dozens of administrative staff members. To examine the impact of a shift to an online system, this study compared the participation rates and evaluation scores of online and paper-based course evaluation systems. Results from a pilot group of 10,417 students registered in 318 courses suggest an average decrease in participation rate of 12–15% when using an online system. No significant differences in evaluation scores were observed. Instructors and students alike reviewed the online system positively; however, they suggested that an in-class period be maintained for the electronic completion of course evaluations.


Author(s):  
Helen Gordon ◽  
Eleanor Stevenson ◽  
Ann Brookhart ◽  
Marilyn H Oermann

Abstract: In most schools of nursing, students rate their satisfaction with courses and teachers at the end of the semester. Low response rates on these evaluations make it difficult to interpret the results. Students were incentivized to complete their course evaluations by adding 1–2 points to one test score in the course in exchange for 85% or higher participation by the total cohort. Ongoing monitoring and communication by faculty during the process were critical to motivating students to complete course evaluations. When the incentive was employed, student participation ranged from a low of 90% to a high of 100%. The added points did not change any of the students' grades. Incentivizing students to complete course evaluations is an effective strategy for boosting response rates without changing final course grades.
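The all-or-nothing threshold rule described in this abstract can be sketched in a few lines. The function name, default values, and example cohorts below are illustrative, not taken from the study:

```python
def bonus_points(completed: int, cohort_size: int,
                 threshold: float = 0.85, bonus: int = 2) -> int:
    """Return the bonus awarded to every student in the cohort.

    The whole cohort earns the bonus only if the fraction of students
    who completed the evaluation meets the participation threshold.
    """
    participation = completed / cohort_size
    return bonus if participation >= threshold else 0

print(bonus_points(92, 100))  # 92% participation -> 2 bonus points
print(bonus_points(80, 100))  # 80% participation -> 0
```

Because the reward is collective rather than individual, each student's completion moves the whole cohort toward the threshold, which is consistent with the study's emphasis on ongoing monitoring and communication during the evaluation window.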


2021 ◽  
pp. 136548022110346
Author(s):  
Shivaun O’Brien ◽  
Gerry McNamara ◽  
Joe O’Hara ◽  
Martin Brown ◽  
Craig Skerritt

School self-evaluation (SSE), or data-based decision making, is now a common feature of mainstream education in an increasing number of jurisdictions. The participation of stakeholders, including students, is promoted internationally as a key feature of effective SSE. Despite this, very little research has been carried out on how education systems might involve students in SSE, and even less has explored how student involvement can move beyond mere tokenism. As in many other jurisdictions, Irish schools are encouraged to include students in SSE. However, the research to date indicates that while students are frequently consulted through surveys, they have little or no involvement in the decisions made as part of the SSE process at a whole-school level.

This case study explores an atypical approach to student engagement in SSE, tested in one Irish post-primary school, in which students participated as co-researchers alongside their teachers in the SSE process. In doing so, student participation in SSE shifted from students as data sources to students as co-researchers. Students became members of the SSE team, responsible for consulting with the wider staff team, the student body, and parents. They were actively involved in the completion of a whole-school self-evaluation report on assessment and the development of a school improvement plan. The study outlines the key stages of the project and how student participation evolved through the process. Interviews conducted with both the teacher and student members of the SSE team illuminate the experiences of the students and staff involved.

The findings indicate that this approach resulted in significant positive outcomes for the school and the individuals involved, though there were also a number of challenges. Student involvement led to greater awareness of, and participation in, the SSE process among the wider staff team. However, it required more resources and time than is usual for an SSE process in Irish schools. The research suggests that this level of participation may require a more systematic and sustained engagement of students in decision making at the classroom level in order to build their capacity to contribute to decision making at the whole-school level on an ongoing basis. This study may have application in jurisdictions aiming to include students in SSE, particularly at a higher level, and it also provides a glimpse into the deliberate planning and structures required if schools are to move beyond an instrumentalist, compliance model of 'student voice' towards a more authentic model of inclusive democracy.


2006 ◽  
Vol 3 (8) ◽  
Author(s):  
Kelly D. Bradley ◽  
James W. Bradley

In higher education, course evaluations receive much attention, with results directly affecting processes such as merit review and tenure/promotion. The accurate presentation and proper use of evaluation results is therefore a critical issue. The typical course evaluation process involves distributing a Likert-type survey to a class, compiling the data, and reporting means and standard deviations (the classical test theory, or CTT, approach). One alternative analytical technique is the Rasch model. A theoretical review of each model and an empirical example using end-of-semester course evaluations from an introductory statistics course taught at a Midwestern community college are presented to demonstrate the step-by-step process of feedback via each model. We contend that the CTT summary does not produce a valid picture of the evaluation data. The survey research community and institutions analyzing similar rating-scale data will benefit from the results of this study, as it provides a sound methodology for analyzing such data. The education community will also benefit by receiving better-informed results.
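As a rough illustration of the CTT summary this abstract critiques (not the study's actual data or code), the conventional report for a single Likert-type item is just its mean and standard deviation:

```python
from statistics import mean, stdev

# Hypothetical responses (1-5 Likert scale) to one course-evaluation item;
# the values are illustrative, not drawn from the study.
responses = [5, 4, 4, 3, 5, 2, 4, 5, 3, 4]

# The classical test theory (CTT) summary: item mean and standard deviation,
# which treats ordinal rating categories as if they were interval data.
item_mean = mean(responses)
item_sd = stdev(responses)

print(f"mean = {item_mean:.2f}, sd = {item_sd:.2f}")  # mean = 3.90, sd = 0.99
```

The Rasch model, by contrast, converts ordinal ratings to a latent interval scale before summarizing, which is the heart of the authors' objection to the plain mean/SD report.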


2015 ◽  
Vol 27 (5) ◽  
pp. 3-13
Author(s):  
Jacqueline E. McLaughlin ◽  
Amy Sloane ◽  
Elizabeth Billings ◽  
Mary T. Roth

Author(s):  
Bridget Khursheed

This chapter examines usability evaluation in the context of the Diploma in Computing via the Internet offered by the University of Oxford Department for Continuing Education and, to some extent, its on-site course partner. This ongoing online course is aimed at adult, non-university (the "real world" of the chapter title) students. The chapter follows the usability evaluation process through the life cycle of course development, delivery, and maintenance, analysing the requirements and actions of each stage and how they were implemented in the course. It also discusses how pedagogical evaluation must be considered as part of this process, alongside the more obvious software considerations, and how this was achieved within the course. Finally, it draws some conclusions concerning the enhancements to course usability offered by the virtual classroom and how this atypical evaluation material can and should be integrated into an overall usability evaluation picture.


2019 ◽  
Vol 7 (4) ◽  
pp. 484-492
Author(s):  
Claire Tilbury ◽  
Claudia S Leichtenberg ◽  
Bart L Kaptein ◽  
Lennard A Koster ◽  
Suzan H M Verdegaal ◽  
...  

Background: Compliance rates with patient-reported outcome measures (PROMs) collected alongside arthroplasty registries vary in the literature. We describe the feasibility of a routinely collected set of PROMs alongside the Dutch Arthroplasty Register.

Methods: The longitudinal Leiden Orthopaedics Outcomes of OsteoArthritis Study is a multicenter (7 hospitals), observational study including patients undergoing total hip or total knee arthroplasty (THA or TKA). A set of PROMs (Short Form-12, EuroQol 5 Dimensions, Hip/Knee injury and Osteoarthritis Outcome Score, and Oxford Hip/Knee Score) was collected preoperatively, at 6, 12, and 24 months, and every 2 years thereafter. Participation rates and response rates were recorded.

Results: Between June 2012 and December 2014, 1796 THA and 1636 TKA patients were invited, of whom 1043 THA patients (58%; mean age 68 years [standard deviation, SD, 10]) and 970 TKA patients (59%; mean age 71 years [SD 9.5]) participated in the study. At 6 months, 35 THA and 38 TKA patients were lost to follow-up; response rates were 90% for THA (898/1000) and 89% for TKA (827/932) participants. At 1 and 2 years, 8 and 18 THA and 17 and 11 TKA patients were lost to follow-up, respectively; the response rates among those eligible were 87% (866/992) and 84% (812/972) for THA and 84% (771/917) and 83% (756/906) for TKA patients, respectively. The 2-year questionnaire was completed by 78.5% of the included THA patients and 77.9% of the included TKA patients.

Conclusions: About 60% of patients undergoing THA or TKA complete PROMs preoperatively, and more than 80% return follow-up PROMs. To increase participation rates, more effort in the initial recruitment of patients is needed.
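The response rates quoted in this abstract are straightforward to verify: each is the number of returned questionnaires divided by the number of participants still eligible at that follow-up. A minimal sketch, using the 6-month figures from the abstract:

```python
def response_rate(responded: int, eligible: int) -> float:
    """Return the response rate as a percentage of eligible participants."""
    return 100 * responded / eligible

# 6-month follow-up figures quoted in the abstract.
print(round(response_rate(898, 1000)))  # THA: 90
print(round(response_rate(827, 932)))   # TKA: 89
```

Note that the denominator shrinks at each wave as patients are lost to follow-up (e.g. 1043 THA participants minus 35 lost gives a 6-month denominator near 1000), which is why the abstract reports rates "among those eligible" rather than against the original cohort.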

