Evaluating Student Performance on Computer-Based versus Handwritten Exams: Evidence from a Field Experiment in the Classroom

2019 · Vol. 52(4) · pp. 757-762
Author(s): Besir Ceka, Andrew J. O’Geen

The use of course-management software such as Blackboard, Moodle, and Canvas has become ubiquitous at all levels of education in the United States. A potentially useful feature of these products is the ability for instructors to administer assessments, including quizzes and tests, that are flexible, easy to customize, and quick and efficient to grade. Although computer-based assessments offer clear advantages, instructors might be concerned about their effect on student performance. This article evaluates whether student performance differs between handwritten and computer-based exams through a randomized field experiment conducted in a research methods course. Overall, our findings suggest a significant improvement in student performance on computer-based exams, driven primarily by the relative ease of producing thorough responses on the computer versus by hand.
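To make the design concrete, here is a minimal sketch (in Python, on simulated data) of the kind of analysis a randomized comparison of exam formats implies: students are randomly assigned to a format, and mean scores are compared with a two-sample t-test. The group sizes and score distributions below are hypothetical, not the authors' data or code.

```python
# Hypothetical sketch: difference-in-means analysis for a randomized
# comparison of computer-based vs. handwritten exam scores.
# Data are simulated; this is not the authors' code or data.
import numpy as np
from scipy import stats

rng = np.random.default_rng(seed=0)

# Simulate exam scores (0-100) for two randomly assigned groups.
handwritten = rng.normal(loc=78, scale=10, size=60).clip(0, 100)
computer = rng.normal(loc=82, scale=10, size=60).clip(0, 100)

# Welch's two-sample t-test for a difference in mean performance.
t_stat, p_value = stats.ttest_ind(computer, handwritten, equal_var=False)

print(f"mean (computer):    {computer.mean():.1f}")
print(f"mean (handwritten): {handwritten.mean():.1f}")
print(f"Welch t = {t_stat:.2f}, p = {p_value:.3f}")
```

In an actual classroom experiment, a regression with student covariates would typically accompany the raw comparison; the t-test above only conveys the core logic of the design.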

2019 · Vol. 43(3-4) · pp. 152-188
Author(s): Onur Altindag, Theodore J. Joyce, Julie A. Reeder

Between July 2005 and July 2007, the Oregon Special Supplemental Nutrition Program for Women, Infants, and Children (WIC) conducted the largest randomized field experiment (RFE) ever undertaken in the United States to assess the effectiveness of a low-cost peer-counseling intervention to promote exclusive breastfeeding. We undertook a within-study comparison of the intervention using unique administrative data from July 2005 to July 2010. We found no difference between the experimental and nonexperimental estimates, but correspondence could not be established under more stringent criteria. We show that tests for nonconsent bias in the benchmark RFE might provide an important signal of confounding in the nonexperimental estimates.
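For illustration only, the following sketch shows the logic of a within-study comparison on simulated data: an experimental benchmark (difference in means under random assignment) is set against a regression-adjusted nonexperimental estimate of the same effect. The variables and effect sizes are invented; this is not the authors' analysis.

```python
# Hypothetical sketch of a within-study comparison: benchmark an
# experimental estimate against a regression-adjusted nonexperimental
# estimate on simulated data. Not the authors' code or data.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(seed=1)
n = 1000

# A baseline covariate that can confound the nonexperimental comparison.
x = rng.normal(size=n)

# Experimental arm: treatment assigned at random.
treat_exp = rng.integers(0, 2, size=n)
y_exp = 0.5 * treat_exp + 0.3 * x + rng.normal(size=n)

# Nonexperimental arm: treatment take-up correlated with the covariate.
treat_obs = (x + rng.normal(size=n) > 0).astype(int)
y_obs = 0.5 * treat_obs + 0.3 * x + rng.normal(size=n)

# Experimental benchmark: simple difference in means.
exp_effect = y_exp[treat_exp == 1].mean() - y_exp[treat_exp == 0].mean()

# Nonexperimental estimate: OLS adjusting for the observed covariate.
X = sm.add_constant(np.column_stack([treat_obs, x]))
obs_effect = sm.OLS(y_obs, X).fit().params[1]

print(f"experimental estimate:    {exp_effect:.3f}")
print(f"nonexperimental estimate: {obs_effect:.3f}")
```

When the adjustment set captures the selection mechanism, as in this toy setup, the two estimates coincide; the study's point is that failed correspondence (or nonconsent bias in the benchmark) flags residual confounding.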


2012 · Vol. 92(3) · pp. 416-428
Author(s): Kathryn E. Roach, Jody S. Frost, Nora J. Francis, Scott Giles, Jon T. Nordrum, ...

Background: Based on changes in core physical therapy documents and problems with the earlier version, the Physical Therapist Clinical Performance Instrument (PT CPI): Version 1997 was revised to create the PT CPI: Version 2006.
Objective: The purpose of this study was to validate the PT CPI: Version 2006 for use with physical therapist students as a measure of clinical performance.
Design: This was a combined cross-sectional and prospective study.
Methods: A convenience sample of physical therapist students from the United States and Canada participated in this study. The PT CPI: Version 2006 was used to collect CPI item-level data from the clinical instructor about student performance at midterm and final evaluation periods in the clinical internship. Midterm evaluation data were collected from 196 students, and final evaluation data were collected from 171 students. The students who participated in the study had a mean age of 24.8 years (SD=2.3, range=21-41). Sixty-seven percent of the participants were from programs in the United States, and 33% were from Canada.
Results: The PT CPI: Version 2006 demonstrated good internal consistency, and factor analysis with varimax rotation produced a 3-factor solution explaining 94% of the variance. Construct validity was supported by differences in CPI item scores between students on early compared with final clinical experiences. Validity also was supported by significant score changes from midterm to final evaluations for students on both early and final internships, and by fair to moderate correlations between prior clinical experience and remaining course work.
Limitations: This study did not examine rater reliability.
Conclusion: The results support the PT CPI: Version 2006 as a valid measure of physical therapist student clinical performance.
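As a hedged illustration of the factor-analytic method the Results describe, the sketch below fits a 3-factor model with varimax rotation to simulated item-level scores using scikit-learn (which supports rotation="varimax" from version 0.24). The item count, loadings, and noise level are hypothetical, not the study's data.

```python
# Hypothetical sketch: exploratory factor analysis with varimax rotation,
# the technique the study reports for CPI item-level scores.
# Data are simulated; this is not the study's data or code.
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(seed=2)

# Simulate 196 students x 18 instrument items driven by 3 latent factors.
n_students, n_items, n_factors = 196, 18, 3
loadings = rng.normal(size=(n_factors, n_items))
factor_scores = rng.normal(size=(n_students, n_factors))
items = factor_scores @ loadings + 0.3 * rng.normal(size=(n_students, n_items))

# Fit a 3-factor model with varimax rotation (scikit-learn >= 0.24).
fa = FactorAnalysis(n_components=n_factors, rotation="varimax")
fa.fit(items)

# Rough share of modeled variance attributed to the common factors:
# total squared loadings (communalities) vs. item-specific noise variance.
communality = (fa.components_ ** 2).sum()
explained = communality / (communality + fa.noise_variance_.sum())
print(f"approx. variance explained by the 3 factors: {explained:.0%}")
```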


2022 · 096100062110696
Author(s): Vinit Kumar, Brady Lund

This study compares attributes (authors, journals, populations, theories, methods) of information-seeking behavior studies based in the United States and India, drawing on a search of articles published from 2011 to 2020 in relevant information science databases. The findings indicate major differences in information behavior research between the two countries. Information behavior research in the United States tends to focus more on health- and medicine-related research populations, make greater use of information behavior theories, and employ a variety of quantitative and qualitative research methods (as well as mixed methods). Information behavior research in India tends to focus more on general populations, use less theory, and rely heavily on quantitative research methods, particularly questionnaires (88% of studies). These findings suggest a healthy and intellectually diverse information behavior research area in the United States and ample room for growth of the research area within India.
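As an illustrative sketch of the kind of tabulation such a comparison rests on, the snippet below computes, within each country, the percentage of coded studies using each research method. The handful of records is invented for demonstration and is not the authors' dataset.

```python
# Hypothetical sketch: tabulating coded study attributes by country,
# the kind of comparison the bibliometric analysis reports.
# The records below are invented for illustration.
import pandas as pd

studies = pd.DataFrame({
    "country": ["US", "US", "US", "India", "India", "India"],
    "method": ["mixed", "qualitative", "quantitative",
               "quantitative", "quantitative", "quantitative"],
})

# Percentage of studies using each method, within each country.
share = (pd.crosstab(studies["country"], studies["method"],
                     normalize="index")
         .mul(100).round(1))
print(share)
```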


Author(s):  
Lorelei R. Coddington

Recent shifts in instructional standards in the United States call for a balance between conceptual and procedural teaching and learning. With this shift, emphasis has also been placed on ensuring that teachers have the knowledge and tools to support students and improve their performance. Because many students struggle to learn mathematics, teachers need practical ways to support them while also building their conceptual knowledge. Research has highlighted many promising approaches and strategies that can differentiate instruction and provide needed support. This chapter highlights various examples from the research and explains how these approaches and strategies can be used to maximize student learning in the inclusive classroom.

