Journal of Assessment in Higher Education
Latest Publications

Total documents: 7 (five years: 7)
H-index: 0 (five years: 0)

Published by: University of Florida George A. Smathers Libraries
ISSN: 2642-0694

2021 · Vol 2 (1) · pp. 20-34
Author(s): C. Dale Carpenter

Student evaluations of teaching occur at most universities and colleges in the United States and are used for a variety of purposes, including course improvement and as data for evaluating instructors. Increasingly, universities manage the collection of student perception data about courses and teaching with commercially available software. This article reports on one university's effort to review its process for collecting and using student assessment of instruction data and to determine how the data would be used. The work of a task force that examined a process in place for ten years, by seeking input from stakeholders, reviewing ten years of collected data, and reviewing the literature, is presented in a case study format, along with the task force's recommendations and a report on subsequent implementation.


2021 · Vol 2 (1) · pp. 54-62
Author(s): Chosang Tendhar

The purposes of this study are to assess the utility of self-appraisals and ratings by program directors (PDs) and to introduce new ways to use self-assessed and rating-scale data. The data for this study were collected from graduates of our school who were enrolled in residency programs around the country. The interns and PDs completed a similar set of questions. The correlation between the ratings of the two groups was .21. The Cronbach's alpha values for the intern and PD surveys were .89 and .97, respectively. The interns consistently rated themselves lower than the PDs rated them. The two groups agreed on the areas of strength and weakness based on their mean ratings and rank-ordering of competencies. This study proposes that the competencies with the lowest mean ratings, those at the bottom of the rank-ordering, be treated as areas deserving special attention. The results of this study provide validity evidence for the utility of self-appraisals and PDs' ratings of interns.
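
The statistics reported above (a between-group correlation and Cronbach's alpha for each survey) are standard agreement and reliability measures. A minimal sketch of how they might be computed is below; the variable names and sample data are hypothetical, not from the study:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)       # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)   # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 1-5 ratings on five competencies (rows = respondents).
rng = np.random.default_rng(0)
intern_ratings = rng.integers(1, 6, size=(30, 5)).astype(float)
pd_ratings = rng.integers(1, 6, size=(30, 5)).astype(float)

print(f"intern survey alpha: {cronbach_alpha(intern_ratings):.2f}")
print(f"PD survey alpha:     {cronbach_alpha(pd_ratings):.2f}")

# Pearson correlation between each intern's mean self-rating and the
# corresponding PD's mean rating.
r = np.corrcoef(intern_ratings.mean(axis=1), pd_ratings.mean(axis=1))[0, 1]
print(f"intern-PD correlation: r = {r:.2f}")
```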


2021 · Vol 2 (1) · pp. 1-19
Author(s): Frederick Burrack, Dorothy Thompson

Programmatic and institutional assessment initiatives have emerged and continuously evolved across higher education institutions through the early part of the twenty-first century. These initiatives have stemmed from a growing emphasis on assessing the quality of learning that occurs throughout a collegiate education. An assessment process in which faculty and staff collect, analyze, and discuss data over time to guide improvement decisions sounds like a reasonable pursuit. Unfortunately, such a process sometimes results in apathy and dissension. Technology provides solutions that can remove the tedium and time consumption from student learning assessment. The purpose of this article is to provide a thorough understanding of the assessment capabilities and automated data-collection processes of Canvas. Examples are provided of ways to extract and disseminate Canvas data for use in decision making. The article includes (a) the structure of Canvas, (b) steps for setting up Canvas to collect student achievement data directly from coursework, sortable by outcomes and associated criteria, (c) strategies for exporting data from Canvas, and (d) ideas for visualizing outcome data.
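
As a hedged illustration of step (c), exporting outcome data: Canvas exposes a REST API whose outcome-results endpoint returns per-student scores on course outcomes. The sketch below assumes a placeholder host, course ID, and API token; it is one possible approach, not necessarily the authors' procedure:

```python
import requests

BASE_URL = "https://canvas.example.edu/api/v1"   # placeholder Canvas host
COURSE_ID = "12345"                              # placeholder course ID
TOKEN = "YOUR_API_TOKEN"                         # generated in account settings

headers = {"Authorization": f"Bearer {TOKEN}"}

# Outcome results for one course; Canvas paginates via the Link header.
url = f"{BASE_URL}/courses/{COURSE_ID}/outcome_results"
params = {"per_page": 100}
results = []
while url:
    resp = requests.get(url, headers=headers, params=params)
    resp.raise_for_status()
    results.extend(resp.json()["outcome_results"])
    url = resp.links.get("next", {}).get("url")  # next page, if any
    params = None  # the `next` URL already carries its query string

# Each result links a student to a learning outcome with a score,
# ready to be written to CSV for the visualization ideas in (d).
for r in results[:5]:
    print(r["links"]["user"], r["links"]["learning_outcome"], r["score"])
```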


2021 · Vol 2 (1) · pp. 35-53
Author(s): Kelly Anne Parkes, Jared Robert Rawlings

In this study, we report what music teacher educators (MTEs, N = 149) in higher education understand about assessment. We include their assessment pedagogy, their levels of assessment pedagogy efficacy (APE) at both the programmatic (unit) level and the personal level (ProAPE and PeAPE, respectively), and the relationship this efficacy has with MTEs' satisfaction with assessment pedagogies within their institutions. This mixed-methods study uses a convergent parallel design, with qualitative inductive coding and quantitative factor analyses, correlational analyses, and non-parametric tests. We determine that MTEs report some misunderstanding of the assessment lexicon; nevertheless, they hold mostly high levels of both personal and programmatic assessment pedagogy efficacy. Differences were observed between MTEs who graduated after 2008 and those who graduated prior to 2008. Findings center on higher education faculty comfort with assessment, with implications for professional development and continued research in the area.
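
A non-parametric comparison like the one mentioned above (MTEs who graduated after 2008 versus before) is commonly run as a Mann-Whitney U test, which suits ordinal efficacy ratings. A minimal sketch with entirely hypothetical scores, not the study's data:

```python
from scipy.stats import mannwhitneyu

# Hypothetical assessment pedagogy efficacy scores for two cohorts.
post_2008 = [4.2, 3.8, 4.5, 4.0, 4.7, 3.9, 4.3]
pre_2008 = [3.5, 3.9, 3.2, 4.1, 3.6, 3.4, 3.8]

# Rank-based test: no normality assumption, appropriate for Likert data.
stat, p = mannwhitneyu(post_2008, pre_2008, alternative="two-sided")
print(f"U = {stat:.1f}, p = {p:.3f}")
```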


2020 · Vol 1 (1) · pp. 50-79
Author(s): Jennifer L. Restrepo, Katherine Perez, Eilyn Sanabria, Suzanne Lebin

Administrators are struggling to understand how best to promote and implement a culture of evidence-based decision making among stakeholders. The research study presented here explored best practices for creating meaningful professional development experiences using both direct and indirect evidence of learning. This article describes the effectiveness of a certificate program designed to educate faculty about assessment, and its impact on faculty learning gains, perceptions, and self-efficacy. The study used a pre-/post-test design, measuring participant knowledge with quizzes for each of the four modules of the certificate and participant perceptions with a survey. The modules covered writing student learning and program outcomes, curriculum mapping, developing assessment methods, creating assessment instruments, collecting data, analyzing and reporting results, and using results for improvement. Certificate completers demonstrated increased knowledge of assessment terminology, procedures, and best practices, as well as improved assessment-related self-efficacy. However, their perceptions of assessment did not change. Data gathered through this study can help inform decisions about needed assessment-related faculty professional development activities.
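
A pre-/post-test design like the one described is typically analyzed by pairing each participant's scores before and after a module. A minimal sketch with hypothetical quiz scores (the abstract does not specify the study's actual statistical tests):

```python
from scipy.stats import ttest_rel

# Hypothetical module quiz scores for the same eight participants.
pre = [55, 60, 48, 70, 62, 58, 65, 50]
post = [72, 78, 60, 85, 75, 70, 80, 66]

# Paired t-test: did the same participants' scores change pre to post?
stat, p = ttest_rel(post, pre)
mean_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)
print(f"mean gain = {mean_gain:.1f} points, t = {stat:.2f}, p = {p:.4f}")
```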


2020 · Vol 1 (1) · pp. 30-49
Author(s): Darryl J Chamberlain, Russell Jeter

The goal of this paper is to propose new methods for generating multiple-choice items that make creating quality assessments faster and more efficient, solving a practical issue that many instructors face. There are currently no systematic, efficient methods available to generate quality distractors (plausible but incorrect options), which are necessary for multiple-choice assessments that accurately assess students' knowledge. We propose two ways to use technology to generate quality multiple-choice assessments: (1) manipulating the mathematical problem to emulate common student misconceptions or errors and (2) disguising options to protect the integrity of multiple-choice tests. By linking options to common student misconceptions and errors, instructors can use assessments as personalized diagnostic tools that target and remediate underlying misconceptions. Moreover, using technology to generate these quality distractors allows assessments to be developed efficiently, in terms of both time and resources. Disguising the generated options has the added benefit of preventing students from working backwards from the options to the solution, thus protecting the integrity of the assessment.
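
As a sketch of method (1), consider solving a linear equation ax + b = c: common errors include adding b instead of subtracting it, or forgetting to divide by a. The item generator below encodes such error patterns as distractors; the specific misconceptions and all names are illustrative assumptions, not the authors' implementation:

```python
import random
from fractions import Fraction

def generate_item(a: int, b: int, c: int):
    """Multiple-choice item for 'solve ax + b = c' whose distractors
    encode common student errors."""
    correct = Fraction(c - b, a)              # x = (c - b) / a
    distractors = {
        Fraction(c + b, a),                   # sign error: added b
        Fraction(c - b, 1),                   # forgot to divide by a
        Fraction(a, c - b) if c != b else Fraction(0),  # inverted division
    }
    distractors.discard(correct)              # never duplicate the key
    options = [correct, *distractors]
    random.shuffle(options)                   # disguise the key's position
    return f"Solve {a}x + {b} = {c}", options, correct

stem, options, key = generate_item(a=3, b=4, c=19)
print(stem)
for label, opt in zip("ABCD", options):
    marker = " (key)" if opt == key else ""
    print(f"  {label}) {opt}{marker}")
```

Because each distractor maps to a specific error, a student's chosen option doubles as diagnostic information about which misconception to address.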


2020 · Vol 1 (1) · pp. 1-29
Author(s): Liz Grauerholz, Patrice Lancey, Kristen Schellhase, Cory Watkins

Integration of major institutional research-based planning and evaluation processes is a mechanism for focusing all university constituents on implementing and evaluating strategic initiatives designed to improve institutional quality and effectiveness. While educational program assessment that fosters evidence-based improvements is strongly infused in the culture of many universities, drawing intentional connections between program assessment, which primarily focuses on student learning outcomes, and institutional strategic planning can be challenging for faculty. This paper highlights the assessment work of three diverse disciplines at a large public research institution that have articulated connections between their programs' student learning and program outcomes and elements of the university strategic plan. Case studies are reported to show how program assessment outcomes and measures can be linked to strategic planning. This paper explores the benefits and challenges of explicitly linking outcomes or measures in program assessment to university planning.

