Using a personal response system as an in-class assessment tool in the teaching of basic college chemistry

Author(s):  
Tzy-Ling Chen ◽  
Yu-Li Lan

Since the introduction of personal response systems (PRS), also referred to as "clickers", nearly a decade ago, they have been widely adopted on college campuses and are particularly popular with lecturers of large classes. Available evidence suggests that PRS offers a promising avenue for future developments in pedagogy, although findings on whether its effective use improves or enhances student learning remain inconclusive. This study examines the degree to which students perceive that using PRS in class as an assessment tool affects their understanding of course content, engagement in classroom learning, and test preparation. Multiple sources of student-performance evaluation data were used to explore correlations between student perceptions of PRS and their actual learning outcomes. This paper presents the learning experiences of 151 undergraduate students taking basic chemistry classes that incorporated PRS as an in-class assessment tool at National Chung Hsing University in Taiwan. While the research revealed positive student-perceived benefits and effectiveness of PRS use, it also indicated the need for further studies to determine what specific contribution PRS can make to particular learning outcomes of a large chemistry class in higher education.
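As an illustration only (not the study's actual analysis), a correlation between survey-based perception scores and a performance measure could be computed along these lines; the data file and column names are hypothetical:

```python
# Illustrative sketch only, not the authors' analysis.
# Assumed CSV with one row per student and hypothetical columns
# "perception_score" (survey composite) and "exam_score".
import pandas as pd
from scipy import stats

df = pd.read_csv("prs_survey_and_grades.csv")

r, p = stats.pearsonr(df["perception_score"], df["exam_score"])
print(f"Pearson r = {r:.2f}, p = {p:.3f}")
```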

2019 ◽  
Vol 2 (1) ◽  
pp. 109-119
Author(s):  
Corinne M Gist ◽  
Natalie Andzik ◽  
Elle E Smith ◽  
Menglin Xu ◽  
Nancy A Neef

The use of competitive games to increase classroom engagement has become common practice among many teachers. However, it is unclear whether using games as an assessment tool is a viable way to increase student performance. This study examined the effects of administering quizzes through a game-based system, Kahoot!, versus privately on an electronic device. The quiz scores of 56 undergraduate students, enrolled in one of two special education courses, were evaluated. A linear regression was used to compare student scores across the two conditions, as well as performance over the course of a 15-week semester. No significant difference in quiz scores was found between the two conditions, and quiz scores in both conditions improved similarly over time. Sixty-eight percent of the students reported preferring to take the quiz privately on an electronic device rather than on Kahoot!. Limitations and recommendations for practitioners are discussed.
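A minimal sketch of the kind of linear regression described above, comparing quiz scores across the two conditions over the semester; the data file, column names, and model specification are assumptions rather than the authors' code:

```python
# Illustrative sketch only; "quiz_scores.csv" and its columns
# ("score", "condition", "week") are assumed, not the authors' dataset.
import pandas as pd
import statsmodels.formula.api as smf

quizzes = pd.read_csv("quiz_scores.csv")

# Linear model: quiz score as a function of delivery condition and week of semester.
model = smf.ols("score ~ C(condition) + week", data=quizzes).fit()
print(model.summary())  # inspect the condition coefficient and its p-value
```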


10.28945/2794 ◽  
2004 ◽  
Author(s):  
Glen Van Der Vyver ◽  
Michael Lane

The emergence of the Internet has made many institutions involved in the delivery of distance education programs re-evaluate the course delivery framework. A variety of models and techniques co-exist in an often uneasy alliance at many such institutions. These range from the traditional distance learning model, which remains paper-based, to the purely online model. Recently, hybrid models have emerged which apparently attempt to forge elements taken from several models into a unified whole. Many of these hybrid models seek to eliminate paper-based materials from the tuition process. While many arguments are put forward about the efficacy of purely electronic delivery mechanisms, cost containment is often the driving motivation. This study explores student perceptions of the various delivery mechanisms for distance learning materials. In particular, it seeks to determine what value students place on paper-based delivery mechanisms. The study surveys a group of undergraduate students and a group of graduate students enrolled in the Faculty of Business at a large regional Australian university.


2019 ◽  
Vol 97 (Supplement_3) ◽  
pp. 324-325
Author(s):  
Kirstin M Burnett ◽  
Leslie Frenzel ◽  
Wesley S Ramsey ◽  
Kathrin Dunlap

Abstract The consistency of instruction between various sections of introductory courses is a concern in higher education, along with properly preparing students to enter careers in industry. The study was conducted at Texas A&M University, using an introductory course, General Animal Science, within the Department of Animal Science. This course was chosen because it uses specific animal science industry-related terminology within the course content in support of learning outcomes. The study employed a quantitative nonexperimental research method and was conducted over a single semester in 2018. General Animal Science is a large-scale course that contains multiple sections, and this study evaluated assessments created by the individual faculty members who instructed two different sections, Section A and Section B. These sections were selected because they were composed of both animal science majors and non-majors. Section A had a significantly higher (P < 0.001) proportion of majors to non-majors than Section B. Assessment questions were collected from all examinations and quizzes distributed throughout the semester and were compiled into a single document for coding. The specific industry-related terms used for coding were chosen from the literature to provide a benchmark for a potential relationship between student performance on questions containing industry-related terminology and performance on those that do not. Comparing the use of specific industry-coded terminology in assessment questions yielded no significant difference (P < 0.05) between the two instructors or sections. These findings demonstrate consistent use of benchmarked industry-related terminology in assessment questions across multiple sections, irrespective of individual instructor or student major. This provides a necessary foundation for future analysis of student performance.
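For illustration, coding assessment questions for benchmarked industry terms and comparing their use between two sections could look roughly like the sketch below; the term list and example questions are hypothetical and do not reflect the study's actual benchmark or assessments:

```python
# Hypothetical coding sketch; terms and questions are invented for the example.
from scipy.stats import chi2_contingency

industry_terms = {"carcass", "gestation", "ruminant"}  # illustrative terms only

def count_coded(questions):
    """Return [coded, uncoded] counts of questions containing any benchmark term."""
    coded = sum(any(term in q.lower() for term in industry_terms) for q in questions)
    return [coded, len(questions) - coded]

section_a = ["Define ruminant digestion.", "What is the average gestation length in swine?"]
section_b = ["Describe carcass grading.", "List the stages of mitosis."]

# 2x2 contingency table: rows = sections, columns = coded vs. uncoded questions.
table = [count_coded(section_a), count_coded(section_b)]
chi2, p, dof, expected = chi2_contingency(table)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")
```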


2014 ◽  
Vol 22 (3) ◽  
pp. 212-225 ◽  
Author(s):  
Valerie Priscilla Goby ◽  
Catherine Nickerson

Purpose – This paper aims to focus on the successful efforts made at a university business school in the Gulf region to develop an assessment tool to evaluate the communication skills of undergraduate students as part of satisfying the Association to Advance Collegiate Schools of Business (AACSB) accreditation requirements. We do not consider the validity of establishing learning outcomes or meeting these according to AACSB criteria. Rather, we address ourselves solely to the design of a testing instrument that can measure the degree of student learning within the parameters of university-established learning outcomes. Design/methodology/approach – The testing of communication skills, as opposed to language, is notoriously complex, and we describe our identification of the constituent items that make up the corpus of knowledge that business students need to attain. We discuss our development of a testing instrument which reflects the learning process of knowledge, comprehension and application. Findings – The resulting instrument acted as a valid indicator of the effectiveness of teaching and learning, as well as serving as a component of accreditation requirements. Originality/value – The challenge of obtaining accreditation, supported by appropriate assessment procedures, is now a high priority for more and more universities in emerging, as well as in developed, economies. For business schools, the accreditation provided by AACSB remains perhaps the most sought after global quality assurance program, and our work illustrates how the required plotting and assessment of learning objectives can be accomplished.


2012 ◽  
Vol 7 (4) ◽  
pp. 152-156 ◽  
Author(s):  
Jatin P. Ambegaonkar ◽  
Shane Caswell ◽  
Amanda Caswell

Context: Approved Clinical Instructors (ACIs) are integral to athletic training students' professional development. ACIs evaluate student clinical performance using assessment tools provided by educational programs. How ACI ratings of a student's clinical performance relate to their clinical grade remains unclear. Objective: To examine relationships between ACI evaluations of student clinical performance using an athletic training-specific inventory (Athletic Training Clinical Performance Inventory; ATCPI) and the student's clinical grade (CG) over a clinical experience. Design: Correlational. Setting: Large metropolitan university. Participants: 48 ACIs (M = 20; F = 28; certified for 7.5 ± 3.2 yrs; ACIs for 3.2 ± 1.5 yrs) evaluating 62 undergraduate students (M = 20; F = 42). Interventions: ACIs completed the ATCPI twice (mid-semester and end-of-semester) during their student's clinical experience. The ATCPI is a 21-item instrument: items 1–20 assess the student's clinical performance based on specific constructs (Specific) and item 21 is a rating of the student's overall clinical performance (Overall). ACIs also assigned students a clinical grade (CG). Pearson product-moment correlations examined relationships between Specific, Overall, and CG, with separate paired t-tests examining differences (p < .05). Main Outcome Measures: The ATCPI used a 4-point Likert-type scale anchored by 1 (Rarely) and 4 (Consistently), and CG (A = 4, B = 3, C = 2, D = 1, F = 0). Results: Two hundred sixty-six ATCPI instruments were completed over 4 academic years. The ATCPI demonstrated acceptable reliability (Cronbach's alpha = .88). All three measures were positively correlated (Specific and Overall, r(264) = .65, P < .001; Specific and CG, r(264) = .63, P < .001; Overall and CG, r(264) = .55, P < .001). No differences existed between Specific (3.5 ± 0.4) and CG (3.5 ± 0.7; t = .60, P = .55). However, Overall (3.6 ± 0.7) was significantly higher than both Specific (t = −3.45, P < .001) and CG (t = 2.05, P = .04). Conclusions: ACIs reliably assessed students' specific clinical performance and provided a relatively accurate grade. However, since the overall scores were higher than specific item scores, ACIs overestimated students' overall clinical performance. Additional research is necessary to examine the ATCPI as an assessment tool across multiple institutions and to determine how other variables affect ACI assessments of student performance.
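A hedged sketch of the reported analyses (internal consistency, Pearson correlations, and a paired t-test) on hypothetical ATCPI-style data; the file and column names are illustrative only and not the authors' dataset:

```python
# Sketch under assumptions: hypothetical ratings with columns item01..item20
# (specific constructs), "overall" (item 21), and "clinical_grade".
import pandas as pd
from scipy import stats

ratings = pd.read_csv("atcpi_ratings.csv")
items = ratings[[f"item{i:02d}" for i in range(1, 21)]]

# Cronbach's alpha: k/(k-1) * (1 - sum of item variances / variance of total score)
k = items.shape[1]
alpha = (k / (k - 1)) * (1 - items.var(ddof=1).sum() / items.sum(axis=1).var(ddof=1))

specific = items.mean(axis=1)  # mean of the 20 specific items per evaluation
r_so, p_so = stats.pearsonr(specific, ratings["overall"])
r_sc, p_sc = stats.pearsonr(specific, ratings["clinical_grade"])
t, p_t = stats.ttest_rel(ratings["overall"], specific)  # paired comparison

print(f"alpha = {alpha:.2f}")
print(f"Specific vs Overall: r = {r_so:.2f} (p = {p_so:.3f})")
print(f"Specific vs CG:      r = {r_sc:.2f} (p = {p_sc:.3f})")
print(f"Overall vs Specific: paired t = {t:.2f} (p = {p_t:.3f})")
```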


2021 ◽  
Vol 45 (1) ◽  
pp. 10-17
Author(s):  
Patricia A. Halpin ◽  
Jeremiah Johnson ◽  
Emilio Badoer

Engaging undergraduate students in large classes is a constant challenge for many lecturers, as student participation and engagement can be limited. This is a concern since there is a positive correlation between increased engagement and student success. The lack of student feedback on content delivery prevents lecturers from identifying topics that would benefit students if reviewed. Implementing novel methods to engage students in course content, and to create ways by which they can inform the lecturer of difficult concepts, is needed to increase student success. In the present study, we investigated the use of Twitter as a scalable approach to enhance engagement with course content and peer-to-peer interaction in a large course. In this pilot study, students were instructed to tweet the difficult concepts identified from content delivered by videos. A software program automatically collected and parsed the tweets to extract summary statistics on the most common difficult concepts, and the lecturer used the information to prepare face-to-face (F2F) lectorial sessions. The key findings of the study were 1) the uptake of Twitter (i.e., registration on the platform) was similar to the proportion of students who participated in F2F lectorials, 2) students reviewed content soon after delivery to tweet difficult concepts to the lecturer, 3) Twitter increased engagement with lecturers, 4) the difficult concepts were similar to previous years, yet the automated gathering of Twitter data was more efficient and time-saving for the lecturer, and 5) students found the lectorial review sessions very valuable.
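As a rough illustration (not the authors' software), tallying the most common difficult concepts from already-collected tweets might look like the sketch below; the hashtag, tweets, and concept list are invented for the example:

```python
# Invented example; tweets stand in for a prior collection step and the
# concept list is a hypothetical lecturer-defined vocabulary.
from collections import Counter
import re

tweets = [
    "#PHYS101 confused about membrane potential",
    "#PHYS101 membrane potential and action potentials are hard",
    "#PHYS101 struggling with renal clearance",
]

concepts = ["membrane potential", "action potential", "renal clearance"]

counts = Counter()
for tweet in tweets:
    text = re.sub(r"#\w+", "", tweet).lower()  # strip hashtags before matching
    for concept in concepts:
        if concept in text:
            counts[concept] += 1

# Summary the lecturer could use to plan the face-to-face lectorial session.
print(counts.most_common())
```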


2019 ◽  
Vol 32 (2) ◽  
pp. 166-180
Author(s):  
Rania Mousa

Purpose The purpose of this paper is to examine the learning outcomes of students enrolled in an introductory financial accounting course through their experience of playing the Monopoly™ board game and map those outcomes to a selected number of individual competency types addressed in the AICPA Core Competency Framework. Design/methodology/approach A longitudinal qualitative analysis was performed to analyze self-reported learning outcomes collected from undergraduate students enrolled in an introductory financial accounting course. Content analysis and participant observations were utilized to inform the analysis process and derive research findings. Findings The findings reveal a connection between the learning outcomes and a selected number of individual competency types addressed in the AICPA Framework. The findings also reemphasize the importance of utilizing some of the basic functions and features of Excel to augment foundational financial accounting knowledge and enhance professional skills. Originality/value Although the use of board games in accounting education has been examined in prior research, this paper provides empirical evidence of the alignment of self-reported learning outcomes of a popular board game with a notable profession-driven framework. In addition to bridging a potential gap between accounting education and the profession, this study informs academics of the implications of engaging students in a class activity that applies basic financial accounting and computer knowledge.


Author(s):  
Kristian J Sund

This chapter discusses the possible detrimental effects of low attendance on the achievement of important learning outcomes in terms of “soft” employability-enhancing skills among undergraduate students in business schools, and explores how the use of learning technologies may contribute to high or low class attendance levels. The chapter describes the exploratory results of a survey carried out among final year bachelor students attending a strategic management course, the findings of which suggest that a significant number of students view virtual learning environments as a substitute for lectures. I find only very limited evidence that such students actually attend classes any less than other students do. Furthermore, I find that reasons for non-attendance are similar to those reported in existing literature.


Author(s):  
Khadernawaz Khan ◽  
Umamaheswara Rao Bontha

Writing is a deciding factor for academic success among tertiary-level students. Developing the writing skill of learners at the foundation level plays a significant role in their academic career. In teaching writing, a debatable issue has been whether to use a process or product approach. While some researchers contend that a process approach helps develop writing among ESL/EFL learners, others argue that the product is more important than the process. However, process without product would be aimless and a product without a process would be hollow. This chapter deals with the writing module taught across the three levels of the Foundation Program at Oman's Dhofar University. It focuses on how writing course content, learning outcomes, writing portfolios, and assessment procedures are addressed and how the process and product approaches are blended to achieve learning outcomes. Teacher and student perceptions on how this approach helps are analyzed and discussed.


2016 ◽  
Vol 58 (1) ◽  
pp. 82-93 ◽  
Author(s):  
Jonathan M Scott ◽  
Andy Penaluna ◽  
John L Thompson

Purpose – The purpose of this paper is to conduct a critical appraisal of how experiential approaches can more effectively enhance the achievement of desired learning outcomes in entrepreneurship education. In particular, the authors critique whether actual learning outcomes can be profitably used to measure effectiveness; and consider how student performance can be evaluated through the twin lenses of implementation or innovation. Design/methodology/approach – The authors undertook a review of both traditional and experiential approaches to entrepreneurship education. In addition to comparing these approaches, the authors critiqued a number of “taken for granted” assumptions regarding the effectiveness of experiential approaches to entrepreneurship education and made recommendations. Findings – Although there is a large body of research on experiential approaches towards entrepreneurship education, the authors know little about how these approaches contribute towards the effective achievement of desired learning outcomes. Whilst many authors claim that such approaches are effective, such assertions are not supported by sufficient robust evidence. Hence the authors need to establish more effective student performance evaluation metrics. In particular: first, whether actual learning outcomes are appropriate measures of effectiveness; and second, the authors should evaluate student performance through the lenses of the two “Is” – implementation or innovation. Practical implications – Whether actual learning outcomes are used as a measure of effectiveness at all needs to be critiqued further. Implementation involves doing things that are determined by others and matching against their expectations, whereas innovation comprises producing multiple and varied solutions that respond to change and often surprise. Originality/value – Through revisiting the discussions on the art and the science of entrepreneurship education, this paper represents an initial critical attempt – as part of an ongoing study – to fill a gap in entrepreneurship education research. The paper, therefore, has significant value for students, entrepreneurship educators and policy-makers.

