PSIV-B-32 Late-Breaking: Consistency of industry related terminology utilized in assessment questions across instructors of an introductory animal science course

2019 ◽  
Vol 97 (Supplement_3) ◽  
pp. 324-325
Author(s):  
Kirstin M Burnett ◽  
Leslie Frenzel ◽  
Wesley S Ramsey ◽  
Kathrin Dunlap

Abstract The consistency of instruction across sections of introductory courses is a concern in higher education, as is properly preparing students to enter careers in industry. The study was conducted at Texas A&M University using an introductory course, General Animal Science, within the Department of Animal Science. This course was chosen because specific animal science industry-related terminology is used throughout the course content in support of learning outcomes. The study used a quantitative, nonexperimental research method and was conducted over a single semester in 2018. General Animal Science is a large-scale course with multiple sections, and this study evaluated assessments created by the individual faculty members who instructed two different sections, Section A and Section B. These sections were selected because both were composed of animal science majors and non-majors; Section A had a significantly higher (P < 0.001) proportion of majors than Section B. Assessment questions were collected from all examinations and quizzes administered throughout the semester and compiled into a single document for coding. The industry-related terms used for coding were chosen from the literature to provide a benchmark for a potential relationship between student performance on questions containing industry-related terminology and performance on questions that do not. Comparing the use of the coded industry terminology in assessment questions yielded no significant difference at the P < 0.05 level between the two instructors or sections. These findings demonstrate consistent use of benchmarked industry-related terminology in assessment questions across multiple sections, irrespective of individual instructor or student major, and provide a necessary foundation for future analysis of student performance.
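The abstract does not name the specific test used to compare terminology use between sections; one plausible way to run such a comparison is a chi-square test on counts of questions with and without benchmarked terms. The counts below are invented purely for illustration.

```python
# Minimal sketch: compare how often two course sections use industry-coded terms
# in assessment questions. Counts are illustrative, not the study's data.
from scipy.stats import chi2_contingency

#           with term, without term
counts = [[42, 58],   # Section A (illustrative counts)
          [39, 61]]   # Section B (illustrative counts)

chi2, p, dof, expected = chi2_contingency(counts)
print(f"chi2 = {chi2:.2f}, p = {p:.3f}")  # a large p suggests consistent usage across sections
```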

RENOTE ◽  
2018 ◽  
Vol 15 (2) ◽  
Author(s):  
Gabriela Trindade Perry ◽  
Marcelo Leandro Eichler

This paper presents the results of a comparison study of the learning outcomes of two versions of the same educational game about periodic properties of the chemical elements, called Xenubi. Our aim was to test Clark's "method-not-media" hypothesis, which predicts there will be no difference in student performance if the method of instruction is the same, regardless of media type. The study took place in a public technical school in southern Brazil and looked for differences in pre-test and post-test scores between groups that played Xenubi as printed cards or on the computer. The results point to no significant difference between the two groups, although there was a difference between the pre-test and the post-test, indicating that the game was effective. This is a confirmation of Clark's hypothesis.
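The design described above (two media, pre/post scores) could be analyzed with between-group and within-group t-tests. The sketch below assumes a hypothetical data file and column names; it is not the authors' actual analysis script.

```python
# Minimal sketch of the two-group pre/post comparison, under assumed column names.
import pandas as pd
from scipy import stats

df = pd.read_csv("xenubi_scores.csv")  # assumed columns: group ("cards"/"computer"), pre, post
cards = df[df["group"] == "cards"]
computer = df[df["group"] == "computer"]

# Between-group test on learning gains (no difference would support the method-not-media hypothesis).
print(stats.ttest_ind(cards["post"] - cards["pre"], computer["post"] - computer["pre"]))

# Within-group tests showing whether the game produced pre-to-post improvement in each medium.
print(stats.ttest_rel(cards["post"], cards["pre"]))
print(stats.ttest_rel(computer["post"], computer["pre"]))
```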


2017 ◽  
Vol 41 (1) ◽  
pp. 110-119 ◽  
Author(s):  
Jonathan D. Kibble

The goal of this review is to highlight key elements underpinning excellent high-stakes summative assessment. This guide is primarily aimed at faculty members with the responsibility of assigning student grades and is intended to be a practical tool to help throughout the process of planning, developing, and deploying tests as well as monitoring their effectiveness. After a brief overview of the criteria for high-quality assessment, the guide runs through best practices for aligning assessment with learning outcomes and compares common testing modalities. Next, the guide discusses the kind of validity evidence needed to support defensible grading of student performance. This review concentrates on how to measure the outcome of student learning; other reviews in this series will expand on the related concepts of formative testing and how to leverage testing for learning.


Author(s):  
Tzy-Ling Chen ◽  
Yu-Li Lan

Since the introduction of personal response systems (PRS, also referred to as "clickers") nearly a decade ago, their use has been extensively adopted on college campuses, and they are particularly popular with lecturers of large classes. Available evidence supports PRS as a promising avenue for future developments in pedagogy, although findings on the advantages of its effective use for improving or enhancing student learning remain inconclusive. This study examines the degree to which students perceive that using PRS in class as an assessment tool affects their understanding of course content, engagement in classroom learning, and test preparation. Multiple sources of student-performance evaluation data were used to explore correlations between student perceptions of PRS and their actual learning outcomes. This paper presents the learning experiences of 151 undergraduate students taking basic chemistry classes that incorporated PRS as an in-class assessment tool at National Chung Hsing University in Taiwan. While the research revealed positive student-perceived benefits and effectiveness of PRS use, it also indicated the need for further studies to determine what specific contribution PRS can make to particular learning outcomes in a large chemistry class in higher education.
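The perception-performance correlation mentioned above could be computed, in the simplest case, as a Pearson correlation between a survey score and a course outcome. The file and column names below are illustrative assumptions, not the study's variables.

```python
# Minimal sketch: correlate PRS perception scores with a performance measure.
import pandas as pd
from scipy.stats import pearsonr

df = pd.read_csv("prs_survey_and_grades.csv")  # assumed columns: perception_score, final_exam_score

r, p = pearsonr(df["perception_score"], df["final_exam_score"])
print(f"r = {r:.2f}, p = {p:.3f}")  # strength of the perception-performance association
```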


2015 ◽  
Vol 19 (2) ◽  
Author(s):  
Joseph Cavanaugh ◽  
Stephen J Jacquemin

Comparisons of grade-based learning outcomes between online and face-to-face course formats have become essential because the number of online courses, online programs, and institutional student enrollments has grown rapidly in recent years. Overall, online education is largely viewed by education professionals as being equivalent to instruction conducted face-to-face. However, the research investigating student performance in online versus face-to-face courses has been mixed and is often hampered by small samples or a lack of demographic and academic controls. This study utilizes a dataset that includes over 5,000 courses taught by over 100 faculty members over a period of ten academic terms at a large, public, four-year university. The unique scale of the dataset facilitates macro-level understanding of course formats at an institutional level. Multiple regression was used to account for student demographic and academic corollaries—factors known to bias course format selection and grade-based outcomes—to generate a robust test for differences in grade-based learning outcomes that could be attributed to course format. The final model identified a statistical difference between course formats that translated into a negligible difference of less than 0.07 GPA points on a 4-point scale. The primary influence on individual course grades was student GPA. Interestingly, a model-based interaction between course type and student GPA indicated a cumulative effect whereby students with higher GPAs perform even better in online courses (or, alternatively, struggling students perform worse when taking courses in an online format compared to a face-to-face format). These results indicate that, given the large-scale, university-level, multi-course framework of the current study, there is little to no difference in grade-based student performance between instructional modes for courses where both modes are applicable.
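The abstract describes a regression with demographic controls and a course-format by GPA interaction. A minimal sketch of how such a model might be specified with statsmodels is below; the data file, column names, and control variables are illustrative assumptions, not the authors' actual dataset.

```python
# Sketch of a grade model with a course-format x GPA interaction (assumed variable names).
import pandas as pd
import statsmodels.formula.api as smf

# Assumed layout: one row per student-course record with columns
#   grade_points (0-4), online (0/1), gpa (prior cumulative GPA), age, credits_attempted.
df = pd.read_csv("course_records.csv")

model = smf.ols(
    "grade_points ~ online * gpa + age + credits_attempted",
    data=df,
).fit()

print(model.summary())
# The 'online:gpa' coefficient captures the reported interaction: whether the
# online-format effect grows or shrinks with a student's prior GPA.
```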


1998 ◽  
Vol 25 (2) ◽  
pp. 89-96 ◽  
Author(s):  
Benjamin Miller ◽  
Barbara F. Gentile

A nationwide survey of introductory psychology instructors showed that introductory courses are remarkably uniform in structure and content, with few differences across instructors and institutions. Instructors' most important goal was to “engage students in scientific inquiry about psychological processes,” but instructors said that what the course does best is to survey the field and the different approaches to it. A survey of introductory students showed that most expected to learn about people and relationships and to gain useful skills and knowledge. At the end of the term, most described the course as a survey, and the course fell short of many of their expectations.


2020 ◽  
Vol 185 (3-4) ◽  
pp. e358-e363
Author(s):  
Erin S Barry ◽  
Ting Dong ◽  
Steven J Durning ◽  
Deanna Schreiber-Gregory ◽  
Dario Torre ◽  
...  

Abstract Introduction Any implicit and explicit biases that exist may alter our interpretation of people and events. Within the context of assessment, it is important to determine whether biases exist and to decrease any existing biases, especially when rating student performance, in order to provide meaningful, fair, and useful input. The purpose of this study was to determine whether the experience and gender of faculty members contribute to their ratings of students in a military medical field practicum. This information is important for fair ratings of students. Three research questions were addressed: Were there differences between new and experienced faculty raters? Were there differences in assessments provided by female and male faculty members? Did the gender of faculty raters affect ratings of female and male students? Materials and Methods This study examined trained faculty evaluators' ratings of three cohorts of medical students during a medical field practicum from 2015 to 2017. Female (n = 80) and male (n = 161) faculty and female (n = 158) and male (n = 311) students were included. Within this dataset, there were 469 students and 241 faculty, resulting in 5,599 ratings for each of six outcome variables relating to overall leader performance, leader competence, and leader communication. Descriptive statistics were computed for all variables for the first four observations of each student. Descriptive analyses were performed for evaluator experience status and gender differences on each of the six variables. A multivariate analysis of variance (MANOVA) was performed to examine whether there were differences by faculty gender and student gender. Results Descriptive analyses of the experience status of faculty revealed no significant differences between means on any of the rating elements. Descriptive analyses of faculty gender revealed no significant differences between female and male faculty ratings of the students. The overall MANOVA found no statistically significant difference between female and male students on the combined dependent variables of leader performance for any of the four observations. Conclusions The study revealed that there were no differences in ratings of student leader performance based on faculty experience. In addition, there were no differences in ratings of student leader performance based on faculty gender.
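A minimal sketch of the kind of MANOVA described above (six rating outcomes tested against faculty and student gender) is shown below using statsmodels. The file name, outcome labels y1–y6, and factor names are assumptions for illustration, not the study's actual variables.

```python
# Sketch: MANOVA on six leader-rating outcomes by faculty and student gender (assumed names).
import pandas as pd
from statsmodels.multivariate.manova import MANOVA

# Assumed layout: one row per faculty-student rating with six outcome columns
# (y1..y6) and categorical columns faculty_gender and student_gender.
ratings = pd.read_csv("field_practicum_ratings.csv")

manova = MANOVA.from_formula(
    "y1 + y2 + y3 + y4 + y5 + y6 ~ faculty_gender * student_gender",
    data=ratings,
)
print(manova.mv_test())  # Wilks' lambda, Pillai's trace, etc. for each effect
```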


2018 ◽  
Author(s):  
Joseph DeWilde ◽  
Esha Rangnekar ◽  
Jeffrey Ting ◽  
Joseph Franek ◽  
Frank S. Bates ◽  
...  

A biannual chemistry demonstration-based show named "Energy and U" was created to extend the general outreach themes of STEM fields and a college education with a specific goal: to teach the First Law of Thermodynamics to elementary school students. Energy is a central concept in chemical education and in most STEM disciplines, and it lies at the foundation of many of the greatest challenges faced by society today. The effectiveness of the program was analyzed using a clicker survey system. This study provides one of the first examples of incorporating real-time feedback into large-scale, chemistry-based outreach events for elementary school students in order to quantify and better understand the broader impact and learning outcomes.


Akademika ◽  
2019 ◽  
Vol 8 (01) ◽  
pp. 81-100
Author(s):  
Eva Kristiyani ◽  
Iffah Budiningsih

The aim of this research was to determine the influence of an e-learning instructional strategy and interest in learning on accounting learning outcomes. The research was conducted at SMK Permata Bangsa, Kelurahan Jakasetia, South Bekasi Subdistrict, Bekasi City, involving 56 samples drawn with a random sampling technique from equivalent classes. The instruments used were an accounting test and a questionnaire on students' interest in learning; the data were analyzed with a two-way ANOVA and the Tukey test. The results of the study were: (1) there is a significant difference between the learning outcomes of students taught with the e-learning strategy and those taught with the expository strategy, with the accounting learning outcomes of students taught by the e-learning strategy being higher than those of students taught by the expository strategy; (2) there is an interaction between the learning strategy and interest in learning on accounting learning outcomes; (3) among students with high learning interest, the accounting learning outcomes of the group taught with the e-learning strategy are significantly higher than those of the group taught with the expository strategy; and (4) among students with low learning interest, the learning outcomes of the group taught with the e-learning strategy are the same as those of the group taught with the expository strategy, influenced by student environment factors and learning design factors in the research.
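The analysis named above (two-way ANOVA with a follow-up Tukey test) could be sketched as follows. The data file and column names (score, strategy, interest) are illustrative assumptions, not the study's dataset.

```python
# Sketch: two-way ANOVA (strategy x interest) with a Tukey HSD follow-up, assumed column names.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf
from statsmodels.stats.multicomp import pairwise_tukeyhsd

df = pd.read_csv("accounting_scores.csv")  # assumed columns: score, strategy ("e-learning"/"expository"), interest ("high"/"low")

# Two-way ANOVA with an interaction between teaching strategy and learning interest.
model = smf.ols("score ~ C(strategy) * C(interest)", data=df).fit()
print(sm.stats.anova_lm(model, typ=2))

# Tukey HSD comparing the four strategy-by-interest cells.
cells = df["strategy"] + "/" + df["interest"]
print(pairwise_tukeyhsd(df["score"], cells))
```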


2018 ◽  
Vol 16 (1) ◽  
pp. 67-76
Author(s):  
Disyacitta Neolia Firdana ◽  
Trimurtini Trimurtini

This research aimed to determine the feasibility and effectiveness of big book media for learning equivalent fractions in the fourth grade. The research method was Research and Development (R&D). The study was conducted in the fourth grade of SDN Karanganyar 02, Kota Semarang. Data sources were media validation, material validation, learning outcomes, and teacher and student responses to the developed media. A pre-experimental design with a one-group pretest-posttest design was used. The developed big book consists of equivalent-fraction material, student learning activity sheets with rectangle and circle shape pictures, and questions about equivalent fractions, and it was developed based on student and teacher needs. The big book earned a media validity score of 3.75 (very good criteria) and a score of 3 from material experts (good criteria). In the large-scale trial, students' posttest results showed a learning-outcome mastery rate of 82.14%. The N-gain calculation yielded 0.55, indicating the "medium" criterion. The t-test result (9.6320 > 2.0484) means that the average posttest outcome is better than the average pretest outcome. Based on these data, the study produced big book media that is feasible and effective as a medium for learning equivalent fractions in the fourth grade of elementary school.
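For reference, the normalized gain (N-gain) reported above is conventionally computed as g = (post − pre) / (max − pre), with 0.3 ≤ g < 0.7 classified as "medium", and the pre/post comparison as a paired t-test. The sketch below uses invented scores purely for illustration, not the study's data.

```python
# Sketch of the N-gain and paired t-test computations (illustrative scores only).
import numpy as np
from scipy import stats

pre = np.array([55, 60, 48, 70, 65], dtype=float)    # illustrative pretest scores (0-100)
post = np.array([78, 85, 72, 90, 80], dtype=float)   # illustrative posttest scores (0-100)
max_score = 100.0

# Normalized gain: g = (post - pre) / (max - pre); 0.3 <= g < 0.7 counts as "medium".
n_gain = np.mean((post - pre) / (max_score - pre))

# Paired t-test comparing posttest against pretest scores.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"N-gain = {n_gain:.2f}, t = {t_stat:.2f}, p = {p_value:.4f}")
```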

