Assessment of Student Learning Through Reflection on Doing in Engineering Design

2021 ◽  
Author(s):  
Yanwei Sun ◽  
Shan Peng ◽  
Zachary Ball ◽  
Zhenjun Ming ◽  
Janet K. Allen ◽  
...  

Abstract How can instructors leverage assessment instruments in design, build, and test courses to simultaneously improve student outcomes and assess student learning well enough to improve courses? A Take-away is one type of assessment instrument: unstructured text written by a student in AME4163: Principles of Engineering Design at the University of Oklahoma, Norman, US, to record what they understand by reflecting on authentic, immersive experiences throughout the semester. The immersive experiences include lectures, assignments, reviews, building, testing, and a post-analysis for the design of an electro-mechanical system to address a given customer need. In the context of a Take-away, a student then writes a Learning Statement, a single sentence written as a triple, i.e., Experience|Learning|Value. Over the past three years at the University of Oklahoma (OU), we collected about 18,000 Take-aways and 18,000 Learning Statements from almost 400 students. In our earlier papers, we concentrated primarily on analyzing students’ Learning Statements with a text mining framework. In this paper, we focus on analyzing the students’ Take-away data using the Latent Dirichlet Allocation (LDA) algorithm and then relating the Take-away data to the instructor’s expectations using text similarity. By connecting and comparing what students learned (embodied in Take-aways) with what instructors expected the students to learn (embodied in the course booklet), we provide evidence-based guidance to instructors on improving the delivery of AME4163: Principles of Engineering Design. The proposed method can be generalized for the assessment of ABET Student Outcomes 2 and 7.
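As an illustration of the LDA step described in this abstract, a minimal sketch is given below. It is not the authors' implementation; the sample Take-away texts, the scikit-learn pipeline, and the number of topics are illustrative assumptions.

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.decomposition import LatentDirichletAllocation

# Hypothetical Take-away texts; the real data are the ~18,000 unstructured
# Take-aways collected in AME4163 over three years.
takeaways = [
    "Testing the prototype early revealed weaknesses in the drivetrain design.",
    "Design reviews helped us refine the functional requirements before building.",
    "Post-analysis showed that planning the assembly sequence reduces rework.",
]

# Convert the unstructured text into a document-term matrix.
vectorizer = CountVectorizer(stop_words="english")
doc_term = vectorizer.fit_transform(takeaways)

# Fit LDA; three topics is a placeholder choice, not the authors' setting.
lda = LatentDirichletAllocation(n_components=3, random_state=0)
lda.fit(doc_term)

# Show the top words per topic so the themes can later be compared with the
# instructor's expectations in the course booklet.
terms = vectorizer.get_feature_names_out()
for topic_idx, weights in enumerate(lda.components_):
    top_terms = [terms[i] for i in weights.argsort()[::-1][:6]]
    print(f"Topic {topic_idx}: {', '.join(top_terms)}")
```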

Author(s):  
Shan Peng ◽  
Zhenjun Ming ◽  
Janet K. Allen ◽  
Zahed Siddique ◽  
Farrokh Mistree

Abstract In this paper we address the following question: How can instructors leverage assessment instruments in design, build, and test courses to simultaneously improve student outcomes and assess student learning well enough to improve courses for future students? A learning statement is a structured text-based construct for students to record what they learned by reflecting on authentic immersive experiences in a semester-long engineering design course. The immersive experiences include lectures, assignments, reviews, building, testing, and a post-analysis of an electro-mechanical device to address a given customer need. Over the past three years, in the School of Aerospace and Mechanical Engineering at the University of Oklahoma, Norman, we have collected almost 30,000 learning statements from almost 400 students. In the past few years, we have analyzed these data to improve our understanding of what students have learned by reflecting on doing and thence how we might improve the delivery of the course. In an earlier paper, we described a text mining framework to facilitate the analysis of a vast number of learning statements. Our focus, in the earlier paper, was on describing the functionalities (i.e., data cleaning, data management, text analysis, and visualization of results) of the framework and demonstrating one of the text quantification methods — term frequency — using the learning statements. In this paper, we focus on demonstrating another text quantification method, namely, text similarity, to facilitate instructors’ gaining new insights from students’ learning statements. Text similarity measures the cosine distance between two text vectors and is typically used to compare the semantic similarity between documents. In this paper, we compare the similarity between what students learned (embodied in learning statements) and what instructors expected the students to learn (embodied in the course booklet), thus providing evidence-based guidance to instructors on how to improve the delivery of AME4163 – Principles of Engineering Design.
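The text-similarity step described in this abstract compares learning statements to the course booklet via cosine similarity between text vectors. A minimal sketch follows, assuming TF-IDF vectors computed with scikit-learn; the sample statements and booklet sections are placeholders, not the actual course materials.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical learning statements (Experience|Learning|Value triples in prose).
learning_statements = [
    "By building the prototype I learned to plan assembly steps, which reduces rework.",
    "Through the design review I learned to justify decisions, which improves communication.",
]
# Hypothetical instructor expectations taken from the course booklet.
booklet_sections = [
    "Students are expected to plan, build, and test an electro-mechanical system.",
    "Students are expected to communicate and justify design decisions in reviews.",
]

# Fit one vocabulary over both corpora, then split the TF-IDF matrix back apart.
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(learning_statements + booklet_sections)
statement_vecs = tfidf[: len(learning_statements)]
booklet_vecs = tfidf[len(learning_statements):]

# similarity[i, j] = cosine similarity of statement i to booklet section j.
similarity = cosine_similarity(statement_vecs, booklet_vecs)
print(similarity.round(2))
```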


Gamification ◽  
2015 ◽  
pp. 1865-1880
Author(s):  
Alex Moseley

There is growing interest in the assessment of student learning within education, not least because assessment practice within some sectors (the UK higher education sector, for example) is stagnant: many courses are designed independently of the assessment method and assessed through a small number of traditional methods. Games-based learning has shown little deviation from this pattern – games themselves are often removed from the assessment of the skills they are designed to teach, and in the worst cases from the intended learning outcomes: gamification being a particularly formulaic example. This chapter makes the case for an integrated approach to assessment within learning games and the wider curriculum, drawing on elements within game design that provide natural opportunities for such integration. To demonstrate and evaluate such an approach, integrated assessment case studies (including a full study from the University of Leicester) are presented and discussed.


2020 ◽  
Vol 13 (5) ◽  
pp. 13
Author(s):  
Ala Alluhaidan ◽  
Evon M. Abu-Taieh

The rapid growth of technology and related fields has prompted academia to create programs that match the expanding demand for IT professionals. One major that currently attracts attention in industry is cybersecurity. There is a great need for individuals who are skilled in cybersecurity to protect IT infrastructure. Training a security-focused workforce has become a target of government agencies, industry, and academic institutions. As research and academic faculties respond to this growing demand, curricula and methodologies for teaching cybersecurity graduates still need to be formed comprehensively. Little research has been done to define and assess student outcomes in cybersecurity. This paper presents a step forward by developing Student Learning Outcomes (SLOs) and the assessment methods desired to measure those outcomes. This research contributes to academia and training institutions by defining the SLOs and suggesting preferable assessment methods in cybersecurity. This initial research is based on a qualitative study in which academics evaluated cybersecurity courses. The paper presents the results of the interviews along with discussions of ongoing work and future suggestions.


AERA Open ◽  
2017 ◽  
Vol 3 (1) ◽  
pp. 233285841769011 ◽  
Author(s):  
Su Swarat ◽  
Pamella H. Oliver ◽  
Lisa Tran ◽  
J. G. Childers ◽  
Binod Tiwari ◽  
...  

Assessment of student learning outcomes (SLOs) has become increasingly important in higher education. Meaningful assessment (i.e., assessment that leads to the improvement of student learning) is impossible without faculty engagement. We argue that one way to elicit genuine faculty engagement is to embrace the disciplinary differences when implementing a universitywide SLO assessment process so that the process reflects discipline-specific cultures and practices. Framed with Biglan’s discipline classification framework, we adopt a case-study approach to examine the SLO assessment practices in four undergraduate academic programs: physics, history, civil engineering, and child and adolescent studies. We demonstrate that one key factor for these programs’ success in developing and implementing SLO assessment under a uniform framework of university assessment is their adaptation of the university process to embrace the unique disciplinary differences.


2013 ◽  
Vol 94 (10) ◽  
pp. 1501-1506 ◽  
Author(s):  
Bradley G. Illston ◽  
Jeffrey B. Basara ◽  
Christopher Weiss ◽  
Mike Voss

The WxChallenge, a project developed at the University of Oklahoma, brings a state-of-the-art, fun, and exciting forecast contest to participants at colleges and universities across North America. The challenge is to forecast the maximum and minimum temperatures, precipitation, and maximum wind speeds for select locations across the United States over a 24-h prediction period. The WxChallenge is open to all undergraduate and graduate students, as well as higher-education faculty, staff, and alumni. Through the use of World Wide Web interfaces accessible by personal computers, tablet computers, and smartphones, the WxChallenge provides a state-of-the-art portal that aids participants in submitting forecasts and alleviates many of the administrative issues (e.g., tracking and scoring) faced by local managers and professors. Since its inception in 2006, 110 universities have participated in the contest, and it has been utilized as part of the curricula for 140 classroom courses at various institutions. The inherently challenging nature of the WxChallenge has encouraged its adoption as an educational tool. As its popularity has grown, professors have seen the utility of the WxChallenge as a teaching aid, and it has become an instructional resource in many meteorology classes at institutions of higher learning. In addition to evidence of educational impacts, the competition has already begun to leave a cultural and social mark on the meteorological learning experience.
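The abstract does not describe the WxChallenge's actual scoring rules, so the sketch below only illustrates the general idea of turning a submitted 24-h forecast into an error-based score. The absolute-error scheme, variable names, and sample values are assumptions made for illustration.

```python
def forecast_error(forecast: dict, observed: dict) -> float:
    """Sum of absolute errors across the forecast variables (lower is better).

    This simple penalty is an illustrative assumption, not the contest's rules.
    """
    variables = ("max_temp_f", "min_temp_f", "precip_in", "max_wind_kt")
    return sum(abs(forecast[v] - observed[v]) for v in variables)

# Hypothetical forecast and verification for one site on one contest day.
forecast = {"max_temp_f": 74, "min_temp_f": 55, "precip_in": 0.10, "max_wind_kt": 18}
observed = {"max_temp_f": 71, "min_temp_f": 57, "precip_in": 0.00, "max_wind_kt": 22}
print(forecast_error(forecast, observed))  # total absolute error; lower is better
```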


2020 ◽  
Vol 11 (1) ◽  
pp. 237
Author(s):  
Abdallah Namoun ◽  
Abdullah Alshanqiti

The prediction of student academic performance has drawn considerable attention in education. However, although learning outcomes are believed to improve learning and teaching, prognosticating the attainment of student outcomes remains underexplored. A decade of research work conducted between 2010 and November 2020 was surveyed to present a fundamental understanding of the intelligent techniques used for the prediction of student performance, where academic success is strictly measured using student learning outcomes. The electronic bibliographic databases searched include ACM, IEEE Xplore, Google Scholar, Science Direct, Scopus, Springer, and Web of Science. Eventually, we synthesized and analyzed a total of 62 relevant papers with a focus on three perspectives: (1) the forms in which the learning outcomes are predicted, (2) the predictive analytics models developed to forecast student learning, and (3) the dominant factors impacting student outcomes. The best practices for conducting systematic literature reviews, e.g., PICO and PRISMA, were applied to synthesize and report the main results. The attainment of learning outcomes was measured mainly as performance class standings (i.e., ranks) and achievement scores (i.e., grades). Regression and supervised machine learning models were frequently employed to classify student performance. Finally, student online learning activities, term assessment grades, and student academic emotions were the most evident predictors of learning outcomes. We conclude the survey by highlighting major research challenges and offering a set of recommendations to motivate future work in this field.
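To make the survey's central finding concrete, the sketch below shows the kind of supervised classification it reports as common: predicting a pass/fail class standing from online learning activity, a term assessment grade, and an academic-emotion score. The features, synthetic data, and random-forest choice are illustrative assumptions rather than any particular surveyed model.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
# Synthetic features: online activity count, term assessment grade (0-100),
# academic-emotion score (0-1). Real studies would use logged student data.
X = rng.random((200, 3)) * [50, 100, 1]
# Placeholder labels: pass/fail standing driven mostly by the assessment grade.
y = (X[:, 1] + 10 * X[:, 2] > 60).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(n_estimators=100, random_state=0)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))
```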

