automated assessment
Recently Published Documents


TOTAL DOCUMENTS: 637 (FIVE YEARS: 197)
H-INDEX: 30 (FIVE YEARS: 7)

Author(s): Shilpaa Mukundan, Jordan Bell, Matthew Teryek, Charles Hernandez, Andrea C. Love, ...

2022
Author(s): Thomas David, Robert J. Clarke, Artur Uzieblo, Dimitris Panayiotou, Thomas S. Richardson

2022
Author(s): Ricky E. Putra, Ekohariadi Ekohariadi, I K. D. Nuryana, Yeni Anistyasari

2022
Author(s): Yong-Bin Kang, Abdur Forkan, Prem Prakash Jayaraman, Hung Du, Rohit Kaul, ...

2021
Author(s): Adrien Meynard, Gayan Seneviratna, Elliot Doyle, Joyanne Becker, Hau-Tieng Wu, ...

2021
Vol 37 (5), pp. 98-115
Author(s): Rick Somers, Samuel Cunningham-Nelson, Wageeh Boles

In this study, we applied natural language processing (NLP) techniques in an educational setting to evaluate their usefulness for automated assessment of students' conceptual understanding from their short answer responses. Assessing understanding provides insight into and feedback on students' conceptual understanding, which automated grading often overlooks. Automated formative assessment benefits students and educators, especially in online education and large cohorts, by providing insight into conceptual understanding as and when required. We selected the ELECTRA-small, RoBERTa-base, XLNet-base and ALBERT-base-v2 NLP machine learning models to determine the validity of students' free-text justifications and the level of confidence in their responses. These two pieces of information provide key insights into students' conceptual understanding and the nature of that understanding. We developed a free-text validity ensemble of high-performing NLP models to assess the validity of students' justifications, with accuracies ranging from 91.46% to 98.66%. In addition, we proposed a general, non-question-specific confidence-in-response model that categorises a response as high or low confidence, with accuracies ranging from 93.07% to 99.46%. Because these models perform strongly even on small data sets, educators have a great opportunity to implement these techniques in their own classes.

Implications for practice or policy:
Students' conceptual understanding can be accurately and automatically extracted from their short answer responses using NLP to assess the level and nature of their understanding.
Educators and students can receive feedback on conceptual understanding as and when required through automated assessment, without the overhead of traditional formative assessment.
Educators can implement accurate automated assessment of conceptual understanding with fewer than 100 student responses for their short response questions.
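The abstract above describes an ensemble of fine-tuned transformer classifiers whose binary predictions are combined into a single validity judgement. The paper does not publish its code, so the following is only a minimal sketch of one plausible combination rule (a majority vote with a conservative tie-break); the model names and the voting scheme are illustrative assumptions.

```python
from collections import Counter
from typing import Dict

def ensemble_validity(predictions: Dict[str, int]) -> int:
    """Majority-vote ensemble over per-model binary labels
    (1 = valid justification, 0 = invalid).

    `predictions` maps a model name (e.g. "electra-small",
    "roberta-base", "xlnet-base", "albert-base-v2") to that
    model's binary prediction for one student response.
    Ties fall back to 0 (invalid), the conservative choice.
    """
    votes = Counter(predictions.values())
    return 1 if votes[1] > votes[0] else 0

# One student response, scored by four hypothetical fine-tuned models:
preds = {
    "electra-small": 1,
    "roberta-base": 1,
    "xlnet-base": 0,
    "albert-base-v2": 1,
}
print(ensemble_validity(preds))  # → 1 (three of four models vote "valid")
```

In practice each vote would come from a fine-tuned sequence-classification model applied to the student's free-text justification; the vote aggregation itself stays this simple.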


2021
Author(s): Walid A. Zgallai, Teye Brown, Entesar Z. Dalah, Abdulmunhem K. Obaideen, MoezAlIslam E. Faris

Author(s): Rhaydae Jebli, Jaber El Bouhdidi, Mohamed Yassin Chkouri

Today, educational technology and computer applications can enhance the impact of the educational process, and research interest in such technologies and applications continues to grow. This work presents a proposed architecture for an intelligent system that automatically assesses a student's production when modelling a UML class diagram from a textual specification; such assessment is particularly difficult for diagrammatic answers. The main objective is to develop a system that assesses UML class diagrams by identifying differences and errors made by the students, grading the diagrams, and providing critical feedback, making it easier for teachers to assess any number of students. To achieve this goal, the student's diagram is analysed, transformed, and compared with the reference production provided by the teacher.
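The comparison step described above can be sketched as a set difference over the classes and attributes extracted from the two diagrams. The data model and the scoring rule below are illustrative assumptions, not the authors' implementation: a diagram is reduced to a mapping from class names to attribute sets, and the grade is the fraction of reference elements the student reproduced.

```python
from typing import Dict, List, Set, Tuple

# Illustrative representation: a class diagram as {class name: set of attribute names}.
Diagram = Dict[str, Set[str]]

def compare_diagrams(student: Diagram, reference: Diagram) -> Tuple[float, List[str]]:
    """Compare a student's class diagram against the teacher's reference.

    Returns a grade in [0, 1] (fraction of reference classes and
    attributes the student reproduced) and a list of feedback
    messages describing missing classes, missing attributes, and
    classes not present in the reference.
    """
    feedback: List[str] = []
    total = 0   # reference elements (classes + attributes)
    found = 0   # elements the student reproduced
    for cls, attrs in reference.items():
        total += 1 + len(attrs)
        if cls not in student:
            feedback.append(f"Missing class: {cls}")
            continue
        found += 1
        missing = attrs - student[cls]
        found += len(attrs) - len(missing)
        for attr in sorted(missing):
            feedback.append(f"Class {cls}: missing attribute {attr}")
    for cls in student:
        if cls not in reference:
            feedback.append(f"Unexpected class: {cls}")
    grade = found / total if total else 1.0
    return grade, feedback

reference = {"Book": {"title", "isbn"}, "Author": {"name"}}
student = {"Book": {"title"}, "Reader": set()}
grade, feedback = compare_diagrams(student, reference)
print(round(grade, 2), feedback)  # → 0.4 with feedback on the missing isbn, missing Author, extra Reader
```

A full system would also have to match associations, multiplicities, and renamed-but-equivalent classes before this counting step, which is where most of the real difficulty of assessing diagrammatic answers lies.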

