essay scoring
Recently Published Documents


TOTAL DOCUMENTS: 296 (last five years: 109)
H-INDEX: 18 (last five years: 2)

2022
Author(s): Anubha Kabra, Mehar Bhatia, Yaman Kumar Singla, Junyi Jessy Li, Rajiv Ratn Shah

Psych, 2021, Vol 3 (4), pp. 897-915
Author(s): Sabrina Ludwig, Christian Mayer, Christopher Hansen, Kerstin Eilers, Steffen Brandt

Automated essay scoring (AES) is gaining increasing attention in the education sector, as it significantly reduces the burden of manual scoring and allows ad hoc feedback for learners. Natural language processing based on machine learning has been shown to be particularly suitable for text classification and AES. While many machine-learning approaches for AES still rely on a bag-of-words (BOW) approach, we consider a transformer-based approach in this paper, compare its performance to a logistic regression model based on the BOW approach, and discuss their differences. The analysis is based on 2088 email responses to a problem-solving task that were manually labeled in terms of politeness. Both transformer models considered in the analysis outperformed the regression-based model without any hyperparameter tuning. We argue that, for AES tasks such as politeness classification, the transformer-based approach has significant advantages, while a BOW approach suffers from ignoring word order and reducing words to their stems. Further, we show how such models can help increase the accuracy of human raters, and we provide detailed instructions on how to implement transformer-based models for one's own purposes.
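As an illustration of the comparison the abstract describes, here is a minimal sketch, not the authors' code: a BOW logistic regression baseline next to fine-tuning a pretrained transformer for binary politeness classification with the Hugging Face libraries. The toy texts, the bert-base-uncased checkpoint, and the hyperparameters are placeholders.

```python
# Minimal sketch contrasting a BOW baseline with a fine-tuned transformer
# for binary politeness classification. Toy data and hyperparameters are
# placeholders, not the values used in the paper.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

texts = ["Could you please send me the report?", "Send me the report now."]
labels = [1, 0]  # 1 = polite, 0 = impolite

# BOW baseline: word counts only, word order is discarded.
bow = make_pipeline(CountVectorizer(), LogisticRegression(max_iter=1000))
bow.fit(texts, labels)

# Transformer: contextual embeddings preserve word order.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=2)

ds = Dataset.from_dict({"text": texts, "label": labels})
ds = ds.map(lambda b: tok(b["text"], truncation=True, padding="max_length",
                          max_length=64), batched=True)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="politeness-bert",
                           num_train_epochs=3,
                           per_device_train_batch_size=8),
    train_dataset=ds,
)
trainer.train()
```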


2021, Vol 5 (4), pp. 279-292
Author(s): Raden Ahmad Hadian Adhy Permana*, Ari Widodo, Wawan Setiawan, Siti Sriyati

Measuring the ability of teachers is part of evaluation in the national education system. Developing instruments in the form of essay questions has become an alternative because essays are considered more authentic than multiple-choice questions. Using an essay instrument, however, raises issues of scoring consistency and resource efficiency. The purpose of this study was to examine the inter-rater reliability between an automated essay scoring system and human raters in measuring teachers' knowledge with an essay instrument. The study was conducted with a random sample of 200 junior high school science teachers. A quantitative design was applied to investigate the intra-class correlation coefficient (ICC) and Pearson's correlation (r) as indicators of the performance of the automated essay scoring system (UKARA). The main data were the participants' test answers, in the form of short essay responses collected online. The inter-rater reliability coefficient between UKARA and the human raters was in the high category (above 0.7) for all items, meaning that the scores given by UKARA correlate strongly with those given by the human raters. These results indicate that UKARA has adequate capability as an automated essay scoring system for measuring science teachers' knowledge.
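For readers unfamiliar with the two reliability indicators, the sketch below, an assumption rather than the study's code, computes Pearson's r with SciPy and the ICC with the third-party pingouin package on toy human/UKARA score pairs.

```python
# Sketch of the two inter-rater reliability indicators reported in the
# study: Pearson's r and the intra-class correlation coefficient (ICC).
# The scores below are toy values, not data from the study.
import pandas as pd
import pingouin as pg  # third-party package providing intraclass_corr
from scipy.stats import pearsonr

human = [3, 4, 2, 5, 4, 3, 1, 4]   # human rater scores per answer
ukara = [3, 4, 3, 5, 4, 2, 1, 4]   # automated (UKARA) scores per answer

r, p = pearsonr(human, ukara)
print(f"Pearson r = {r:.3f} (p = {p:.3f})")

# pingouin expects long format: one row per (answer, rater) pair.
df = pd.DataFrame({
    "answer": list(range(len(human))) * 2,
    "rater": ["human"] * len(human) + ["ukara"] * len(ukara),
    "score": human + ukara,
})
icc = pg.intraclass_corr(data=df, targets="answer",
                         raters="rater", ratings="score")
print(icc[["Type", "ICC"]])
```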


2021
Author(s): Lulu Dong, Lin Li, HongChao Ma, YeLing Liang

Automated Essay Scoring (AES) aims to assign a proper score to an essay written in response to a given prompt, which is a significant application of Natural Language Processing (NLP) in education. In this work, we focus on solving the Chinese AES problem with Pre-trained Language Models (PLMs), including the state-of-the-art models BERT and ERNIE. We build a Chinese essay dataset and conduct extensive AES experiments on it. Our PLM-based AES models achieve 68.70% Quadratic Weighted Kappa (QWK), outperforming a classic feature-based linear regression AES model. The results show that our methods effectively alleviate the dependence on manual features and improve the portability of AES models. Furthermore, we obtain well-performing AES models even with a dataset of limited scale, which mitigates the shortage of datasets in Chinese AES.
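The QWK metric mentioned above penalizes rater disagreements by the square of their distance on the score scale; a minimal sketch with scikit-learn (toy scores, not the paper's data) looks like this:

```python
# Quadratic Weighted Kappa (QWK): agreement between predicted and gold
# essay scores, with disagreements penalized quadratically by distance.
from sklearn.metrics import cohen_kappa_score

gold = [1, 2, 3, 4, 3, 2, 5, 4]  # toy human scores
pred = [1, 2, 3, 3, 3, 2, 4, 4]  # toy model predictions

qwk = cohen_kappa_score(gold, pred, weights="quadratic")
print(f"QWK = {qwk:.4f}")
```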


2021, Vol 14 (5), pp. 1-314
Author(s): Beata Beigman Klebanov, Nitin Madnani

2021
Author(s): Paraskevas Lagakis, Stavros Demetriadis

2021, Vol 24 (4), pp. 223-238
Author(s): Kangyun Park, Yongsang Lee, Dongkwang Shin

Author(s): George Pashev, Silvia Gaftandzhieva, Yuri Hopteriev

The paper presents a methodology and an application framework (PUAnalyzeThis) that uses the MeaningCloud API to automatically extract entities, concepts, relations, etc., and calculates scores and grades based on their relevance to a topic graph created in advance. Topic graphs are either created by the teacher or generated automatically by scanning a set of sample texts in the subject area. A prototype has been developed and tested with essays in the field of Computational Linguistics for informatics students at the University of Plovdiv “Paisii Hilendarski”.
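A minimal sketch of the scoring idea described above, not the PUAnalyzeThis source: call MeaningCloud's Topics Extraction endpoint, then grade an essay by the relevance-weighted overlap of its extracted concepts with a topic graph. The endpoint, the response field names, and the overlap scoring rule are assumptions on my part.

```python
# Sketch: score an essay by how much of its MeaningCloud-extracted concept
# relevance falls on a predefined topic graph. Endpoint, response fields,
# and the scoring rule are assumptions, not the framework's actual code.
import requests

API_URL = "https://api.meaningcloud.com/topics-2.0"  # assumed endpoint
API_KEY = "YOUR_MEANINGCLOUD_KEY"                    # placeholder

def extract_concepts(text, lang="en"):
    """Return {concept form: relevance} for the given essay text."""
    resp = requests.post(API_URL, data={
        "key": API_KEY, "txt": text, "lang": lang,
        "tt": "ec",  # request entities and concepts (assumed flag)
    })
    resp.raise_for_status()
    payload = resp.json()
    topics = payload.get("concept_list", []) + payload.get("entity_list", [])
    return {t["form"].lower(): int(t.get("relevance", 0)) for t in topics}

def score_essay(text, topic_graph):
    """Fraction of relevance mass that falls on topic-graph nodes."""
    concepts = extract_concepts(text)
    total = sum(concepts.values()) or 1
    hits = sum(rel for form, rel in concepts.items() if form in topic_graph)
    return hits / total

graph = {"tokenization", "corpus", "parsing", "part-of-speech tagging"}
print(score_essay("Tokenization splits a corpus into word units ...", graph))
```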

