Automated Essay Scoring

2021 ◽  
Vol 14 (5) ◽  
pp. 1-314
Author(s):  
Beata Beigman Klebanov ◽  
Nitin Madnani

PsycCRITIQUES ◽  
2004 ◽  
Vol 49 (Supplement 14) ◽  
Author(s):  
Steven E. Stemler

2009 ◽  
Author(s):  
Ronald T. Kellogg ◽  
Alison P. Whiteford ◽  
Thomas Quinlan

2019 ◽  
Vol 113 (1) ◽  
pp. 9-30
Author(s):  
Kateřina Rysová ◽  
Magdaléna Rysová ◽  
Michal Novák ◽  
Jiří Mírovský ◽  
Eva Hajičová

Abstract In this paper, we present the EVALD (Evaluator of Discourse) applications for automated essay scoring. EVALD is the first tool of this type for Czech. It evaluates texts written by both native and non-native speakers of Czech. We first describe the history and present state of automated essay scoring, illustrated by examples of systems for other languages, mainly English. Then we focus on the methodology of creating the EVALD applications and describe the datasets used for testing as well as for the supervised training that EVALD builds on. Furthermore, we analyze in detail a sample of newly acquired language data – texts written by non-native speakers reaching the threshold level of Czech language acquisition required, e.g., for permanent residence in the Czech Republic – and we focus on linguistic differences between the available text levels. We present the feature set used by EVALD and, based on the analysis, extend it with new spelling features. Finally, we evaluate the overall performance of various variants of EVALD and provide an analysis of the collected results.
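The abstract describes a feature-based, supervised approach: linguistic and spelling features are extracted from each text, and a classifier trained on texts with known levels predicts the level of a new one. The sketch below illustrates that general pattern only; the features, training data, and classifier are assumptions chosen for illustration and are not EVALD's actual feature set or pipeline.

```python
# Minimal sketch of feature-based supervised essay scoring in the spirit of the
# approach described above (surface/spelling features + a supervised classifier).
# Features, data, and labels are illustrative, not EVALD's actual components.
import re
from sklearn.ensemble import RandomForestClassifier

def surface_features(text: str) -> list[float]:
    tokens = re.findall(r"\w+", text)
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    n_tok = max(len(tokens), 1)
    return [
        len(tokens),                                   # text length in tokens
        len(tokens) / max(len(sentences), 1),          # average sentence length
        len(set(t.lower() for t in tokens)) / n_tok,   # type/token ratio
        sum(1 for t in tokens if len(t) > 6) / n_tok,  # share of long words
    ]

# Hypothetical training data: (essay text, proficiency label such as a CEFR level).
train_texts = ["Dobrý den, jmenuji se Petr a bydlím v Brně.",
               "Praha je hlavní město České republiky a má bohatou historii."]
train_labels = ["A2", "B1"]

X = [surface_features(t) for t in train_texts]
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X, train_labels)

print(clf.predict([surface_features("Nový text, který chceme ohodnotit.")]))
```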


2005 ◽  
Vol 33 (1) ◽  
pp. 101-113 ◽  
Author(s):  
P. Adam Kelly

Powers, Burstein, Chodorow, Fowles, and Kukich (2002) suggested that automated essay scoring (AES) may benefit from the use of “general” scoring models designed to score essays irrespective of the prompt for which an essay was written. They reasoned that such models may enhance score credibility by signifying that an AES system measures the same writing characteristics across all essays. They reported empirical evidence that general scoring models performed nearly as well in agreeing with human readers as did prompt-specific models, the “status quo” for most AES systems. In this study, general and prompt-specific models were again compared, but this time, general models performed as well as or better than prompt-specific models. Moreover, general models measured the same writing characteristics across all essays, while prompt-specific models measured writing characteristics idiosyncratic to the prompt. Further comparison of model performance across two different writing tasks and writing assessment programs bolstered the case for general models.
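The comparison described above can be pictured as training one model on essays pooled across prompts and one model per prompt, then checking how often each agrees with human readers on held-out essays. The sketch below shows that setup with synthetic data and a simple regressor; the features, score scale, and learner are assumptions, not the models used by Powers et al. or in this study.

```python
# Illustrative sketch: general model (trained on essays pooled across prompts)
# vs. prompt-specific models, each evaluated by exact agreement with human scores.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)

def make_prompt_data(n=200, d=6):
    """Hypothetical feature matrix and human scores (1-6) for one prompt."""
    X = rng.normal(size=(n, d))
    y = np.clip(np.round(X[:, :3].sum(axis=1) + 3.5), 1, 6)
    return X, y

def exact_agreement(pred, gold):
    return float(np.mean(np.clip(np.round(pred), 1, 6) == gold))

prompts = {p: make_prompt_data() for p in ["prompt_A", "prompt_B", "prompt_C"]}

# General model: pool the training halves of all prompts.
X_pool = np.vstack([X[:100] for X, _ in prompts.values()])
y_pool = np.concatenate([y[:100] for _, y in prompts.values()])
general = LinearRegression().fit(X_pool, y_pool)

for name, (X, y) in prompts.items():
    specific = LinearRegression().fit(X[:100], y[:100])  # prompt-specific model
    print(name,
          "general:", exact_agreement(general.predict(X[100:]), y[100:]),
          "specific:", exact_agreement(specific.predict(X[100:]), y[100:]))
```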


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Wee Sian Wong ◽  
Chih How Bong

Automated Essay Scoring (AES) is the use of specialized computer programs to assign grades to essays written in an educational assessment context. It was developed to overcome time, cost, and reliability issues in writing assessment. Most contemporary AES systems are "Western" proprietary products designed for native English speakers, whose source code is not made available to the public and whose assessment criteria tend to be associated with the scoring rubrics of a particular English test context. Therefore, such AES systems may not be appropriate for direct adoption in the Malaysian context. No actual software development work has been found on building an AES for the Malaysian English test environment. As such, this work is carried out as a study to formulate the requirements of a local AES targeted at Malaysia's essay assessment environment. In our work, we assess a well-known AES tool called LightSide to determine its suitability in our local context. We use various machine learning techniques provided by LightSide to predict the scores of Malaysian University English Test (MUET) essays and compare its performance, i.e., the percentage of exact agreement between LightSide and the human scores of the essays. In addition, we review and discuss the theoretical aspects of AES, i.e., its state of the art and its reliability and validity requirements. The findings in this paper will be used as the basis of our future work in developing a local AES, namely the Intelligent Essay Grader (IEG), for the Malaysian English test environment.
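The performance measure mentioned above, the percentage of exact agreement, is simply the share of essays for which the machine score matches the human score. A minimal sketch follows, with hypothetical score values standing in for LightSide's predictions.

```python
# Minimal sketch of the evaluation measure described above: percentage of exact
# agreement between machine-predicted and human-assigned essay scores.
# The score values below are hypothetical, not results from the study.
def exact_agreement_pct(predicted, human):
    """Share of essays where the machine score equals the human score, as %."""
    if len(predicted) != len(human):
        raise ValueError("score lists must be the same length")
    matches = sum(p == h for p, h in zip(predicted, human))
    return 100.0 * matches / len(human)

# Hypothetical MUET-style band scores for six essays.
machine_scores = [4, 3, 5, 4, 2, 5]
human_scores   = [4, 3, 4, 4, 2, 6]
print(f"Exact agreement: {exact_agreement_pct(machine_scores, human_scores):.1f}%")
```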


2021 ◽  
Author(s):  
Jinghua Gao ◽  
Qichuan Yang ◽  
Yang Zhang ◽  
Liuxin Zhang ◽  
Siyun Wang
