Automated Essay Scoring in a High Stakes Testing Environment

Author(s):  
Mark D. Shermis


2017 ◽  
Vol 22 (6) ◽  
pp. 324
Author(s):  
Anthony Fernandes ◽  
Natasha Murray ◽  
Terrence Wyberg

In the current high-stakes testing environment, a mention of assessment is inevitably associated with large-scale summative assessments at the end of the school year. Although these assessments serve an important purpose, assessing students' learning is an ongoing process that takes place in the classroom on a regular basis. Effectively gathering information about student understanding is integral to all aspects of mathematics instruction. Formative assessments conducted in the classroom have the potential to provide important feedback about students' understanding, guide future instruction to improve student learning, and provide roadmaps for both teachers and students in the process of learning.


Author(s):  
Jinnie Shin ◽  
Qi Guo ◽  
Mark J. Gierl

The recent transition from paper-based to digitally based assessment has brought many positive changes to educational testing. For example, many high-stakes exams have started implementing essay-type questions because they allow students to express their understanding creatively, in their own words. To reduce the burden of scoring these items, the implementation of automated essay scoring (AES) systems has gained more attention. However, despite several successful demonstrations, AES still faces many criticisms from practitioners. These concerns often center on the prediction accuracy and interpretability of the scoring algorithms, so overcoming these challenges is critical for AES to be widely adopted in the field. The purpose of this chapter is to introduce deep learning AES models and to describe how certain aspects of these models can be used to address the challenges of prediction accuracy and interpretability.
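To make the kind of model this chapter introduces concrete, the sketch below shows a minimal attention-based neural essay scorer in PyTorch. It is an illustrative assumption rather than the authors' architecture: the vocabulary size, layer widths, and attention design are placeholders, and the attention weights are returned only to suggest one way such models can expose which tokens drove a predicted score, which speaks to the interpretability concern.

```python
# Minimal sketch of an attention-based neural essay scorer (illustrative only;
# vocabulary size, layer widths, and the attention design are assumptions,
# not the architecture described in the chapter).
import torch
import torch.nn as nn

class AttentionEssayScorer(nn.Module):
    def __init__(self, vocab_size=20000, embed_dim=128, hidden_dim=64):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim, padding_idx=0)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.attn = nn.Linear(2 * hidden_dim, 1)   # one attention weight per token
        self.score = nn.Linear(2 * hidden_dim, 1)  # maps the pooled essay vector to a score

    def forward(self, token_ids):
        states, _ = self.lstm(self.embed(token_ids))       # (batch, seq, 2*hidden)
        weights = torch.softmax(self.attn(states), dim=1)  # token-level attention weights
        essay_vec = (weights * states).sum(dim=1)          # attention-weighted pooling
        return self.score(essay_vec).squeeze(-1), weights.squeeze(-1)

# The returned attention weights give a token-level view of what influenced the
# predicted score, one way such models address the interpretability concern.
scorer = AttentionEssayScorer()
dummy_batch = torch.randint(1, 20000, (4, 300))  # 4 synthetic essays, 300 token ids each
scores, attention = scorer(dummy_batch)
```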


2015 ◽  
Vol 1 (1) ◽  
pp. 42
Author(s):  
Corrie Rebecca Block

In order to succeed in the current school assessment and accountability era, a public Montessori school is expected to achieve high student scores on standardized assessments. A problem for a public Montessori elementary school is how to make sense of the school's high-stakes assessment scores in terms of its unique educational approach. This case study examined a public Montessori elementary school's efforts as the school implemented the Montessori Method within the accountability era. The research revealed the ways the principal, teachers, and parents on the school council modified Montessori practices, curriculum, and assessment procedures based on test scores. A quality Montessori education is designed to offer children opportunities to develop both cognitive skills and affective components, such as student motivation and socio-emotional skills, that will serve them beyond their public school experiences. Sadly, the high-stakes testing environment influences much of public education today. When quality education was measured through only one narrow measure of success, the result in this school was clearly a restriction of priorities to areas that were easily assessed.


2020 ◽  
pp. 026553222093783
Author(s):  
Jinnie Shin ◽  
Mark J. Gierl

Automated essay scoring (AES) has emerged as a secondary or sole marker for many high-stakes educational assessments, in both native and non-native testing, owing to remarkable advances in feature engineering using natural language processing, machine learning, and deep neural algorithms. The purpose of this study is to compare the effectiveness and performance of two AES frameworks: one based on machine learning with deep, complex language features, and the other based on deep neural algorithms. More specifically, support vector machines (SVMs) in conjunction with Coh-Metrix features were used to develop a traditional AES model, and convolutional neural networks (CNNs) were used to develop a more contemporary deep-neural model. The strengths and weaknesses of the traditional and contemporary models were then tested under different circumstances (e.g., rubric type, essay length, and essay type). The results were evaluated using the quadratic weighted kappa (QWK) score and compared with the agreement between the human raters. The results indicated that the CNN model performed better, producing results more comparable to the human raters than the Coh-Metrix + SVM model. Moreover, the CNN model also achieved state-of-the-art performance on most of the essay sets, with a high average QWK score.
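The agreement statistic used in this study, the quadratic weighted kappa, is straightforward to reproduce. The sketch below shows a toy version of the traditional pipeline, an SVM over a placeholder feature matrix standing in for Coh-Metrix output, evaluated against synthetic human ratings with scikit-learn's QWK. The data, the 0-5 rubric, and the feature dimensionality are assumptions for illustration, not the study's materials.

```python
# Sketch of the evaluation step described above: scoring agreement with human
# raters via quadratic weighted kappa (QWK). The feature matrix stands in for
# Coh-Metrix output, which is not reproduced here; all values are synthetic.
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics import cohen_kappa_score

rng = np.random.default_rng(0)
X = rng.normal(size=(500, 40))           # placeholder linguistic feature vectors
human_scores = rng.integers(0, 6, 500)   # synthetic human ratings on a 0-5 rubric

svm = SVC(kernel="rbf").fit(X[:400], human_scores[:400])  # train on the first 400 essays
predicted = svm.predict(X[400:])                          # score the held-out 100 essays

# QWK penalizes large disagreements more heavily than adjacent ones,
# which is why it is the standard agreement index for essay scoring.
qwk = cohen_kappa_score(human_scores[400:], predicted, weights="quadratic")
print(f"SVM vs. human QWK: {qwk:.3f}")
```

A CNN scorer would be evaluated the same way: predicted scores are rounded to the rubric scale and compared against the human ratings with the same QWK call, so the two frameworks can be ranked on a common agreement index.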


Author(s):  
Jori Hall

This case study uses multiple methods and gathers perspectives from administrators, teachers, and students to examine how a middle school develops internal accountability (Elmore, 2004) to address the needs of its diverse learners and external accountability mandates. Building on Newmann, King, and Rigdon's (1997) framework for collective capacity, the school's capacity to enact its internal accountability is explored. An in-depth investigation of the school's mathematics program, focused on the academic needs of low-income, African American learners, further explores collective capacity as it is enacted primarily through teachers' instructional strategies. The data presented contribute to a more complex and contextual perspective on teaching and learning within a high-stakes testing environment. The findings of this study show that, despite tensions around student accountability and curricular demands, the school successfully incorporates internally generated accountability and mandated strategies into its internal accountability system and demonstrates leadership capacity at multiple levels.

