Constructed Responses
Recently Published Documents


TOTAL DOCUMENTS: 45 (five years: 9)
H-INDEX: 6 (five years: 0)

Author(s): José Ángel Martínez-Huertas, Ricardo Olmos, Guillermo Jorge-Botana, José A. León

Abstract: In this paper, we highlight the importance of distilling the computational assessments of constructed responses to validate the indicators/proxies of constructs/trins, using an empirical illustration in automated summary evaluation. We present the validation of the Inbuilt Rubric (IR) method, which maps rubrics into vector spaces for the assessment of concepts. Specifically, we improved and validated the performance of its scores using latent variables, a common approach in psychometrics. We also validated a new hierarchical vector space, namely a bifactor IR. A total of 205 Spanish undergraduate students produced 615 summaries of three different texts, which were evaluated by human raters and by different versions of the IR method using latent semantic analysis (LSA). The computational scores were validated using multiple linear regressions and different latent variable models such as CFAs and SEMs. Convergent and discriminant validity was found for the IR scores using human rater scores as validity criteria. While this study was conducted in Spanish, the proposed scheme is language-independent and applicable to any language. We highlight four main conclusions: (1) accurate performance can be observed in topic-detection tasks without the hundreds or thousands of pre-scored samples required by supervised models; (2) convergent/discriminant validity can be improved by using measurement models for computational scores, as they adjust for measurement error; (3) nouns embedded in fragments of instructional text can be an affordable alternative for applying the IR method; and (4) hierarchical models, like the bifactor IR, can increase the validity of computational assessments by evaluating general and specific knowledge in vector space models. R code is provided to apply the classic and bifactor IR methods.
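
The code released with the paper is in R, but the core idea of the IR method (mapping each rubric concept into the same LSA vector space as the summaries and scoring by cosine similarity) can be sketched briefly. Below is a minimal, hypothetical Python sketch with an invented toy corpus; it illustrates the projection-and-similarity step only, not the authors' validated implementation or their latent variable models.

    # Toy illustration of rubric-based LSA scoring. All texts are invented;
    # the authors' released implementation is in R and uses a full Spanish
    # semantic space, not a four-sentence corpus.
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    corpus = [
        "photosynthesis converts light energy into chemical energy",
        "chlorophyll absorbs light in plant cells",
        "cellular respiration releases energy from glucose",
        "mitochondria are the site of cellular respiration",
    ]
    rubric_concepts = [
        "light energy chlorophyll photosynthesis",  # rubric axis 1
        "glucose energy respiration mitochondria",  # rubric axis 2
    ]
    summaries = ["plants use chlorophyll to capture light energy"]

    vectorizer = TfidfVectorizer()
    X = vectorizer.fit_transform(corpus)  # weighted term-document matrix

    # LSA is a truncated SVD of the term-document matrix; a real space would
    # use hundreds of dimensions, which this toy corpus cannot support.
    lsa = TruncatedSVD(n_components=2, random_state=0).fit(X)

    def embed(texts):
        return lsa.transform(vectorizer.transform(texts))

    # One row per summary, one column per rubric concept: each cell is the
    # cosine similarity between a summary and a mapped rubric concept.
    scores = cosine_similarity(embed(summaries), embed(rubric_concepts))
    print(scores)

Concept-level scores of this kind are the computational indicators that the abstract's measurement models (CFA/SEM, bifactor) would then be fitted to.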


2021, Vol. 54 (107), pp. 1061-1088
Author(s): Lauren E. Flynn, Danielle S. McNamara, Kathryn S. McCarthy, Joseph P. Magliano, Laura K. Allen

2019, Vol. 80 (2), pp. 399-414
Author(s): Noelle LaVoie, James Parker, Peter J. Legree, Sharon Ardison, Robert N. Kilcullen

Automated scoring based on latent semantic analysis (LSA) has been used successfully to score essays and constrained short-answer responses. Tests that elicit open-ended short-answer responses, however, pose challenges for machine learning approaches. We used LSA techniques to score short-answer responses to the Consequences Test, a measure of creativity and divergent thinking that encourages a wide range of potential responses. Analyses demonstrated that the LSA scores were highly correlated with conventional Consequences Test scores, reaching a correlation of .94 with human raters, and were moderately correlated with performance criteria. This approach to scoring short-answer constructed responses solves several practical problems, including the time required for human raters to score open-ended responses and the difficulty of achieving reliable scoring.
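
The abstract does not detail the scoring model, so the sketch below shows one generic way LSA is often applied to open-ended short answers: embed responses and pre-selected exemplar answers in the same space, take similarity to the nearest exemplar as the machine score, and correlate machine scores with human ratings. All texts, scores, and the exemplar-based design are invented for illustration; this is not the authors' pipeline.

    # Generic LSA short-answer scoring sketch (invented data, hypothetical
    # exemplar-based design; not the scoring model used in the paper).
    import numpy as np
    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.decomposition import TruncatedSVD
    from sklearn.metrics.pairwise import cosine_similarity

    exemplars = [  # reference answers to a Consequences-style prompt
        "people could not keep track of appointments or schedules",
        "work and travel would become difficult to coordinate",
    ]
    responses = [
        "nobody would arrive at meetings on time",
        "transport schedules would fall apart",
        "bananas are yellow",  # off-topic, expected to score lower
    ]
    human_scores = np.array([2.0, 1.5, 0.0])  # invented ratings, 0-2 scale

    docs = exemplars + responses
    vec = TfidfVectorizer().fit(docs)
    lsa = TruncatedSVD(n_components=2, random_state=0).fit(vec.transform(docs))

    def embed(texts):
        return lsa.transform(vec.transform(texts))

    # Machine score: similarity to the closest exemplar answer.
    machine = cosine_similarity(embed(responses), embed(exemplars)).max(axis=1)

    # Agreement with human raters; the paper reports r = .94 on real data.
    r = np.corrcoef(machine, human_scores)[0, 1]
    print(f"machine scores = {machine}, r with humans = {r:.2f}")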


2016, Vol. 35 (1), pp. 101-120
Author(s): Zhen Wang, Klaus Zechner, Yu Sun

As automated scoring systems for spoken responses are increasingly used in language assessments, testing organizations need to analyze their performance relative to human raters across several dimensions, for example on individual items or for subgroups of test takers. Testing organizations also need rigorous procedures for monitoring the performance of both human and automated scoring processes during operational administrations. This paper provides an overview of the automated speech scoring system SpeechRater℠ and shows how to use charts and evaluation statistics to monitor and evaluate automated scores and human rater scores of spoken constructed responses.
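
As a rough illustration of the evaluation statistics such monitoring typically involves, the sketch below computes common human/machine agreement indices (quadratic weighted kappa, correlation, standardized mean difference) and applies a simple control-chart-style threshold check. The scores and thresholds are invented and are not SpeechRater's operational values.

    # Illustrative monitoring statistics for human vs. automated scores.
    # All data and thresholds are invented, not operational values.
    import numpy as np
    from sklearn.metrics import cohen_kappa_score

    human = np.array([3, 4, 2, 3, 4, 3, 2, 4])    # human scores, 1-4 scale
    machine = np.array([3, 4, 3, 3, 4, 2, 2, 4])  # automated scores, same scale

    qwk = cohen_kappa_score(human, machine, weights="quadratic")
    r = np.corrcoef(human, machine)[0, 1]
    # Standardized mean difference flags systematic leniency/severity drift.
    smd = (machine.mean() - human.mean()) / human.std(ddof=1)

    print(f"QWK = {qwk:.2f}, r = {r:.2f}, SMD = {smd:.2f}")

    # Control-chart-style check: flag administrations whose agreement falls
    # outside monitoring limits (limits here are purely illustrative).
    if qwk < 0.70 or abs(smd) > 0.15:
        print("flag: human/automated agreement outside monitoring limits")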

