How Flexible Is Your Data? A Comparative Analysis of Scoring Methodologies across Learning Platforms in the Context of Group Differentiation

2017
Vol 4 (2)
Author(s):  
Korinn S. Ostrow
Yan Wang
Neil T. Heffernan

Data is flexible in that it is molded not only by the features and variables available to a researcher for analysis and interpretation, but also by how those features and variables are recorded and processed prior to evaluation. "Big Data" from online learning platforms and intelligent tutoring systems is no different. The work presented herein questions the quality and flexibility of data from two popular learning platforms, comparing binary measures of problem-level accuracy, the scoring method typically used to inform learner analytics, with partial credit scoring, a more robust, real-world methodology. This work extends previous research by examining how the manipulation of scoring methodology can alter outcomes when testing hypotheses, specifically when looking for significant differences between groups of students. Datasets from ASSISTments and Cognitive Tutor are used to assess the implications of data availability and manipulation within twelve mathematics skills. A resampling approach is used to determine the size of equivalent samples of high- and low-performing students required to reliably differentiate performance under each scoring methodology. Results suggest that in eleven of twelve observed skills, partial credit offers more efficient group differentiation, increasing analytic power and reducing Type II error. Alternative applications of this approach and implications for the Learning Analytics community are discussed.
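
The resampling procedure is straightforward to sketch. The toy below is not the authors' code: the score distributions, the 0.80 power target, and the Welch t-test are all assumptions. It grows the per-group sample size until repeated draws of high- and low-performing students differ significantly in a target fraction of comparisons, so a smaller returned n means more efficient group differentiation.

```python
import numpy as np
from scipy import stats

def min_sample_for_differentiation(high, low, alpha=0.05,
                                   power_target=0.80, n_draws=1000, seed=0):
    """Smallest equal-group sample size n at which at least
    `power_target` of resampled comparisons reach significance."""
    rng = np.random.default_rng(seed)
    for n in range(5, min(len(high), len(low)) + 1):
        hits = 0
        for _ in range(n_draws):
            h = rng.choice(high, size=n, replace=True)
            l = rng.choice(low, size=n, replace=True)
            _, p = stats.ttest_ind(h, l, equal_var=False)  # Welch t-test
            hits += p < alpha
        if hits / n_draws >= power_target:
            return n
    return None  # groups not reliably separable within this data

# Hypothetical skill data: partial credit is continuous in [0, 1];
# binary accuracy collapses the same underlying performance to 0/1.
rng = np.random.default_rng(1)
partial_high = rng.beta(8, 2, 500)
partial_low = rng.beta(5, 4, 500)
binary_high = (rng.random(500) < partial_high).astype(float)
binary_low = (rng.random(500) < partial_low).astype(float)

print("partial credit n:", min_sample_for_differentiation(partial_high, partial_low))
print("binary scoring n:", min_sample_for_differentiation(binary_high, binary_low))
```

Because partial credit preserves information that binarization discards, the continuous scores typically separate the groups at a smaller n, which is the pattern the paper reports for eleven of twelve skills.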

Author(s):  
Anna L. Rowe
Nancy J. Cooke

Part of the success of computerized intelligent tutoring systems will be associated with their ability to assess and diagnose students' knowledge in order to direct pedagogical interventions. What is needed is a methodology for identifying general relationships between on-line action patterns and patterns of knowledge derived off-line. Such a methodology would allow an assessment and diagnosis of knowledge, based only on student actions. The focus of this initial research is the development of a means of identifying meaningful action patterns in student-tutor interactions. Actions executed by subjects on a set of verbal troubleshooting tests (Nichols et al., 1989) were summarized using the Pathfinder network scaling procedure (Schvaneveldt, 1990). The results obtained from this work indicate that meaningful patterns of actions can be identified using the Pathfinder procedure. The network patterns are meaningful in the sense that they can differentiate high and low performers as defined by a previous scoring method. In addition, the networks reveal differences between high and low performers suggestive of targets for intervention.
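
Pathfinder network scaling prunes a proximity matrix down to its most salient links. Below is a minimal sketch of the common PFnet(q = n−1, r = ∞) variant, not the authors' implementation, and the four-action distance matrix is invented: an edge survives only if no indirect path connects its endpoints with a smaller worst link.

```python
import numpy as np

def pfnet_r_inf(dist):
    """Boolean adjacency of PFnet(q = n-1, r = inf) for a symmetric
    distance matrix, via a Floyd-Warshall-style minimax-path pass."""
    d = np.asarray(dist, dtype=float)
    minimax = d.copy()
    n = len(d)
    for k in range(n):
        for i in range(n):
            for j in range(n):
                # best "weakest link" route from i to j passing through k
                minimax[i, j] = min(minimax[i, j],
                                    max(minimax[i, k], minimax[k, j]))
    adj = d <= minimax + 1e-12  # edge kept iff no indirect path beats it
    np.fill_diagonal(adj, False)
    return adj

# Invented distances among four troubleshooting actions.
dist = [[0, 1, 4, 5],
        [1, 0, 2, 6],
        [4, 2, 0, 3],
        [5, 6, 3, 0]]
print(pfnet_r_inf(dist).astype(int))  # keeps only the chain 0-1-2-3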


Author(s):  
Vincent Aleven
Jonathan Sewall
Octav Popescu
Michael Ringenberg
Martin van Velsen
...  

2000
Author(s):  
Christine Mitchell
Alan Chappell
W. Gray
Alex Quinn
David Thurman

Author(s):  
Ekaterina Kochmar
Dung Do Vu
Robert Belfer
Varun Gupta
Iulian Vlad Serban
...  

Intelligent tutoring systems (ITS) have been shown to be highly effective at promoting learning compared to other computer-based instructional approaches. However, many ITS rely heavily on expert design and hand-crafted rules, which makes them difficult to build and transfer across domains and limits their potential efficacy. In this paper, we investigate how feedback in a large-scale ITS can be automatically generated in a data-driven way, and more specifically how personalization of feedback can improve student performance outcomes. First, we propose a machine learning approach that generates personalized feedback automatically, taking the individual needs of students into account while alleviating the need for expert intervention and hand-crafted rules. We leverage state-of-the-art machine learning and natural language processing techniques to provide students with personalized feedback in the form of hints and Wikipedia-based explanations. Second, we demonstrate that personalized feedback improves success rates at solving exercises in practice: our personalized feedback model is used in Korbit, a large-scale dialogue-based ITS with around 20,000 students, launched in 2019. We present the results of experiments with students and show that the automated, data-driven, personalized feedback leads to a significant overall improvement of 22.95% in student performance outcomes and substantial improvements in the subjective evaluation of the feedback.
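
To make the feedback-selection idea concrete, one can frame personalization as predicting, per student, which candidate intervention is most likely to yield a solved exercise, then serving the top-ranked candidate. The sketch below is an illustrative stand-in rather than the paper's model; the features, the simulated interaction log, and the feedback types are all hypothetical.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

FEEDBACK_TYPES = ["hint", "wikipedia_explanation", "worked_example"]

# Hypothetical log: (ability, attempts so far, feedback shown) -> solved?
rng = np.random.default_rng(0)
X = rng.random((500, 2))                       # [ability, attempts_norm]
fb = rng.integers(0, len(FEEDBACK_TYPES), 500)
onehot = np.eye(len(FEEDBACK_TYPES))[fb]
features = np.hstack([X, onehot])
# Simulated outcome: stronger students benefit more from terse hints,
# weaker students from fuller explanations (an assumption for the demo).
p = 0.3 + 0.4 * np.where(fb == 0, X[:, 0], 1 - X[:, 0])
y = rng.random(500) < p

model = LogisticRegression(max_iter=1000).fit(features, y)

def personalize(student_features):
    """Return the feedback type with the highest predicted success."""
    cands = [np.hstack([student_features, np.eye(len(FEEDBACK_TYPES))[i]])
             for i in range(len(FEEDBACK_TYPES))]
    probs = model.predict_proba(np.vstack(cands))[:, 1]
    return FEEDBACK_TYPES[int(np.argmax(probs))]

print(personalize(np.array([0.9, 0.2])))  # strong student
print(personalize(np.array([0.2, 0.8])))  # struggling student
```

In a deployed system the same ranking step would draw on logged dialogue features and a far richer candidate pool, but the train-then-rank structure is the core of the data-driven approach the abstract describes.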

