Identifying at-risk students based on the phased prediction model

2019 ◽  
Vol 62 (3) ◽  
pp. 987-1003 ◽  
Author(s):  
Yan Chen ◽  
Qinghua Zheng ◽  
Shuguang Ji ◽  
Feng Tian ◽  
Haiping Zhu ◽  
...  
2018 ◽  
Vol 57 (3) ◽  
pp. 547-570 ◽  
Author(s):  
Wanli Xing ◽  
Dongping Du

Massive open online courses (MOOCs) show great potential to transform traditional education through the Internet. However, the high attrition rates in MOOCs are often cited as a scale-efficacy tradeoff. Traditional educational approaches are usually unable to identify such a large number of at-risk students in danger of dropping out in time to support effective intervention design. While building dropout prediction models with learning analytics is a promising way to inform intervention design for these at-risk students, the results of current model construction methods do not enable personalized intervention for these students. In this study, we take an initial step toward optimizing dropout prediction model performance for intervention personalization for at-risk students in MOOCs. Specifically, based on a temporal prediction mechanism, this study proposes using a deep learning algorithm to construct the dropout prediction model and to produce a predicted dropout probability for each student. By taking advantage of the power of deep learning, this approach not only constructs more accurate dropout prediction models than baseline algorithms but also offers a way to personalize and prioritize intervention for at-risk students in MOOCs through individual dropout probabilities. The findings from this study and their implications are then discussed.
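The core idea of the abstract, a model that emits a per-student dropout probability which is then used to rank students for intervention, can be sketched minimally. This is not the paper's architecture or data; the features, labels, and tiny one-hidden-layer network below are invented stand-ins for its deep model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic weekly activity features for 200 students (hypothetical stand-ins
# for the clickstream features a MOOC platform would log).
n, d = 200, 4
X = rng.random((n, d))
# Illustrative labeling rule: students with low overall activity drop out.
y = (X.mean(axis=1) < 0.4).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Tiny one-hidden-layer network trained by full-batch gradient descent on
# cross-entropy loss -- a minimal stand-in for the paper's deep model.
W1 = rng.normal(0, 0.5, (d, 8)); b1 = np.zeros(8)
W2 = rng.normal(0, 0.5, 8);      b2 = 0.0
lr = 0.5
for _ in range(500):
    h = np.tanh(X @ W1 + b1)            # hidden layer
    p = sigmoid(h @ W2 + b2)            # predicted dropout probability
    g = (p - y) / n                     # gradient of mean cross-entropy
    gW2 = h.T @ g; gb2 = g.sum()
    gh = np.outer(g, W2) * (1 - h**2)   # backprop through tanh
    W1 -= lr * (X.T @ gh); b1 -= lr * gh.sum(axis=0)
    W2 -= lr * gW2;        b2 -= lr * gb2

probs = sigmoid(np.tanh(X @ W1 + b1) @ W2 + b2)
# Prioritize intervention: contact the highest-probability students first.
priority = np.argsort(probs)[::-1]
print(probs[priority[:5]])
```

The ranking step is what distinguishes this from a plain binary classifier: the continuous probability lets an instructor triage a large cohort rather than treat all flagged students identically.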


Author(s):  
Mu Lin Wong ◽  
Senthil S.

Academic performance prediction models must be not only accurate but also timely, so that at-risk students can be identified early enough to receive remediation. Heart rate data of 50 students in three main courses were collected, processed, and analyzed to distinguish excellent students from at-risk students. Three of the 12 heart rate attributes were chosen to calculate threshold values, which were used to predict at-risk students; half of the at-risk students were identified after week 5. The datasets were then rebalanced, and using four data mining classifiers, six attributes were identified as the best attributes for prediction model development. After dimensionality reduction, classification again identified half of the at-risk students by about week 5 of the 12-week semester. J48 proved the most robust classifier, compared with JRip, Multilayer Perceptron, and RandomForest, most often making accurate predictions about at-risk students earliest.
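The threshold step described above can be sketched in a few lines. The abstract does not give the actual attributes or threshold formula, so the readings, the choice of a single attribute, and the "two standard deviations above the excellent-student mean" rule below are all assumptions for illustration.

```python
import statistics

# Hypothetical heart-rate readings for known excellent students and for the
# cohort being screened; values and the 2-SD rule are invented.
excellent = [62, 64, 61, 63, 65, 60]
cohort = {"s1": 63, "s2": 78, "s3": 61, "s4": 74, "s5": 66}

mean = statistics.mean(excellent)
sd = statistics.stdev(excellent)
threshold = mean + 2 * sd   # assumed rule: 2 SDs above the excellent mean

at_risk = sorted(s for s, hr in cohort.items() if hr > threshold)
print(at_risk)  # → ['s2', 's4']
```

In practice such a threshold would be computed per attribute and per week, which is what allows the early (week 5) identification the abstract reports.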


2020 ◽  
Author(s):  
Nitin Puri ◽  
Sydney Graham-Smith ◽  
Michael McCarthy ◽  
Bobby Miller

Abstract

Purpose: We have observed that students' performance in our PreClerkship curriculum does not align well with their USMLE STEP1 scores. Students at risk of failing or underperforming on STEP1 have often excelled in our institutional assessments. We sought to test the validity and reliability of our course assessments in predicting STEP1 scores and, in the process, to generate and validate a more accurate prediction model for STEP1 performance.

Methods: Pre-matriculation and course assessment data for the Class of 2020 (n = 76) were used to generate a stepwise STEP1 prediction model, which was tested on the students of the Class of 2021 (n = 71). Predictions were generated for the end of each course in the programming language R. For the Class of 2021, predicted STEP1 scores were correlated with actual STEP1 scores, and data agreement was tested with mean-difference plots. A similar model was generated and tested for the Class of 2022.

Results: STEP1 predictions based on pre-matriculation data are unreliable and fail to identify at-risk students (R2 = 0.02). STEP1 predictions for most year 1 courses (anatomy, biochemistry, physiology) correlate poorly with students' actual STEP1 scores (R2 = 0.30). STEP1 predictions improve for year 2 courses (microbiology, pathology, and pharmacology), but reliable predictions are based on truly integrated courses with customized NBMEs as comprehensive exams (R2 = 0.66). Predictions based on these NBMEs and integrated courses are reproducible for the Class of 2022.

Conclusion: MCAT and undergraduate GPA are poor predictors of students' STEP1 scores. Partially integrated courses with biweekly assessments do not promote problem-solving skills and leave students at risk of failing STEP1. Only truly integrated courses with comprehensive assessments are reliable indicators of students' STEP1 preparation.
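The validation procedure in the Methods, fit a regression on one class, predict the next class, then compute R-squared and the mean difference used in agreement plots, can be sketched as follows. The original model was built in R from institutional data; the course names, score distributions, and plain least squares (in place of stepwise selection) below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical course averages (predictors) and STEP1 scores (outcome) for a
# training class (n = 76) and a validation class (n = 71); purely synthetic.
n_train, n_val = 76, 71
courses_train = rng.normal(80, 8, (n_train, 3))   # e.g. micro, path, pharm
step1_train = 150 + 0.9 * courses_train.mean(axis=1) + rng.normal(0, 5, n_train)

# Ordinary least squares with an intercept (stand-in for stepwise selection).
A = np.column_stack([np.ones(n_train), courses_train])
coef, *_ = np.linalg.lstsq(A, step1_train, rcond=None)

# Validate on the second class: R^2 plus the mean difference that would be
# plotted in a Bland-Altman-style agreement plot.
courses_val = rng.normal(80, 8, (n_val, 3))
step1_val = 150 + 0.9 * courses_val.mean(axis=1) + rng.normal(0, 5, n_val)
pred = np.column_stack([np.ones(n_val), courses_val]) @ coef
ss_res = np.sum((step1_val - pred) ** 2)
ss_tot = np.sum((step1_val - step1_val.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
mean_diff = (pred - step1_val).mean()
print(round(r2, 2), round(mean_diff, 2))
```

Testing on a held-out class rather than on the training class is the key design choice here; it is what exposed the pre-matriculation model's failure (R2 = 0.02) in the study.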


1998 ◽  
Vol 29 (2) ◽  
pp. 109-116 ◽  
Author(s):  
Margie Gilbertson ◽  
Ronald K. Bramlett

The purpose of this study was to investigate informal phonological awareness measures as predictors of first-grade broad reading ability. Subjects were 91 former Head Start students who were administered standardized assessments of cognitive ability and receptive vocabulary, and informal phonological awareness measures during kindergarten and early first grade. Regression analyses indicated that three phonological awareness tasks, Invented Spelling, Categorization, and Blending, were the most predictive of standardized reading measures obtained at the end of first grade. Discriminant analyses indicated that these three phonological awareness tasks correctly identified at-risk students with 92% accuracy. Clinical use of a cutoff score for these measures is suggested, along with general intervention guidelines for practicing clinicians.
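The suggested clinical cutoff score could take a form like the sketch below: sum a child's scores on the three predictive tasks and flag those below a cutoff. The task names match the study, but the score scales and the cutoff value are invented, since the abstract does not report them.

```python
# Hypothetical screening rule: flag a student whose combined score on the
# three predictive tasks falls below an assumed cutoff of 30.
def flag_at_risk(scores, cutoff=30):
    """scores: dict with 'invented_spelling', 'categorization', 'blending'."""
    total = (scores["invented_spelling"]
             + scores["categorization"]
             + scores["blending"])
    return total < cutoff

print(flag_at_risk({"invented_spelling": 8,
                    "categorization": 9,
                    "blending": 10}))  # → True (total 27 < 30)
```

A single-cutoff rule like this is what makes the measure usable by practicing clinicians without a full discriminant analysis at screening time.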

