At-Risk Students
Recently Published Documents


TOTAL DOCUMENTS

784
(FIVE YEARS 192)

H-INDEX

32
(FIVE YEARS 4)

2022 ◽  
Author(s):  
Cameron I. Cooper

Abstract Nationally, more than one-third of students enrolling in introductory computer science programming courses (CS101) do not succeed. To improve student success rates, this research team used supervised machine learning to identify students who are “at-risk” of not succeeding in CS101 at a two-year public college. The resultant predictive model accurately identifies approximately 99% of “at-risk” students in an out-of-sample test data set. The programming instructor piloted the use of the model’s predictive factors as early-alert triggers to intervene with individualized outreach and support across three course sections of CS101 in fall 2020. The outcome of this pilot study was a 23% increase in student success and a 7.3-percentage-point decrease in the DFW rate. More importantly, this study identified academic early-alert triggers for CS101. Specifically, the first two graded programs are of paramount importance for student success in the course.
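The abstract's key finding, that the first two graded programs are the strongest early-alert signal, lends itself to a simple trigger rule. The sketch below is illustrative only: the 70-point threshold, field names, and scores are assumptions, not the authors' fitted model.

```python
# Early-alert trigger based on the first two graded programs.
# Threshold and data are illustrative assumptions, not the study's model.

def early_alert(program1_score: float, program2_score: float,
                passing_threshold: float = 70.0) -> bool:
    """Flag a student as at-risk if either of the first two graded
    programs falls below the passing threshold."""
    return (program1_score < passing_threshold
            or program2_score < passing_threshold)

students = {
    "s001": (85.0, 92.0),
    "s002": (55.0, 88.0),   # weak first program -> flagged
    "s003": (74.0, 61.0),   # weak second program -> flagged
}
flagged = [sid for sid, scores in students.items() if early_alert(*scores)]
```

In practice the flag would feed the kind of individualized outreach the pilot describes, rather than act as a grade prediction on its own.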


2022 ◽  
Vol 6 (1) ◽  
pp. 6
Author(s):  
Gomathy Ramaswami ◽  
Teo Susnjak ◽  
Anuradha Mathrani

Poor academic performance of students is a concern in the educational sector, especially if it leads to students being unable to meet minimum course requirements. With timely prediction of students’ performance, however, educators can detect at-risk students, thereby enabling early interventions to support these students in overcoming their learning difficulties. The majority of studies have taken the approach of developing individual prediction models that each target a single course. These models are tailored to specific attributes of each course amongst a very diverse set of possibilities. While this approach can yield accurate models in some instances, it has limitations: overfitting can occur when course data is small or when new courses are devised, and maintaining a large suite of per-course models is a significant overhead. This issue can be tackled by developing a generic, course-agnostic predictive model that captures more abstract patterns and can operate across all courses, irrespective of their differences. This study demonstrates how such a generic predictive model can be developed to identify at-risk students across a wide variety of courses. Experiments were conducted using a range of algorithms, with the generic model achieving effective accuracy. The findings showed that the CatBoost algorithm performed best on our dataset across the F-measure, ROC (receiver operating characteristic) curve, and AUC scores; it is therefore an excellent candidate algorithm for this domain, given its ability to seamlessly handle categorical and missing data, which are frequent features of educational datasets.
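A prerequisite for any course-agnostic model is that features be comparable across courses. One common way to achieve this (an illustrative sketch of the general idea, not the paper's actual pipeline) is to standardise each raw score within its own course, so a single model sees "distance from the course norm" rather than raw marks:

```python
# Per-course z-scoring so one model can operate across courses.
from statistics import mean, pstdev

def zscore_within_course(records):
    """records: list of (course_id, raw_score) pairs.
    Returns (course_id, z) pairs with scores standardised per course."""
    by_course = {}
    for course, score in records:
        by_course.setdefault(course, []).append(score)
    # Guard against zero variance with `or 1.0`.
    stats = {c: (mean(v), pstdev(v) or 1.0) for c, v in by_course.items()}
    return [(c, (s - stats[c][0]) / stats[c][1]) for c, s in records]

data = [("CS101", 40), ("CS101", 60), ("CS101", 80),
        ("BIO200", 70), ("BIO200", 75), ("BIO200", 80)]
normalised = zscore_within_course(data)
```

After this step, a 60 in CS101 and a 75 in BIO200 both map to z = 0 (the course average), which is exactly the kind of abstraction a generic model needs.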


Learning analytics improves the learning field in higher education by using educational data to extract useful patterns and make better decisions. Identifying potential at-risk students may help instructors and academic guidance staff improve students’ performance and the achievement of learning outcomes. The aim of this research study is to predict, at an early phase, a student’s failure in a particular course using standards-based grading. Several machine learning techniques were implemented to predict student failure: Support Vector Machines, Multilayer Perceptrons, Naïve Bayes, and decision trees. The results for each technique show the ability of machine learning algorithms to predict student failure accurately after the third week and before the course drop-out deadline. This study provides strong knowledge of student performance across all courses. It also gives faculty members the ability to help at-risk students by focusing on them and providing the support necessary to improve their performance and avoid failure.
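The core idea, predicting failure from the first three weeks of standards-based grades, can be sketched with a minimal threshold rule. This is a stand-in for the study's trained classifiers (SVM, MLP, Naïve Bayes, decision tree); the 1–4 mastery scale and the 2.0 cutoff are assumptions for illustration.

```python
# Week-3 failure prediction from standards-based grades.
# Scale (1-4 mastery) and cutoff are illustrative assumptions.

def predict_failure(weekly_standards, cutoff: float = 2.0,
                    weeks: int = 3) -> bool:
    """Predict failure if the mean mastery score over the first
    `weeks` entries falls below `cutoff`."""
    early = weekly_standards[:weeks]
    return sum(early) / len(early) < cutoff

struggling = predict_failure([1.5, 2.0, 1.0])   # low early mastery
thriving   = predict_failure([3.0, 3.5, 4.0])   # strong early mastery
```

A real model would learn the decision boundary from historical outcomes rather than use a fixed cutoff, but the input shape, a few weeks of standards scores per student, is the same.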


2021 ◽  
Vol 16 (24) ◽  
pp. 255-272
Author(s):  
Edmund Evangelista

Virtual Learning Environments (VLE), such as Moodle and Blackboard, store vast amounts of data that help identify students' performance and engagement. As a result, researchers have been focusing their efforts on assisting educational institutions by providing machine learning models to predict at-risk students and improve their performance. However, constructing a model that ultimately provides accurate predictions requires an efficient approach. Consequently, this study proposes a hybrid machine learning framework to predict students' performance using eight classification algorithms and three ensemble methods (Bagging, Boosting, Voting) to determine the best-performing predictive model. In addition, this study used filter-based and wrapper-based feature selection techniques to select the features of the dataset most related to students' performance. The obtained results reveal that the ensemble methods recorded higher predictive accuracy when compared to single classifiers. Furthermore, the accuracy of the models improved due to the feature selection techniques utilized in this study.
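Of the three ensemble methods the study compares, hard (majority) voting is the simplest to show. The sketch below combines three toy stand-in "classifiers"; a real pipeline would vote over trained library estimators, and the thresholds and feature names here are invented for illustration.

```python
# Hard-voting ensemble over three stand-in classifiers (1 = at-risk).
from collections import Counter

def clf_attendance(x): return int(x["attendance"] < 0.6)
def clf_quiz(x):       return int(x["quiz_avg"] < 50)
def clf_logins(x):     return int(x["weekly_logins"] < 2)

def hard_vote(classifiers, x) -> int:
    """Return the majority label across all classifiers."""
    votes = [clf(x) for clf in classifiers]
    return Counter(votes).most_common(1)[0][0]

student = {"attendance": 0.55, "quiz_avg": 62, "weekly_logins": 1}
label = hard_vote([clf_attendance, clf_quiz, clf_logins], student)
```

The intuition behind the study's finding, ensembles beating single classifiers, is visible even here: a student who trips only one rule is not flagged, so individual classifiers' errors tend to cancel rather than accumulate.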


Author(s):  
Rineke Keijzer ◽  
Roeland van der Rijst ◽  
Erik van Schooten ◽  
Wilfried Admiraal

Abstract Background: Mentors guide students in their challenges at school and in life. At-risk students in last-resort programs, who are at high risk of leaving school unqualified, are especially in need of highly competent and adaptive mentors. This study therefore aimed to identify mentor qualities, as perceived by at-risk students and their mentors, that meet students’ needs and mentors’ capabilities. Methods: Face-to-face individual semi-structured interviews were conducted with students and mentors of two specialized programs in the Netherlands. Sensitizing concepts derived from the literature were used to identify themes. Data analysis was conducted using thematic analysis and was validated by performing an audit. Results: The mentor qualities that at-risk students and their mentors reported were classified into three themes. Mentor tasks consisted of guiding and motivating students and providing them with tangible methods of support. Relationships between mentor and student were based on levels of respect, equality, and bonding. Characteristics of mentors related to empathy, care, and trust. Research implications: Emotional responsiveness deserves further exploration, as it appears to be an underlying concept of being a good mentor. Future research might explore mentor qualities in the context of other last-resort programs for at-risk students. Practical implications: Findings imply that mentors have to walk a tightrope between keeping professional distance and being sensitive, suggesting that constant attention to their professional development is needed. Originality: In the context of last-resort programs, an alternative perspective on mentoring at-risk students is outlined, based on the perceptions of both students and mentors.


2021 ◽  
Vol 48 (6) ◽  
pp. 720-728
Author(s):  
Wenting Weng ◽  
Nicola L. Ritter ◽  
Karen Cornell ◽  
Molly Gonzales

Over the past decade, the field of education has seen stark changes in the way that data are collected and leveraged to support high-stakes decision-making. Utilizing big data as a meaningful lens to inform teaching and learning can increase academic success. Data-driven research has been conducted to understand student learning performance, such as predicting at-risk students at an early stage and recommending tailored interventions through support services. However, few studies in veterinary education have adopted Learning Analytics. This article examines the adoption of Learning Analytics using retrospective data from the first-year professional Doctor of Veterinary Medicine program. The article gives detailed examples of predicting student performance in six courses from week 0 (i.e., before classes started) to week 14 of the Spring 2018 semester. The weekly models for each course showed how the prediction results changed over time, as well as how the predictions compared with students’ actual performance. From the prediction models, at-risk students were successfully identified at an early stage, helping instructors know to pay closer attention to those students from that point on.
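The article's week-0-to-week-14 setup amounts to retraining (or re-evaluating) a model each week as more grade data accumulates. The sketch below mimics that structure with a simple running-average rule standing in for a real learned model; the grades, cutoff, and three-week horizon are illustrative assumptions.

```python
# One prediction per week, using only the grades available by that week.

def weekly_predictions(grade_matrix, n_weeks: int, cutoff: float = 60.0):
    """grade_matrix: {student: [week1_score, week2_score, ...]}.
    Returns {week: set of students flagged at-risk as of that week}."""
    flags = {}
    for w in range(1, n_weeks + 1):
        flags[w] = {s for s, g in grade_matrix.items()
                    if sum(g[:w]) / w < cutoff}
    return flags

grades = {"a": [80, 85, 90],   # consistently strong
          "b": [50, 55, 75],   # weak start, recovers
          "c": [70, 40, 45]}   # declines over time
flags = weekly_predictions(grades, 3)
```

Note how the flagged set changes week to week ("b" drops off as it recovers, "c" appears as it declines), which is precisely the week-over-week movement the article tracks against actual outcomes.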


Author(s):  
Ahmed Bagabir ◽  
◽  
Mohammad Zaino ◽  
Ahmed Abutaleb ◽  
Ahmed Fagehi ◽  
...  

It is suggested that this study contributes by establishing a robust methodology for analyzing the longitudinal outcomes of higher education. The current research uses multinomial logistic regression. To the knowledge of the authors, this is the first logistic regression analysis performed at Saudi higher education institutions. The study can help decision-makers take action to improve the academic performance of at-risk students. The analyses are based on enrollment and completion data for 5,203 undergraduate students in the colleges of engineering and medicine. The observation period spanned ten academic years, from 2010 to 2020. Four outcomes were identified for students: (i) degree completion on time, (ii) degree completion with delay, (iii) dropout, and (iv) still enrolled in a program. The objectives are twofold: (i) to study the present situation by measuring graduation and retention rates with benchmarking, and (ii) to determine the effect of twelve continuous and dummy predictors (covariates) on outcomes. The present results show that the pre-admission covariates only slightly affect performance in higher education programs. The results indicate that the most important indicator of graduation is the student's achievement in the first year of the program. Finally, it is highly suggested that initiatives be taken to increase graduation and retention rates and to review the admissions policy currently in place.
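Multinomial logistic regression, as used here, assigns each student a probability for each of the four outcomes via a softmax over per-class linear scores. The sketch below shows that mechanism with a single predictor (first-year achievement, the study's strongest indicator); the coefficients are made up for illustration and are not the fitted Saudi cohort model.

```python
# Softmax mechanics of multinomial logistic regression over four outcomes.
# Coefficients are hypothetical, chosen so higher first-year GPA
# favours on-time completion.
import math

OUTCOMES = ["on_time", "delayed", "dropout", "still_enrolled"]

def softmax(scores):
    m = max(scores)                          # subtract max for stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def predict(first_year_gpa, weights, intercepts):
    """Per-class linear score w*x + b, then softmax to probabilities."""
    scores = [w * first_year_gpa + b for w, b in zip(weights, intercepts)]
    return dict(zip(OUTCOMES, softmax(scores)))

probs = predict(3.5,
                weights=[1.2, 0.4, -1.0, -0.2],
                intercepts=[-2.0, 0.5, 1.0, 0.0])
```

With more than two outcome categories, this softmax formulation is what distinguishes the multinomial model from ordinary (binary) logistic regression.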


2021 ◽  
Vol 5 (4) ◽  
pp. 71
Author(s):  
Balqis Albreiki ◽  
Tetiana Habuza ◽  
Zaid Shuqfa ◽  
Mohamed Adel Serhani ◽  
Nazar Zaki ◽  
...  

Detecting at-risk students provides advanced benefits for improving student retention rates, effective enrollment management, alumni engagement, targeted marketing, and institutional effectiveness. One of the success factors for educational institutions is the accurate and timely identification and prioritization of students requiring assistance. The main objective of this paper is to detect at-risk students as early as possible in order to take appropriate corrective measures, taking into consideration the most important and influential attributes in students’ data. This paper emphasizes the use of a customized rule-based system (RBS) to identify and visualize at-risk students at early stages throughout course delivery using the Risk Flag (RF). Moreover, it can serve as a warning tool for instructors to identify those students who may struggle to grasp learning outcomes. The module allows the instructor to have a dashboard that graphically depicts the students’ performance in different coursework components. The at-risk student will be distinguished (flagged), and remedial actions will be communicated to the student, instructor, and stakeholders. The system suggests remedial actions based on the severity of the case and the time at which the student is flagged. It is expected to improve students’ achievement and success, and it could also have positive impacts on under-performing students, educators, and academic institutions in general.
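The rule-based flagging idea can be sketched directly: each rule inspects one coursework component, and the suggested remedial action escalates with both the number of rules fired and how late in the term the flag is raised, mirroring the severity-and-timing logic the abstract describes. Rule thresholds, week cutoffs, and action names below are illustrative assumptions, not the paper's actual RBS.

```python
# Minimal rule-based Risk Flag: (rule name, predicate) pairs over
# coursework components, with severity escalating by count and timing.

RULES = [
    ("low quiz average",   lambda s: s["quiz_avg"] < 50),
    ("missed assignments", lambda s: s["assignments_missed"] >= 2),
    ("low attendance",     lambda s: s["attendance"] < 0.7),
]

def risk_flag(student, week: int):
    """Return None if no rule fires, else a flag with triggers,
    severity, and a suggested remedial action."""
    fired = [name for name, rule in RULES if rule(student)]
    if not fired:
        return None
    severity = "high" if len(fired) >= 2 or week > 8 else "moderate"
    action = ("advisor referral" if severity == "high"
              else "instructor outreach")
    return {"triggers": fired, "severity": severity, "action": action}

flag = risk_flag({"quiz_avg": 45, "assignments_missed": 3,
                  "attendance": 0.9}, week=5)
```

Because the rules are explicit, the same structure can drive the dashboard view the abstract mentions: each fired trigger maps directly to the coursework component the instructor should inspect.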


2021 ◽  
Author(s):  
Cameron I. Cooper ◽  
Kamea J. Cooper ◽  
Cameron Collyer

Abstract Nationally, more than one-third of students enrolling in introductory computer science programming courses (CS101) do not succeed. To improve student success rates, this research team used supervised machine learning to identify students who are “at-risk” of not succeeding in CS101 at a two-year public college. The resultant predictive model accurately identifies approximately 99% of “at-risk” students in an out-of-sample test data set. The programming instructor piloted the use of the model’s predictive factors as early-alert triggers to intervene with individualized outreach and support across three course sections of CS101 in fall 2020. The outcome of this pilot study was a 23% increase in student success and a 7.3-percentage-point decrease in the DFW rate. More importantly, this study identified academic early-alert triggers for CS101. Specifically, the first two graded programs are of paramount importance for student success in the course.

