An Acute Kidney Injury Prediction Model Based on Ensemble Learning Algorithm

Author(s): Yuan Wang, Yake Wei, Qin Wu, Hao Yang, Jingwei Li
2016, Vol 67 (14), pp. 1715-1722
Author(s): Taku Inohara, Shun Kohsaka, Hiroaki Miyata, Ikuko Ueda, Yuichiro Maekawa, ...

2018, Vol 46 (7), pp. 1070-1077
Author(s): Jay L. Koyner, Kyle A. Carey, Dana P. Edelson, Matthew M. Churpek

2021
Author(s): Yu Tang, Qi Dai, Mengyuan Yang, Lifang Chen

Abstract In traditional ensemble learning for software defect prediction, the base predictors have many parameters that are difficult to optimize, so the model's best performance cannot be reached. This work proposes an ensemble learning algorithm for software defect prediction that uses an improved sparrow search algorithm to optimize an extreme learning machine; the method has three parts. First, the improved sparrow search algorithm (ISSA) is proposed to improve optimization ability and convergence speed, and its performance is tested on eight benchmark functions. Second, ISSA is used to optimize the extreme learning machine (ISSA-ELM) and improve its prediction ability. Finally, the optimized learners are combined with the Bagging algorithm (ISSA-ELM-Bagging), which improves the prediction performance of ELM on software defect datasets. Experiments are carried out on six software defect datasets. The results show that the ISSA-ELM-Bagging ensemble is significantly better than the four comparison algorithms on the six evaluation indexes of Precision, Recall, F-measure, MCC, Accuracy, and G-mean, with better stability and generalization ability.
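
As a rough, non-authoritative sketch of the pipeline this abstract describes, the Python below bags extreme learning machine (ELM) base learners whose random hidden-layer weights are chosen by a search step. The abstract does not give the ISSA update rules, so a plain random search stands in for ISSA here; the ELM solve, the bootstrap resampling, and the averaged vote follow their standard textbook forms. All function names and the synthetic data are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of the ISSA-ELM-Bagging idea, assuming a binary
# defect-prediction task with feature matrix X and labels y in {0, 1}.
# A simple random search stands in for ISSA (not specified in the abstract);
# swap in a real SSA/ISSA implementation for `tune_hidden_layer`.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, W, b, reg=1e-3):
    """ELM: random hidden layer, closed-form ridge solve for output weights."""
    H = np.tanh(X @ W + b)
    return np.linalg.solve(H.T @ H + reg * np.eye(H.shape[1]), H.T @ y)

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

def tune_hidden_layer(X, y, n_hidden=32, n_trials=50):
    """Placeholder for ISSA: random search over hidden weights/biases,
    keeping the candidate with the lowest training error (a simplification)."""
    best, best_err = None, np.inf
    for _ in range(n_trials):
        W = rng.normal(size=(X.shape[1], n_hidden))
        b = rng.normal(size=n_hidden)
        beta = elm_fit(X, y, W, b)
        err = np.mean((elm_predict(X, W, b, beta) - y) ** 2)
        if err < best_err:
            best, best_err = (W, b, beta), err
    return best

def bagging_elm(X, y, n_estimators=10):
    """Bagging: train each tuned ELM on a bootstrap resample."""
    n = len(X)
    return [tune_hidden_layer(X[idx], y[idx])
            for idx in (rng.integers(0, n, size=n) for _ in range(n_estimators))]

def bagging_predict(models, X):
    """Average the base ELM outputs and threshold at 0.5 (soft majority vote)."""
    scores = np.mean([elm_predict(X, W, b, beta) for W, b, beta in models], axis=0)
    return (scores >= 0.5).astype(int)

# Hypothetical usage with synthetic data:
X = rng.normal(size=(200, 20))
y = (X[:, 0] + X[:, 1] > 0).astype(float)
ensemble = bagging_elm(X, y)
print("training accuracy:", np.mean(bagging_predict(ensemble, X) == y))
```

In a real experiment, the random-search step would be replaced by the paper's ISSA, and candidates would be scored on a validation split (or the paper's fitness function) rather than on training error.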


2013, Vol 22 (04), pp. 1350025
Author(s): Byungwoo Lee, Sungha Choi, Byonghwa Oh, Jihoon Yang, Sungyong Park

We present a new ensemble learning method that employs a set of regional classifiers, each of which learns to handle a subset of the training data. We split the training data and generate classifiers for different regions in the feature space. When classifying an instance, we apply a weighted voting scheme among the classifiers that include the instance in their region. We used 11 datasets to compare the performance of our new ensemble method with that of single classifiers as well as other ensemble methods such as RBE, bagging and Adaboost. As a result, we found that the performance of our method is comparable to that of Adaboost and bagging when the base learner is C4.5. In the remaining cases, our method outperformed other approaches.
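
To make the regional scheme concrete, here is a minimal sketch assuming the regions come from k-means clustering, that an instance is handled by its few nearest regions, and that those regions' classifiers vote with inverse-distance weights. The paper's actual splitting rule, weighting scheme, and base learner (C4.5) may differ; scikit-learn's KMeans and DecisionTreeClassifier are used here purely as stand-ins.

```python
# Sketch of an ensemble of regional classifiers with weighted voting.
# Assumptions (not from the paper): k-means regions, inverse-distance
# weights, decision trees as base learners.
import numpy as np
from sklearn.cluster import KMeans
from sklearn.tree import DecisionTreeClassifier

class RegionalEnsemble:
    def __init__(self, n_regions=5, n_vote=3, random_state=0):
        self.n_regions = n_regions
        self.n_vote = n_vote          # how many nearby regions vote per instance
        self.random_state = random_state

    def fit(self, X, y):
        # Split the training data into regions of the feature space.
        self.kmeans = KMeans(n_clusters=self.n_regions, n_init=10,
                             random_state=self.random_state).fit(X)
        labels = self.kmeans.labels_
        # One base classifier per region, trained only on that region's data.
        self.classifiers = []
        for r in range(self.n_regions):
            clf = DecisionTreeClassifier(random_state=self.random_state)
            clf.fit(X[labels == r], y[labels == r])
            self.classifiers.append(clf)
        return self

    def predict(self, X):
        # Distance of each instance to every region centroid.
        dists = self.kmeans.transform(X)
        preds = []
        for i, x in enumerate(X):
            nearest = np.argsort(dists[i])[: self.n_vote]
            votes = {}
            for r in nearest:
                weight = 1.0 / (dists[i, r] + 1e-9)   # closer regions count more
                label = self.classifiers[r].predict(x.reshape(1, -1))[0]
                votes[label] = votes.get(label, 0.0) + weight
            preds.append(max(votes, key=votes.get))
        return np.array(preds)
```

The `n_vote` parameter controls how many regional classifiers are consulted per instance; with `n_vote=1` the scheme reduces to pure nearest-region classification.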

