Machine learning models for prediction of postoperative ileus in patients undergoing laparoscopic colorectal surgery

2020 ◽  
Author(s):  
Xueyan Li ◽  
Genshan Ma ◽  
Xiaobo Qian ◽  
Yamou Wu ◽  
Xiaochen Huang ◽  
...  

Abstract Background: We aimed to assess the performance of machine learning algorithms for predicting risk factors of postoperative ileus (POI) in patients who underwent laparoscopic colorectal surgery for malignant lesions. Methods: We conducted a retrospective observational study of 637 patients at Suzhou Hospital of Nanjing Medical University. Four machine learning algorithms (logistic regression, decision tree, random forest, and gradient boosting decision tree) were used to predict risk factors of POI. The cases were randomly divided into training and testing sets at a ratio of 8:2. The performance of each model was evaluated by the area under the receiver operating characteristic curve (AUC), precision, recall, and F1-score. Results: The incidence of POI in this study was 19.15% (122/637). The gradient boosting decision tree reached the highest AUC (0.76) and was the best model for POI risk prediction. In addition, its importance matrix showed that the five most important variables were time to first passage of flatus, opioid use during POD 3, duration of surgery, height, and weight. Conclusions: The gradient boosting decision tree was the optimal model for predicting the risk of POI in patients who underwent laparoscopic colorectal surgery for malignant lesions, and our results could inform clinical guidelines on POI risk prediction.
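The modeling pipeline described in the abstract — an 8:2 random split, a gradient boosting decision tree, and evaluation by AUC, precision, recall, and F1-score — can be sketched with scikit-learn. The cohort data are not public, so the features and labels below are synthetic stand-ins; only the cohort size (637) and the roughly 19% POI rate are taken from the abstract.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score, precision_score, recall_score, f1_score

rng = np.random.default_rng(0)
n = 637                                  # cohort size from the abstract
X = rng.normal(size=(n, 10))             # placeholder clinical features
y = (rng.random(n) < 0.19).astype(int)   # ~19% POI incidence, synthetic labels

# 8:2 random split, as described in the study
X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.2, random_state=0, stratify=y)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
prob = model.predict_proba(X_te)[:, 1]
pred = model.predict(X_te)

metrics = {
    "AUC": roc_auc_score(y_te, prob),
    "precision": precision_score(y_te, pred, zero_division=0),
    "recall": recall_score(y_te, pred, zero_division=0),
    "F1": f1_score(y_te, pred, zero_division=0),
}
print(metrics)
```

On real clinical variables the same skeleton applies unchanged; only the feature matrix and outcome vector differ.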

2019 ◽  
Author(s):  
Xueyan Li ◽  
Genshan Ma ◽  
Xiaobo Qian ◽  
Yamou Wu ◽  
Xiaochen Huang ◽  
...  

Abstract Background Machine learning may predict postoperative ileus (POI) in patients who underwent laparoscopic colorectal surgery for malignant lesions. Methods We used five machine learning algorithms (logistic regression, decision tree, random forest, gradient boosting, and GBM), with 28 explanatory variables, to predict POI. The samples were randomly divided into training and testing groups at a ratio of 8:2. The models were evaluated by the area under the receiver operating characteristic curve (AUC), F1-measure, accuracy, recall, and MSE. Results A total of 637 patients were enrolled in this study, and 122 (19.15%) of them had POI. Gradient boosting and GBM were the most accurate in the training and testing groups, respectively. The F1-score of gradient boosting was the highest in the training group (F1 = 0.710526), and the F1-score of GBM was the highest in the testing group (F1 = 0.500000). In addition, the importance matrix of the GBDT model showed that the five most important variables for POI were time to first passage of flatus or stool, cumulative dose of rescue opioids used on postoperative day 3 (POD 3), duration of surgery, height, and weight. Conclusions Machine learning algorithms, especially gradient boosting and GBM, may predict the occurrence of POI in patients who underwent laparoscopic colorectal surgery for malignant lesions. Moreover, time to first passage of flatus or stool, cumulative dose of rescue opioids used during POD 3, duration of surgery, height, and weight play an important role in the development of POI.


2019 ◽  
Author(s):  
Lei Zhang ◽  
Xianwen Shang ◽  
Subhashaan Sreedharan ◽  
Xixi Yan ◽  
Jianbin Liu ◽  
...  

BACKGROUND Previous conventional models for the prediction of diabetes could be updated by incorporating the increasing amount of health data available and new risk prediction methodology. OBJECTIVE We aimed to develop a substantially improved diabetes risk prediction model using sophisticated machine-learning algorithms based on a large retrospective population cohort of over 230,000 people who were enrolled in the study during 2006-2017. METHODS We collected demographic, medical, behavioral, and incidence data for type 2 diabetes mellitus (T2DM) in 236,684 diabetes-free participants recruited from the 45 and Up Study. We predicted and compared the risk of diabetes onset in these participants at 3, 5, 7, and 10 years based on three machine-learning approaches and the conventional regression model. RESULTS Overall, 6.05% (14,313/236,684) of the participants developed T2DM during an average 8.8-year follow-up period. The 10-year diabetes incidence in men was 8.30% (8.08%-8.49%), which was significantly higher (odds ratio 1.37, 95% CI 1.32-1.41) than that in women at 6.20% (6.00%-6.40%). The incidence of T2DM was doubled in individuals with obesity (men: 17.78% [17.05%-18.43%]; women: 14.59% [13.99%-15.17%]) compared with that of nonobese individuals. The gradient boosting machine model showed the best performance among the four models (area under the curve of 79% in 3-year prediction and 75% in 10-year prediction). All machine-learning models identified BMI as the most significant factor contributing to diabetes onset, which explained 12%-50% of the variance in the prediction of diabetes. The model predicted that if BMI in obese and overweight participants could be hypothetically reduced to a healthy range, the 10-year probability of diabetes onset would be significantly reduced from 8.3% to 2.8% (P<.001). CONCLUSIONS A one-time self-reported survey can accurately predict the risk of diabetes using a machine-learning approach.
Achieving a healthy BMI can significantly reduce the risk of developing T2DM.


2021 ◽  
Vol 9 ◽  
Author(s):  
Huanhuan Zhao ◽  
Xiaoyu Zhang ◽  
Yang Xu ◽  
Lisheng Gao ◽  
Zuchang Ma ◽  
...  

Hypertension is a widespread chronic disease, and risk prediction is an intervention that contributes to its early prevention and management. Implementing such an intervention requires an effective and easy-to-implement hypertension risk prediction model. This study evaluated and compared the performance of four machine learning algorithms in predicting the risk of hypertension from easy-to-collect risk factors. A dataset of 29,700 samples collected through physical examinations was used for model training and testing. First, we identified easy-to-collect risk factors of hypertension through univariate logistic regression analysis. Then, based on the selected features, 10-fold cross-validation was used to tune four models — random forest (RF), CatBoost, MLP neural network, and logistic regression (LR) — and find the best hyperparameters on the training set. Finally, the performance of the models was evaluated by AUC, accuracy, sensitivity, and specificity on the test set. The experimental results showed that the RF model outperformed the other three, achieving an AUC of 0.92, an accuracy of 0.82, a sensitivity of 0.83, and a specificity of 0.81. In addition, body mass index (BMI), age, family history, and waist circumference (WC) were the four primary risk factors of hypertension. These findings show that it is feasible to use machine learning algorithms, especially RF, to predict hypertension risk without clinical or genetic data, providing a non-invasive and economical way to prevent and manage hypertension in a large population.
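The tuning-and-evaluation workflow described above — 10-fold cross-validation to find hyperparameters on the training set, then AUC, accuracy, sensitivity, and specificity on the test set — can be sketched for the random forest model with scikit-learn's GridSearchCV. The data and the hyperparameter grid below are illustrative stand-ins, not the study's actual features or search space.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import GridSearchCV, train_test_split
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(1)
n = 2000  # scaled-down stand-in for the 29,700-sample dataset
X = rng.normal(size=(n, 6))  # e.g. BMI, age, WC, ... (illustrative)
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(size=n) > 1).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=1)

# 10-fold CV hyperparameter search on the training set only
search = GridSearchCV(
    RandomForestClassifier(random_state=1),
    param_grid={"n_estimators": [100, 200], "max_depth": [5, None]},
    cv=10, scoring="roc_auc", n_jobs=-1,
).fit(X_tr, y_tr)

best = search.best_estimator_
prob = best.predict_proba(X_te)[:, 1]
tn, fp, fn, tp = confusion_matrix(y_te, best.predict(X_te)).ravel()
print({"AUC": roc_auc_score(y_te, prob),
       "accuracy": (tp + tn) / (tp + tn + fp + fn),
       "sensitivity": tp / (tp + fn),     # recall on hypertensive cases
       "specificity": tn / (tn + fp)})    # recall on non-hypertensive cases
```

Sensitivity and specificity are derived from the confusion matrix because scikit-learn has no single built-in specificity metric.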


2019 ◽  
Vol 12 (1) ◽  
Author(s):  
Daichi Shigemizu ◽  
Shintaro Akiyama ◽  
Yuya Asanomi ◽  
Keith A. Boroevich ◽  
Alok Sharma ◽  
...  

Abstract Background Dementia with Lewy bodies (DLB) is the second most common subtype of neurodegenerative dementia in humans after Alzheimer’s disease (AD). Current clinical diagnosis of DLB has high specificity but low sensitivity, and finding potential biomarkers of prodromal DLB remains challenging. MicroRNAs (miRNAs) have recently received considerable attention as a source of novel biomarkers. Methods In this study, using serum miRNA expression of 478 Japanese individuals, we investigated potential miRNA biomarkers and constructed an optimal risk prediction model based on several machine learning methods: penalized regression, random forest, support vector machine, and gradient boosting decision tree. Results The final risk prediction model, constructed via a gradient boosting decision tree using 180 miRNAs and two clinical features, achieved an accuracy of 0.829 on an independent test set. We further predicted candidate target genes of the miRNAs. Gene set enrichment analysis of the miRNA target genes revealed 6 functional genes included in the DHA signaling pathway associated with DLB pathology. Two of them were further supported by gene-based association studies using a large number of single nucleotide polymorphism markers (BCL2L1: P = 0.012, PIK3R2: P = 0.021). Conclusions Our proposed prediction model provides an effective tool for DLB classification. Moreover, a gene-based association test of rare variants revealed that BCL2L1 and PIK3R2 were statistically significantly associated with DLB.
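Ranking predictors by importance from a fitted gradient boosting decision tree — the mechanism behind importance matrices like those reported in these abstracts — looks roughly like this with scikit-learn. The miRNA names and expression values here are synthetic placeholders, not the study's 180 selected miRNAs; only the sample size (478) is taken from the abstract.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(2)
n, p = 478, 20   # 478 subjects as in the abstract; 20 stand-in miRNA features
X = rng.normal(size=(n, p))
y = (X[:, 3] - X[:, 7] + rng.normal(size=n) > 0).astype(int)  # synthetic labels

names = [f"miR-{i:03d}" for i in range(p)]  # hypothetical miRNA identifiers
model = GradientBoostingClassifier(random_state=0).fit(X, y)

# impurity-based importances are normalized to sum to 1; rank highest first
ranked = sorted(zip(names, model.feature_importances_), key=lambda t: -t[1])
for name, imp in ranked[:5]:
    print(f"{name}: {imp:.3f}")
```

On synthetic labels driven by features 3 and 7, those two features should dominate the ranking; with real expression data the ranking reflects whatever signal the trees split on.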


2020 ◽  
Vol 2020 ◽  
pp. 1-12
Author(s):  
Mingyue Xue ◽  
Yinxia Su ◽  
Chen Li ◽  
Shuxia Wang ◽  
Hua Yao

Background. An estimated 425 million people globally have diabetes, accounting for 12% of the world’s health expenditures, and the number continues to grow, placing a huge burden on healthcare systems, especially in remote, underserved areas. Methods. A total of 584,168 adult subjects who had participated in the national physical examination were enrolled in this study. The risk factors for type 2 diabetes mellitus (T2DM) were identified by p values and odds ratios, using logistic regression (LR) on physical measurements and questionnaire variables. Combined with the risk factors selected by LR, we used a decision tree, a random forest, AdaBoost with a decision tree (AdaBoost), and an extreme gradient boosting decision tree (XGBoost) to identify individuals with T2DM, compared the performance of the four machine learning classifiers, and used the best-performing classifier to output variable importance scores for T2DM. Results. The results indicated that XGBoost had the best performance (accuracy=0.906, precision=0.910, recall=0.902, F-1=0.906, and AUC=0.968). The variable importance scores in XGBoost showed that BMI was the most significant feature, followed by age, waist circumference, systolic pressure, ethnicity, smoking amount, fatty liver, hypertension, physical activity, drinking status, dietary ratio (meat to vegetables), drink amount, smoking status, and diet habit (oil loving). Conclusions. We proposed a classifier based on LR-XGBoost that uses fourteen easily obtained, noninvasive patient variables as predictors to identify potential incident T2DM. The classifier can accurately screen for diabetes risk in the early phase, and the variable importance scores give a clue to preventing diabetes occurrence.
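The first stage described above — screening risk factors with logistic regression and odds ratios — can be sketched as follows. This is a minimal sketch on synthetic data with hypothetical variable names; it reports odds ratios per one-standard-deviation increase, and omits the p values (which would require a statistics package such as statsmodels rather than scikit-learn).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(3)
n = 5000
# stand-in predictors: BMI, age, waist circumference, systolic pressure
X = rng.normal(size=(n, 4))
logit = 1.2 * X[:, 0] + 0.8 * X[:, 1] + 0.1 * X[:, 2]   # true effects (synthetic)
y = (rng.random(n) < 1 / (1 + np.exp(-logit))).astype(int)

Xs = StandardScaler().fit_transform(X)
lr = LogisticRegression(max_iter=1000).fit(Xs, y)

names = ["BMI", "age", "WC", "SBP"]
odds_ratios = dict(zip(names, np.exp(lr.coef_[0])))  # OR per 1-SD increase
# keep variables whose OR deviates meaningfully from 1 (threshold is arbitrary)
selected = [k for k, v in odds_ratios.items() if abs(np.log(v)) > 0.2]
print(odds_ratios, selected)
```

The `selected` list would then become the predictor set fed to the downstream tree classifiers, as in the LR-XGBoost design described in the abstract.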


Energies ◽  
2021 ◽  
Vol 14 (21) ◽  
pp. 6928
Author(s):  
Łukasz Wojtecki ◽  
Sebastian Iwaszenko ◽  
Derek B. Apel ◽  
Tomasz Cichy

Rockburst is a dynamic rock mass failure occurring during underground mining under unfavorable stress conditions. The rockburst phenomenon concerns openings in different rocks and is generally correlated with high stress in the rock mass. As a result of rockburst, underground excavations lose their functionality, the infrastructure is damaged, and the working conditions become unsafe. Assessing rockburst hazards in underground excavations becomes particularly important with increasing mining depth and mining-induced stresses. Nowadays, rockburst risk prediction is based mainly on various indicators; however, some attempts have been made to apply machine learning algorithms for this purpose. For this article, we employed a range of machine learning algorithms, including an artificial neural network, decision tree, random forest, and gradient boosting, to estimate the rockburst risk in galleries in one of the deep hard coal mines in the Upper Silesian Coal Basin, Poland. With these algorithms, we proposed rockburst risk prediction models. The neural network and decision tree models were most effective in assessing whether a rockburst occurred in an analyzed case, based on the average value of the recall parameter. In three randomly selected datasets, the artificial neural network models identified all of the rockbursts.
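Comparing a neural network and a decision tree by recall on the rockburst class — the evaluation criterion the abstract emphasizes — can be sketched with scikit-learn. The seismic and geological indicators below are synthetic stand-ins; the mine data are not public.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import recall_score

rng = np.random.default_rng(5)
n = 600
X = rng.normal(size=(n, 8))   # stand-in seismic/geological indicators
y = (X[:, 0] + X[:, 1] ** 2 + rng.normal(scale=0.5, size=n) > 1.5).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=5, stratify=y)

recalls = {}
for name, clf in [
    ("MLP", MLPClassifier(hidden_layer_sizes=(32,), max_iter=2000, random_state=5)),
    ("DecisionTree", DecisionTreeClassifier(random_state=5)),
]:
    # recall on the positive (rockburst) class: missed events are the costly error
    recalls[name] = recall_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
print(recalls)
```

Recall is the natural headline metric here because a missed rockburst (false negative) is far more dangerous than a false alarm.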


10.2196/16850 ◽  
2020 ◽  
Vol 8 (7) ◽  
pp. e16850 ◽  
Author(s):  
Lei Zhang ◽  
Xianwen Shang ◽  
Subhashaan Sreedharan ◽  
Xixi Yan ◽  
Jianbin Liu ◽  
...  



2013 ◽  
Vol 144 (5) ◽  
pp. S-1113
Author(s):  
Udo Kronberg ◽  
Vivian Parada ◽  
Alejandro J. Zarate ◽  
Magdalena Castro ◽  
Valentina Salvador ◽  
...  

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Author(s):  
Qiang Zhao

Over the last two decades, the identification of ancient artifacts has been regarded as one of the most challenging tasks for archaeologists. Chinese people consider these artifacts symbols of their cultural heritage, and the development of technology has greatly aided their identification. This study applied machine-learning algorithms to identify ancient artifacts found throughout China. Major Chinese cities were selected for the study and classified based on different features, such as temple, modern city, harbour, battle, and South China. The study used a decision tree algorithm for recognition and gradient boosting for perception aspects. According to the findings, the algorithms achieved 98% accuracy in detecting ancient artifacts in China, and the proposed models provide a good indicator for detecting archaeological site locations.


Author(s):  
Zulqarnain Khokhar ◽  
Murtaza Ahmed Siddiqi ◽  

Wi-Fi based indoor positioning, with the help of access points and smart devices, has become an integral part of finding a device’s or a person’s location, and Wi-Fi based indoor localization has been among the most attractive fields for researchers for a number of years. In this paper, we present Wi-Fi based indoor localization using three different machine-learning techniques: the decision tree, random forest, and gradient boosting classifiers. After making a fingerprint of the floor based on Wi-Fi signals, these algorithms were used to identify the device location at thirty different positions on the floor. The random forest and gradient boosting classifiers identified the location of the device with accuracy higher than 90%, while the decision tree achieved accuracy slightly above 80%.
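The fingerprinting approach described above — recording received signal strength (RSSI) from several access points at known positions, then treating localization as multi-class classification — can be sketched with the same three classifiers. The fingerprint data below are synthetic (random per-position RSSI means with Gaussian noise), and the scene is scaled down from the paper's thirty positions.

```python
import numpy as np
from sklearn.tree import DecisionTreeClassifier
from sklearn.ensemble import RandomForestClassifier, GradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(4)
n_pos, n_ap, per_pos = 10, 5, 40   # 10 positions, 5 access points (scaled down)
centers = rng.uniform(-90, -30, size=(n_pos, n_ap))  # mean RSSI (dBm) per position
X = np.vstack([c + rng.normal(scale=3, size=(per_pos, n_ap)) for c in centers])
y = np.repeat(np.arange(n_pos), per_pos)             # class = position index

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.25, random_state=4, stratify=y)

results = {}
for name, clf in [("DecisionTree", DecisionTreeClassifier(random_state=4)),
                  ("RandomForest", RandomForestClassifier(random_state=4)),
                  ("GradientBoosting", GradientBoostingClassifier(random_state=4))]:
    results[name] = accuracy_score(y_te, clf.fit(X_tr, y_tr).predict(X_te))
print(results)
```

With real fingerprints, each row would be the RSSI vector measured from the visible access points at a survey point, and the label would be that survey point's identifier.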

