Machine Learning Approaches for Early Prediction of Gestational Diabetes Mellitus Based on Prospective Cohort Study

Author(s):  
Jingyuan Wang ◽  
Xiujuan Chen ◽  
Yueshuai Pan ◽  
Kai Chen ◽  
Yan Zhang ◽  
...  

Abstract Purpose: To develop and verify an early prediction model of gestational diabetes mellitus (GDM) using machine learning algorithms. Methods: The dataset was collected from a pregnancy cohort study in eastern China from 2017 to 2019. It was randomly divided into a training dataset (75%) and a test dataset (25%) using the train_test_split function. Based on Python, four classic machine learning algorithms and a New-Stacking algorithm were first trained on the training dataset and then verified on the test dataset. The four models were Logistic Regression (LR), Random Forest (RF), Artificial Neural Network (ANN), and Support Vector Machine (SVM). Sensitivity, specificity, accuracy, and area under the Receiver Operating Characteristic curve (AUC) were used to analyse model performance. Results: Valid information was obtained from a total of 2811 pregnant women. The accuracies of the models ranged from 80.09% to 86.91% (RF), sensitivities from 63.30% to 81.65% (SVM), specificities from 79.38% to 97.53% (RF), and AUCs from 0.80 to 0.82 (New-Stacking). Conclusion: The New-Stacking model constructed in this paper performed better in specificity, accuracy, and AUC, but the SVM model achieved the highest sensitivity; the SVM model is therefore recommended as the prediction model for clinical use.
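As a rough illustration of the workflow this abstract describes, the sketch below uses scikit-learn to make a 75/25 split with train_test_split, train the four base learners (LR, RF, ANN, SVM), and fit a generic stacking ensemble in place of the paper's New-Stacking model. The synthetic data from make_classification and all hyperparameters are placeholders, since the cohort features and the exact New-Stacking design are not given in the abstract.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier, StackingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score, roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier
from sklearn.svm import SVC

# Placeholder data standing in for the cohort of 2811 pregnant women.
X, y = make_classification(n_samples=2811, n_features=20, weights=[0.85], random_state=0)

# 75% training / 25% test split, as described in the abstract.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, stratify=y, random_state=0)

base_learners = [
    ("lr", LogisticRegression(max_iter=1000)),
    ("rf", RandomForestClassifier(n_estimators=200, random_state=0)),
    ("ann", MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)),
    ("svm", SVC(probability=True, random_state=0)),
]

# A generic stacking ensemble used as a stand-in for the paper's "New-Stacking" model.
stack = StackingClassifier(estimators=base_learners,
                           final_estimator=LogisticRegression(max_iter=1000))
stack.fit(X_train, y_train)

pred = stack.predict(X_test)
proba = stack.predict_proba(X_test)[:, 1]
print("Accuracy:", accuracy_score(y_test, pred))
print("AUC:", roc_auc_score(y_test, proba))
```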

2021 ◽  
Vol 11 (9) ◽  
pp. 3866
Author(s):  
Jun-Ryeol Park ◽  
Hye-Jin Lee ◽  
Keun-Hyeok Yang ◽  
Jung-Keun Kook ◽  
Sanghee Kim

This study aims to predict the compressive strength of concrete using a machine-learning algorithm with linear regression analysis and to evaluate its accuracy. The open-source software library TensorFlow was used to develop the machine-learning algorithm. In the machine-learning algorithm, a total of seven variables were set: water, cement, fly ash, blast furnace slag, sand, coarse aggregate, and coarse aggregate size. A total of 4297 concrete mixtures with measured compressive strengths were employed to train and test the machine-learning algorithm. Of these, 70% were used for training and 30% for verification. For verification, the research was conducted by classifying the mixtures into three cases: training the machine-learning algorithm on all the data (Case-1), training while maintaining the same number of training samples for each strength range (Case-2), and training after making a subcase of each strength range (Case-3). The results indicated that the error percentages of Case-1 and Case-2 did not differ significantly, whereas the error percentage of Case-3 was far smaller than those of Case-1 and Case-2. Therefore, it was concluded that the range of the training dataset of the concrete compressive strength is as important as the amount of training data for accurately predicting the concrete compressive strength using the machine-learning algorithm.
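A minimal TensorFlow sketch of the setup described above: a linear-regression-style model with the seven mixture variables as inputs and a 70/30 train/verification split. The random input data and strength values are placeholders, not the 4297 mixtures used in the study, and the training settings are assumptions.

```python
import numpy as np
import tensorflow as tf

rng = np.random.default_rng(0)
# Placeholder mixture data: water, cement, fly ash, slag, sand, coarse aggregate, aggregate size.
X = rng.random((4297, 7)).astype("float32")
# Placeholder compressive strengths (MPa); the measured values are not reproduced here.
y = (rng.random((4297, 1)) * 80).astype("float32")

# 70% training / 30% verification split, as in the abstract.
n_train = int(0.7 * len(X))
X_train, X_test = X[:n_train], X[n_train:]
y_train, y_test = y[:n_train], y[n_train:]

# A single dense layer acts as the linear-regression model.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(7,)),
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(X_train, y_train, epochs=50, batch_size=32, verbose=0)

loss, mae = model.evaluate(X_test, y_test, verbose=0)
print(f"Verification MAE: {mae:.2f} MPa")
```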


2022 ◽  
Vol 12 (1) ◽  
pp. 114
Author(s):  
Chao Lu ◽  
Jiayin Song ◽  
Hui Li ◽  
Wenxing Yu ◽  
Yangquan Hao ◽  
...  

Osteoarthritis (OA) is the most common joint disease associated with pain and disability. OA patients are at high risk for venous thromboembolism (VTE). Here, we developed an interpretable machine learning (ML)-based model to predict VTE risk in patients with OA. To establish the prediction model, we used six ML algorithms with 35 variables. Recursive feature elimination (RFE) was used to screen the clinical variables most closely associated with VTE. SHapley Additive exPlanations (SHAP) were applied to interpret the ML model and determine the importance of the selected features. Overall, 3169 patients with OA (average age: 66.52 ± 7.28 years) were recruited from Xi’an Honghui Hospital. Of these, 352 and 2817 patients were diagnosed with and without VTE, respectively. The XGBoost algorithm showed the best performance. According to the RFE algorithm, 15 variables were retained for further modeling with the XGBoost algorithm. The top three predictors were Kellgren–Lawrence grade, age, and hypertension. Our study showed that the XGBoost model with 15 variables has a high potential to predict VTE risk in patients with OA.
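An illustrative sketch (not the authors' code) of the pipeline outlined above: RFE screening down to 15 features, an XGBoost classifier, and SHAP-based interpretation. The make_classification data stand in for the 35 clinical variables, and the class imbalance roughly mirrors the 352/2817 split.

```python
import shap
from sklearn.datasets import make_classification
from sklearn.feature_selection import RFE
from sklearn.model_selection import train_test_split
from xgboost import XGBClassifier

# Synthetic placeholder for 3169 patients with 35 clinical variables (~11% VTE-positive).
X, y = make_classification(n_samples=3169, n_features=35, weights=[0.89], random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=0)

# Recursive feature elimination keeps the 15 most relevant variables, as in the abstract.
selector = RFE(XGBClassifier(eval_metric="logloss"), n_features_to_select=15)
selector.fit(X_train, y_train)
X_train_sel = selector.transform(X_train)
X_test_sel = selector.transform(X_test)

# Fit the final XGBoost model on the retained features.
model = XGBClassifier(eval_metric="logloss")
model.fit(X_train_sel, y_train)

# SHAP values quantify each retained feature's contribution to the predictions.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_test_sel)
```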


2021 ◽  
Vol 233 (5) ◽  
pp. e191
Author(s):  
Zain I. Khalpey ◽  
Amina Khalpey ◽  
Bhavisha Modi ◽  
Jessa L. Deckwa

2021 ◽  
Vol 8 ◽  
Author(s):  
Xueyuan Huang ◽  
Yongjun Wang ◽  
Bingyu Chen ◽  
Yuanshuai Huang ◽  
Xinhua Wang ◽  
...  

Background: Predicting the perioperative requirement for red blood cell (RBC) transfusion in patients with pelvic fracture can be challenging. In this study, we constructed a perioperative RBC transfusion predictive model (ternary classification) based on a machine learning algorithm. Materials and Methods: This study included perioperative adult patients with pelvic trauma hospitalized across six Chinese centers between September 2012 and June 2019. An extreme gradient boosting (XGBoost) algorithm was used to predict the need for perioperative RBC transfusion, with the data split into a training set (80%), which was subjected to 5-fold cross-validation, and a test set (20%). The predictive ability of the transfusion model was compared with blood preparation based on surgeons' experience and with other predictive models, including random forest, gradient boosting decision tree, K-nearest neighbor, logistic regression, and Gaussian naïve Bayes classifier models. Data from 33 patients at one of the hospitals were prospectively collected for model validation. Results: Among 510 patients, 192 (37.65%) received no perioperative RBC transfusion, 127 (24.90%) received less transfusion (RBCs < 4 U), and 191 (37.45%) received more transfusion (RBCs ≥ 4 U). The machine learning-based transfusion predictive model produced the best performance, with an accuracy of 83.34% and a Kappa coefficient of 0.7967, compared with the other methods (blood preparation based on surgeons' experience: accuracy 65.94%, Kappa coefficient 0.5704; random forest: accuracy 82.35%, Kappa coefficient 0.7858; gradient boosting decision tree: accuracy 79.41%, Kappa coefficient 0.7742; K-nearest neighbor: accuracy 53.92%, Kappa coefficient 0.3341). In the prospective dataset, it also showed good performance, with an accuracy of 81.82%. Conclusion: This multicenter retrospective cohort study described the construction of an accurate model that could predict perioperative RBC transfusion in patients with pelvic fractures.
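A hedged sketch of a ternary transfusion classifier along the lines described above (no transfusion / <4 U / ≥4 U): an XGBoost model with an 80/20 split, 5-fold cross-validation on the training set, and accuracy plus Cohen's Kappa as metrics. The synthetic data and feature count are assumptions; the cohort variables are not available here.

```python
from sklearn.datasets import make_classification
from sklearn.metrics import accuracy_score, cohen_kappa_score
from sklearn.model_selection import cross_val_score, train_test_split
from xgboost import XGBClassifier

# Placeholder data: 510 patients, three transfusion classes, 20 assumed features.
X, y = make_classification(n_samples=510, n_features=20, n_informative=10,
                           n_classes=3, random_state=0)

# 80% training (with 5-fold cross-validation) / 20% test split, as in the abstract.
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

model = XGBClassifier(eval_metric="mlogloss")
cv_scores = cross_val_score(model, X_train, y_train, cv=5)
model.fit(X_train, y_train)

pred = model.predict(X_test)
print("Mean CV accuracy:", cv_scores.mean())
print("Test accuracy:", accuracy_score(y_test, pred))
print("Kappa coefficient:", cohen_kappa_score(y_test, pred))
```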


2021 ◽  
Vol 8 (3) ◽  
pp. 209-221
Author(s):  
Li-Li Wei ◽  
Yue-Shuai Pan ◽  
Yan Zhang ◽  
Kai Chen ◽  
Hao-Yu Wang ◽  
...  

Abstract Objective To study the application of a machine learning algorithm for predicting gestational diabetes mellitus (GDM) in early pregnancy. Methods This study identified indicators related to GDM through a literature review and expert discussion. Pregnant women who had attended medical institutions for an antenatal examination from November 2017 to August 2018 were selected for analysis, and the collected indicators were retrospectively analyzed. Based on Python, the indicators were classified and modeled using a random forest regression algorithm, and the performance of the prediction model was analyzed. Results We obtained 4806 analyzable records from 1625 pregnant women. Among these, 3265 samples with all 67 indicators were used to establish data set F1, and 4806 samples with 38 identical indicators were used to establish data set F2. Each of F1 and F2 was used to train the random forest algorithm. The overall predictive accuracy of the F1 model was 93.10%, the area under the receiver operating characteristic curve (AUC) was 0.66, and the predictive accuracy for GDM-positive cases was 37.10%. The corresponding values for the F2 model were 88.70%, 0.87, and 79.44%. The results thus showed that the F2 prediction model performed better than the F1 model. To explore the impact of sacrificing indicators on GDM prediction, the F3 data set was established using the 3265 samples of F1 with the 38 indicators of F2. After training, the overall predictive accuracy of the F3 model was 91.60%, the AUC was 0.58, and the predictive accuracy for positive cases was 15.85%. Conclusions In this study, a model for predicting GDM with several input variables (e.g., physical examination, past history, personal history, family history, and laboratory indicators) was established using a random forest regression algorithm. The trained prediction model exhibited good performance and is valuable as a reference for predicting GDM in women at an early stage of pregnancy. In addition, there are certain requirements for the proportions of negative and positive cases in the sample data sets when the random forest algorithm is applied to the early prediction of GDM.
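A hedged sketch of the modeling step described above, using a random forest classifier (standing in for the paper's random forest approach) on an imbalanced synthetic dataset shaped like data set F2 (4806 samples, 38 indicators). Overall accuracy, positive-case recall, and AUC are reported separately, since the abstract stresses that the negative/positive proportions strongly affect GDM prediction; the imbalance ratio used here is an assumption.

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score, recall_score, roc_auc_score
from sklearn.model_selection import train_test_split

# Placeholder for data set F2: 4806 samples, 38 indicators, mostly GDM-negative cases.
X, y = make_classification(n_samples=4806, n_features=38, weights=[0.85], random_state=1)
X_train, X_test, y_train, y_test = train_test_split(X, y, stratify=y, random_state=1)

# Class weighting is one common way to soften the effect of imbalance on positive cases.
rf = RandomForestClassifier(n_estimators=300, class_weight="balanced", random_state=1)
rf.fit(X_train, y_train)

pred = rf.predict(X_test)
proba = rf.predict_proba(X_test)[:, 1]
print("Overall accuracy:", accuracy_score(y_test, pred))
print("Positive-case recall:", recall_score(y_test, pred))
print("AUC:", roc_auc_score(y_test, proba))
```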


Water ◽  
2021 ◽  
Vol 13 (9) ◽  
pp. 1179
Author(s):  
Xiaodong Tang ◽  
Mutao Huang

Machine learning algorithms, as an important method for numerical modeling, have been widely used for chlorophyll-a concentration inversion modeling. In this work, a variety of models were built by applying five kinds of datasets and adopting the back propagation neural network (BPNN), extreme learning machine (ELM), and support vector machine (SVM). The results revealed that modeling with multi-factor datasets can improve the accuracy of the inversion model, and that seven band combinations are better for modeling than seven single bands. Besides, SVM is more suitable than BPNN and ELM for chlorophyll-a concentration inversion modeling of Donghu Lake. The SVM model based on the seven three-band combination dataset (SVM3) is the best among all multi-factor models: its mean relative error (MRE), mean absolute error (MAE), and root mean square error (RMSE) are 30.82%, 9.44 μg/L, and 12.66 μg/L, respectively. The SVM model based on the single-factor dataset (SF-SVM) performs best among single-factor models, with an MRE, MAE, and RMSE of 28.63%, 13.69 μg/L, and 16.49 μg/L, respectively. In addition, the simulation effect of SVM3 is better than that of SF-SVM. On the whole, an effective model for retrieving chlorophyll-a concentration has been built based on a machine learning algorithm, and our work provides a reliable basis and promotion for exploring an accurate and applicable chlorophyll-a inversion model.
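An illustrative sketch of an SVM regression model for chlorophyll-a inversion from band-combination features, evaluated with MRE, MAE, and RMSE as in the abstract. The reflectance values, chlorophyll-a concentrations, band-combination form, and kernel choice are all assumptions; the Donghu Lake measurements and the paper's exact feature construction are not reproduced here.

```python
import numpy as np
from sklearn.metrics import mean_absolute_error, mean_squared_error
from sklearn.model_selection import train_test_split
from sklearn.svm import SVR

rng = np.random.default_rng(0)
bands = rng.random((200, 7)) + 0.1                      # placeholder single-band reflectances
chl_a = 20 + 30 * bands[:, 0] + rng.normal(0, 2, 200)   # placeholder chlorophyll-a (ug/L)

# Example three-band combination features of the form (1/B_i - 1/B_j) * B_k.
features = np.column_stack([
    (1 / bands[:, i] - 1 / bands[:, j]) * bands[:, k]
    for i, j, k in [(0, 1, 2), (1, 2, 3), (2, 3, 4)]
])

X_train, X_test, y_train, y_test = train_test_split(features, chl_a, random_state=0)
svm = SVR(kernel="rbf").fit(X_train, y_train)
pred = svm.predict(X_test)

mre = np.mean(np.abs(pred - y_test) / y_test) * 100
mae = mean_absolute_error(y_test, pred)
rmse = np.sqrt(mean_squared_error(y_test, pred))
print(f"MRE: {mre:.2f}%, MAE: {mae:.2f} ug/L, RMSE: {rmse:.2f} ug/L")
```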

