Grid search in hyperparameter optimization of machine learning models for prediction of HIV/AIDS test results

Author(s):  
Daniel Mesafint Belete ◽  
Manjaiah D. Huchaiah


Minerals ◽
2021 ◽  
Vol 11 (2) ◽  
pp. 159
Author(s):  
Nan Lin ◽  
Yongliang Chen ◽  
Haiqi Liu ◽  
Hanlin Liu

Selecting internal hyperparameters, which can be set by automatic search algorithms, is important for improving the generalization performance of machine learning models. In this study, the geological, remote sensing and geochemical data of the Lalingzaohuo area in Qinghai province were studied. A multi-source metallogenic-information spatial dataset was constructed by calculating the Youden index to select potential evidence layers. The model for mapping mineral prospectivity of the study area was established by combining two swarm intelligence optimization algorithms, the bat algorithm (BA) and the firefly algorithm (FA), with different machine learning models. The receiver operating characteristic (ROC) and prediction-area (P-A) curves used for performance evaluation showed that the two algorithms had a clear optimization effect. The BA and FA differed in how much they improved the multilayer perceptron (MLP), AdaBoost and one-class support vector machine (OCSVM) models, so neither optimization algorithm was consistently superior to the other. However, the accuracy of the machine learning models was significantly enhanced after optimizing the hyperparameters. The area under the curve (AUC) values of the ROC curves of the optimized machine learning models were all higher than 0.8, indicating that the hyperparameter optimization was effective. Among the individual models, the FA-AdaBoost model improved the most, with the AUC value increasing from 0.8173 to 0.9597 and the prediction/area (P/A) value increasing from 3.156 to 10.765; the mineral targets predicted by this model occupied 8.63% of the study area and contained 92.86% of the known mineral deposits. The targets predicted by the improved machine learning models are consistent with the metallogenic geological characteristics, indicating that a swarm intelligence optimization algorithm combined with a machine learning model is an efficient method for mineral prospectivity mapping.
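
As an illustration of the approach, the sketch below (Python, not the authors' code) tunes two MLP hyperparameters with a textbook firefly algorithm, scoring each candidate by cross-validated AUC. The search bounds, FA constants, and budget are assumptions chosen for the example.

    import numpy as np
    from sklearn.neural_network import MLPClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    # assumed search space: hidden units in [8, 128], log10(L2 penalty) in [-5, -1]
    LOW, HIGH = np.array([8.0, -5.0]), np.array([128.0, -1.0])

    def fitness(pos, X, y):
        units, log_alpha = int(pos[0]), pos[1]
        clf = MLPClassifier(hidden_layer_sizes=(units,), alpha=10 ** log_alpha,
                            max_iter=300)  # short training budget for the sketch
        return cross_val_score(clf, X, y, cv=3, scoring="roc_auc").mean()

    def firefly_search(X, y, n=8, iters=15, beta0=1.0, gamma=0.1, step=0.2):
        pos = rng.uniform(LOW, HIGH, size=(n, 2))          # n fireflies
        bright = np.array([fitness(p, X, y) for p in pos])  # brightness = AUC
        for _ in range(iters):
            for i in range(n):
                for j in range(n):
                    if bright[j] > bright[i]:  # move firefly i toward brighter j
                        r2 = np.sum((pos[i] - pos[j]) ** 2)
                        pos[i] += (beta0 * np.exp(-gamma * r2) * (pos[j] - pos[i])
                                   + step * rng.uniform(-0.5, 0.5, 2))
                        pos[i] = np.clip(pos[i], LOW, HIGH)
                        bright[i] = fitness(pos[i], X, y)
        best = np.argmax(bright)
        return pos[best], bright[best]  # best hyperparameters and their AUC

The bat algorithm would slot into the same loop with a different position-update rule; only the fitness function, here cross-validated AUC, is shared between the two optimizers.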


2021 ◽  
Vol 2 (28) ◽  
pp. 44-51
Author(s):  
B. S. Ermakov

The article investigates the influence of an artificial neural network's structure on its results, using a multilayer perceptron for forecasting financial indicators as an example. Multiple tests were run with various network structures: different numbers of hidden layers and different numbers of neurons in those layers. The test results show that increasing the network's size is effective up to a certain point, beyond which further growth is unjustified. The results also demonstrate that overfitting is not as severe a problem for the multilayer perceptron as for other machine learning models, such as regression. Key words: artificial neural networks, forecasting, multilayer perceptron, overfitting, artificial neural network's size.
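
The experiment described can be reproduced in outline with a structure sweep like the following sketch; the grid of widths and depths and the hold-out split are illustrative assumptions, not the author's setup.

    from sklearn.neural_network import MLPRegressor
    from sklearn.model_selection import train_test_split
    from sklearn.metrics import mean_squared_error

    def sweep_structures(X, y, widths=(8, 16, 32, 64), depths=(1, 2, 3)):
        X_tr, X_va, y_tr, y_va = train_test_split(X, y, test_size=0.2,
                                                  random_state=0)
        results = {}
        for d in depths:
            for w in widths:
                # d hidden layers of w neurons each
                net = MLPRegressor(hidden_layer_sizes=(w,) * d,
                                   max_iter=1000, random_state=0)
                net.fit(X_tr, y_tr)
                results[(d, w)] = mean_squared_error(y_va, net.predict(X_va))
        return results  # beyond some size, validation error stops improving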


2022 ◽  
Vol 8 ◽  
Author(s):  
Boshen Yang ◽  
Sixuan Xu ◽  
Di Wang ◽  
Yu Chen ◽  
Zhenfa Zhou ◽  
...  

Background: Hypertension is a common comorbidity among critically ill patients, and hospital mortality may be higher among critically ill patients with hypertension (SBP ≥ 140 mmHg and/or DBP ≥ 90 mmHg). This study aimed to explore the association between ACEI/ARB medication during the ICU stay and all-cause in-hospital mortality in these patients.

Methods: A retrospective cohort study was conducted based on data from the Medical Information Mart for Intensive Care IV (MIMIC-IV) database, which covers more than 40,000 ICU patients treated between 2008 and 2019 at Beth Israel Deaconess Medical Center. Adults diagnosed with hypertension on admission who had high blood pressure (SBP ≥ 140 mmHg and/or DBP ≥ 90 mmHg) during the ICU stay were included. The primary outcome was all-cause in-hospital mortality. Patients were divided into ACEI/ARB-treated and non-treated groups. Propensity score matching (PSM) was used to adjust for potential confounders. Nine machine learning models were developed and validated on 37 clinical and laboratory features of all patients. The best-performing model was selected by area under the receiver operating characteristic curve (AUC) with 5-fold cross-validation. After hyperparameter optimization using grid and random hyperparameter search, a final LightGBM model was developed, and Shapley Additive exPlanations (SHAP) values were calculated to evaluate the importance of each feature. The features most closely associated with hospital mortality were reported as significant features.

Results: A total of 15,352 patients were enrolled, among whom 5,193 (33.8%) were treated with ACEI/ARB. Significantly lower all-cause in-hospital mortality was observed among patients treated with ACEI/ARB (3.9 vs. 12.7%), as well as lower 28-day mortality (3.6 vs. 12.2%). The outcome remained consistent after propensity score matching. Among the nine machine learning models, the LightGBM model had the highest AUC (0.9935). A SHAP plot was used to make the optimized LightGBM model interpretable, showing that ACEI/ARB use was among the top five features associated with hospital mortality.

Conclusions: The use of ACEI/ARB in critically ill patients with hypertension during the ICU stay was associated with lower all-cause in-hospital mortality and independently associated with increased survival in a large and heterogeneous cohort of critically ill hypertensive patients with or without kidney dysfunction.
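
A minimal sketch of the modeling step described, assuming the scikit-learn interface of LightGBM and the shap package; the parameter space and search budget are illustrative assumptions, not the study's protocol.

    import shap
    from lightgbm import LGBMClassifier
    from sklearn.model_selection import RandomizedSearchCV

    # assumed search space for the sketch
    param_space = {
        "num_leaves": [31, 63, 127],
        "learning_rate": [0.01, 0.05, 0.1],
        "n_estimators": [200, 500, 1000],
    }

    def fit_and_explain(X, y):
        # random hyperparameter search scored by 5-fold cross-validated AUC
        search = RandomizedSearchCV(LGBMClassifier(), param_space, n_iter=10,
                                    scoring="roc_auc", cv=5, random_state=0)
        search.fit(X, y)
        model = search.best_estimator_
        # SHAP values attribute the prediction to each of the input features
        shap_values = shap.TreeExplainer(model).shap_values(X)
        return model, shap_values  # rank features by mean |SHAP| for importance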


2019 ◽  
Author(s):  
Hidetaka Tamune ◽  
Jumpei Ukita ◽  
Yu Hamamoto ◽  
Hiroko Tanaka ◽  
Kenji Narushima ◽  
...  

Background: Vitamin B deficiency is common worldwide and may lead to psychiatric symptoms; however, the epidemiology of vitamin B deficiency in patients with intense psychiatric episodes has rarely been examined. Moreover, vitamin deficiency testing is costly and time-consuming, which has hampered effectively ruling out vitamin deficiency-induced intense psychiatric symptoms. In this study, we aimed to clarify the epidemiology of these deficiencies and to efficiently predict them using machine-learning models built from patient characteristics and routine blood test results that can be obtained within one hour.

Methods: We reviewed 497 consecutive patients deemed to be at imminent risk of seriously harming themselves or others over 2 years in a single psychiatric tertiary-care center. Machine-learning models (k-nearest neighbors, logistic regression, support vector machine, and random forest) were trained to predict each deficiency from age, sex, and 29 routine blood test results gathered from September 2015 to December 2016. The models were validated on a dataset collected from January 2017 through August 2017.

Results: We found that 112 (22.5%), 80 (16.1%), and 72 (14.5%) patients had vitamin B1, vitamin B12, and folate (vitamin B9) deficiency, respectively. The machine-learning models generalized well to future unseen data, especially the random forest; the areas under the receiver operating characteristic curves on the validation dataset (i.e., data not used for training) were 0.716, 0.599, and 0.796, respectively. The Gini importances provided further evidence of a relationship between these vitamins and the complete blood count, while also indicating a hitherto rarely considered, potential association between these vitamins and alkaline phosphatase (ALP) or thyroid stimulating hormone (TSH).

Discussion: This study demonstrates that machine learning can efficiently predict some vitamin deficiencies in patients with active psychiatric symptoms, based on the largest cohort to date with intense psychiatric episodes. The prediction method may expedite risk stratification and clinical decision-making regarding whether replacement therapy should be prescribed. Further research should validate its external generalizability in other clinical settings and clarify whether interventions based on this method can improve patient care and cost-effectiveness.
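
The temporal validation scheme described maps onto a short sketch like the following (illustrative variable names, not the authors' code): train on the 2015-2016 period, score AUC on the held-out 2017 period, and rank features by Gini importance.

    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import roc_auc_score

    def train_and_validate(X_train, y_train, X_valid, y_valid, feature_names):
        # y is a binary deficiency label, e.g. vitamin B1 deficient or not
        rf = RandomForestClassifier(n_estimators=500, random_state=0)
        rf.fit(X_train, y_train)
        auc = roc_auc_score(y_valid, rf.predict_proba(X_valid)[:, 1])
        # sklearn's feature_importances_ is the mean decrease in Gini impurity
        importances = sorted(zip(feature_names, rf.feature_importances_),
                             key=lambda t: -t[1])
        return auc, importances

One model per deficiency (B1, B12, folate) would be trained this way, each reusing the same age, sex, and routine blood test features.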


2021 ◽  
Vol 1 (1) ◽  
Author(s):  
Sanvitha Kasthuriarachchi ◽  
S. R. Liyanage

Combining different machine learning models into a super learner can lead to improved predictions across domains. The super learner ensemble discussed in this study collates several machine learning models and aims to enhance performance by considering both the accuracy of the final meta-model and the prediction time. An algorithm is proposed to rate the machine learning models obtained by combining the base classifiers, each voted with a different weight; it is named the Log Loss Weighted Super Learner Model (LLWSL). Based on the voted weights, the optimal model is selected and the resulting machine learning method is identified; the super learner's meta-learner then uses the selected models while tuning their hyperparameters. Execution time and model accuracy were evaluated by running the LLWSL algorithm on two separate datasets inside LMSSLIITD, extracted from the educational industry. The evaluation shows that LLWSL yields a significant performance improvement on machine learning tasks.
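
The abstract does not spell out the weighting rule; one plausible reading, sketched below under that assumption, weights each base classifier by the inverse of its validation log loss before blending predicted probabilities. This is an interpretation for illustration, not the published LLWSL algorithm.

    import numpy as np
    from sklearn.metrics import log_loss

    def log_loss_weighted_blend(base_models, X_valid, y_valid, X_test):
        probs, weights = [], []
        for m in base_models:  # base classifiers, already fitted
            p_valid = m.predict_proba(X_valid)[:, 1]
            weights.append(1.0 / log_loss(y_valid, p_valid))  # lower loss -> larger vote
            probs.append(m.predict_proba(X_test)[:, 1])
        w = np.array(weights) / np.sum(weights)  # normalize votes to sum to 1
        return np.average(np.array(probs), axis=0, weights=w)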


2019 ◽  
Vol 26 (1) ◽  
pp. 41-51 ◽  
Author(s):  
Wei Tu ◽  
Patricia A. Chen ◽  
Noshin Koenig ◽  
Daniela Gomez ◽  
Esther Fujiwara ◽  
...  

2020 ◽  
Author(s):  
Rana Muhammad Adnan Ikram ◽  
Zhongmin Liang ◽  
Ozgur Kisi ◽  
Muhammad Adnan ◽  
Binquan Li ◽  
...  

River runoff prediction plays a vital role in water resources planning, hydropower design and agricultural water management. In the current study, the prediction capability of three machine learning models, least square support vector regression (LSSVR), fuzzy genetic (FG) and the M5 model tree (M5Tree), in modeling daily and monthly runoff of the Hunza River catchment (HRC) is examined using the catchment's own data and data from the nearby Gilgit climatic station. The prediction performances of the three models are compared using three statistical indexes: root mean square error (RMSE), mean absolute error (MAE) and the coefficient of determination (R²). First, four previous time-lagged values of runoff, rainfall and atmospheric temperature, chosen on the basis of correlation analysis, are used as inputs to validate and test the accuracy of the three models. After analyzing the performance of various input combinations, the optimal combination is selected for each variable, and these optimal inputs are then employed together to assess forecasting performance. In the first part of the study, monthly runoff of the HRC is predicted using local previous monthly runoff values and monthly meteorological values from the Gilgit station; the test results show that LSSVR provides more accurate predictions than the other two models. In the second part, daily runoff of the HRC is predicted using its own previous daily runoff and the Gilgit station's climatic values; again, better accuracy is obtained from the LSSVR models relative to the FG and M5Tree models. In the last part of the study, daily runoff of the HRC is predicted using the HRC's own runoff and climatic data. Here, local climatic data slightly improved every model's prediction accuracy in comparison with the scenario that uses the nearby station's climatic data, and the LSSVR models were again found to be better than the FG and M5Tree models. LSSVR generally performs best in forecasting daily streamflow of the Hunza River from local streamflow and climatic inputs. Based on the results of the study, the LSSVR model is recommended for monthly and daily runoff prediction of the HRC with or without local climatic data.
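
The input construction and scoring described translate into a short sketch like the following; the column names and DataFrame layout are illustrative assumptions, not the study's data format.

    import numpy as np
    import pandas as pd
    from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score

    def make_lagged(df, cols=("runoff", "rainfall", "temperature"), n_lags=4):
        # build the four previous time-lagged values of each driver as inputs
        out = pd.DataFrame(index=df.index)
        for c in cols:
            for k in range(1, n_lags + 1):
                out[f"{c}_t-{k}"] = df[c].shift(k)
        out["target"] = df["runoff"]  # predict current-step runoff
        return out.dropna()           # drop rows with incomplete lag history

    def score(y_true, y_pred):
        # the three indexes used in the study: RMSE, MAE and R²
        rmse = np.sqrt(mean_squared_error(y_true, y_pred))
        return rmse, mean_absolute_error(y_true, y_pred), r2_score(y_true, y_pred)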


2020 ◽  
Vol 2 (1) ◽  
pp. 3-6
Author(s):  
Eric Holloway

Imagination Sampling is the use of a person as an oracle for generating or improving machine learning models. Previous work demonstrated a general system for using Imagination Sampling to obtain multibox models. Here, the possibility of importing such models as the starting point for further automatic enhancement is explored.

