A Dynamic Early Stopping Criterion for Random Search in SVM Hyperparameter Optimization

Author(s): Adrian Cătălin Florea, Răzvan Andonie

2015, Vol 51 (1), pp. 114-116
Author(s): C. Marchand, E. Boutillon

2017, Vol 53 (24), pp. 1576-1578
Author(s): Qingshuang Zhang, Aijun Liu, Xinhai Tong

2021, Vol 7 (2), pp. 164-168
Author(s): Cuong Le Dinh Phu, Dong Wang

Diabetes is a chronic disease in which blood glucose is not properly metabolized by the body. Electronic health records (EHRs) (Yadav, P. et al., 2018) for individuals or whole populations have become important for understanding developing disease trends. Machine learning can provide predictions more accurate than conventional assessments. The problem we address is applying a machine learning model to EHRs by combining the strengths of the model with rich features and hyperparameter optimization (tuning). The hyperparameter optimization (Feurer, M., 2019) uses random search, which minimizes a predefined loss function on given independent data. The evaluation against comparison methods indicated that the machine learning models improved the metrics (Accuracy, Recall, F1 and AUC score) relative to previous models on the same preprocessed public dataset.
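The abstract describes random-search hyperparameter tuning that minimizes a predefined loss on held-out data. Below is a minimal sketch of that idea, assuming a scikit-learn workflow; the synthetic data and the SVC parameter ranges are illustrative placeholders, not the paper's actual EHR features or search space.

```python
# Random-search hyperparameter tuning sketch (hypothetical data and ranges).
from scipy.stats import loguniform
from sklearn.datasets import make_classification
from sklearn.model_selection import RandomizedSearchCV, train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Placeholder data standing in for the preprocessed EHR feature matrix/labels.
X, y = make_classification(n_samples=500, n_features=8, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = make_pipeline(StandardScaler(), SVC())

# Each randomly sampled configuration is scored by cross-validation;
# the search keeps the configuration with the best validation score.
param_distributions = {
    "svc__C": loguniform(1e-2, 1e3),
    "svc__gamma": loguniform(1e-4, 1e1),
}
search = RandomizedSearchCV(
    model,
    param_distributions,
    n_iter=50,        # number of random configurations to evaluate
    scoring="f1",
    cv=5,
    random_state=0,
)
search.fit(X_train, y_train)
print("best params:", search.best_params_)
print("held-out F1:", search.score(X_test, y_test))
```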


2020, Vol 10 (21), pp. 7426
Author(s): Jurgita Kapočiūtė-Dzikienė, Kaspars Balodis, Raivis Skadiņš

Accurate intent detection-based chatbots are usually trained on large datasets, which are not available for some languages. Seeking the most accurate models, three English benchmark datasets that were human-translated into four morphologically complex languages (i.e., Estonian, Latvian, Lithuanian, Russian) were used. Two types of word embeddings (fastText and BERT), three types of deep neural network (DNN) classifiers (convolutional neural network (CNN), long short-term memory (LSTM), and bidirectional LSTM (BiLSTM)), different DNN architectures (shallower and deeper), and various DNN hyperparameter values were investigated. DNN architectures and hyperparameter values were optimized automatically using the Bayesian method and random search. On the three datasets of 2/5/8 intents, accuracies of 0.991/0.890/0.712, 0.972/0.890/0.644, 1.000/0.890/0.644, 0.981/0.872/0.712, and 0.972/0.881/0.661 were achieved for the English, Estonian, Latvian, Lithuanian, and Russian languages, respectively. The BERT multilingual vectorization with the CNN classifier proved to be a good choice for all datasets and all languages. Moreover, in the majority of models, the same set of optimal hyperparameter values was determined. The results obtained in this research were also compared with previously reported values (where the hyperparameter values of the DNN models were selected by an expert). This comparison revealed that the automatically optimized models are competitive or even more accurate when created with larger training datasets.
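The abstract optimizes DNN architectures and hyperparameters with both the Bayesian method and random search; only the random-search part is sketched below, assuming Keras and pre-computed word embeddings (e.g., fastText vectors). The random data, value ranges, and training budget are hypothetical placeholders, not the paper's actual search space.

```python
# Random search over CNN hyperparameters for intent classification (sketch).
import random
import numpy as np
import tensorflow as tf

NUM_INTENTS = 8   # e.g., the 8-intent benchmark
EMB_DIM = 300     # fastText embedding size (assumed)
MAX_LEN = 20      # tokens per utterance (assumed)

# Placeholder data standing in for embedded training utterances.
X = np.random.rand(1000, MAX_LEN, EMB_DIM).astype("float32")
y = np.random.randint(0, NUM_INTENTS, size=1000)

def build_cnn(filters, kernel_size, dropout, lr):
    """Builds a small 1D-CNN text classifier for the sampled configuration."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(MAX_LEN, EMB_DIM)),
        tf.keras.layers.Conv1D(filters, kernel_size, activation="relu"),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dropout(dropout),
        tf.keras.layers.Dense(NUM_INTENTS, activation="softmax"),
    ])
    model.compile(
        optimizer=tf.keras.optimizers.Adam(learning_rate=lr),
        loss="sparse_categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model

# Random search: sample configurations, keep the one with the best
# validation accuracy.
best_acc, best_cfg = 0.0, None
for _ in range(10):
    cfg = {
        "filters": random.choice([64, 128, 256]),
        "kernel_size": random.choice([2, 3, 4]),
        "dropout": random.uniform(0.1, 0.5),
        "lr": 10 ** random.uniform(-4, -2),
    }
    model = build_cnn(**cfg)
    hist = model.fit(X, y, validation_split=0.2, epochs=3, verbose=0)
    acc = max(hist.history["val_accuracy"])
    if acc > best_acc:
        best_acc, best_cfg = acc, cfg

print("best config:", best_cfg, "val accuracy:", round(best_acc, 3))
```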

