Hyperparameter Optimization for Deep Learning-based Automatic Melanoma Diagnosis System

2020 ◽ Vol 9 (0) ◽ pp. 225-232 ◽ Author(s): Takashi Nagaoka
2020 ◽ Vol 7 (3) ◽ pp. 1994-2004 ◽ Author(s): Bin Xiao ◽ Yunqiu Xu ◽ Xiuli Bi ◽ Weisheng Li ◽ Zhuo Ma ◽ ...

2020 ◽ Vol 9 (0) ◽ pp. 62-70 ◽ Author(s): Kana Kato ◽ Mitsutaka Nemoto ◽ Yuichi Kimura ◽ Yoshio Kiyohara ◽ Hiroshi Koga ◽ ...

Atmosphere ◽ 2020 ◽ Vol 11 (5) ◽ pp. 487 ◽ Author(s): Trang Thi Kieu Tran ◽ Taesam Lee ◽ Ju-Young Shin ◽ Jong-Suk Kim ◽ Mohamad Kamruzzaman

Time series forecasting of meteorological variables such as daily temperature has recently drawn considerable attention from researchers seeking to address the limitations of traditional forecasting models. Middle-range forecasting (e.g., 5–20 days ahead) remains an extremely challenging task, however, and dynamical weather models struggle to produce reliable results at these lead times. Developing and selecting an accurate time-series prediction model is also difficult, because it involves training various distinct models to find the best among them, and an optimal topology must then be chosen for the selected model. Accurate forecasting of maximum temperature plays a vital role in human life as well as in sectors such as agriculture and industry: rising temperatures will exacerbate urban heat, especially in summer, with a significant influence on people's health. We applied meta-learning principles to optimize the deep learning network structure for hyperparameter optimization; in particular, a genetic algorithm (GA) was used as the meta-learner to select the optimal architecture for each network. The dataset was used to train and test three different models, namely an artificial neural network (ANN), a recurrent neural network (RNN), and a long short-term memory (LSTM) network. Our results demonstrate that the hybrid model of an LSTM network and a GA outperforms the other models at long lead times. Specifically, LSTM forecasts outperform RNN and ANN forecasts for 15-day-ahead summer prediction, with a root mean square error (RMSE) of 2.719 °C.
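A minimal sketch of the GA-based hyperparameter search described above, assuming Python; this is an illustration, not the paper's implementation. The search space, the `evaluate_rmse` fitness stub (which in practice would train an LSTM on the temperature series and return its validation RMSE), and all parameter values are hypothetical.

```python
# Sketch of a genetic algorithm over LSTM hyperparameters (illustrative only).
import random

# Hypothetical search space: candidate values for each hyperparameter ("gene").
SEARCH_SPACE = {
    "units": [16, 32, 64, 128],
    "layers": [1, 2, 3],
    "learning_rate": [1e-4, 1e-3, 1e-2],
    "window": [5, 10, 15, 20],   # input sequence length in days
}

def random_individual():
    """Sample one hyperparameter configuration (a 'chromosome')."""
    return {k: random.choice(v) for k, v in SEARCH_SPACE.items()}

def evaluate_rmse(config):
    """Placeholder fitness: a real version would train an LSTM with `config`
    and return its validation RMSE (lower is better)."""
    return random.uniform(2.0, 5.0)  # dummy value for illustration

def crossover(a, b):
    """Uniform crossover: each gene comes from one parent at random."""
    return {k: random.choice([a[k], b[k]]) for k in SEARCH_SPACE}

def mutate(ind, rate=0.2):
    """With probability `rate`, resample each gene from the search space."""
    return {k: random.choice(SEARCH_SPACE[k]) if random.random() < rate else v
            for k, v in ind.items()}

def ga_search(pop_size=10, generations=5, elite=2):
    pop = [random_individual() for _ in range(pop_size)]
    for _ in range(generations):
        scored = sorted(pop, key=evaluate_rmse)   # lower RMSE ranks first
        parents = scored[:pop_size // 2]          # truncation selection
        children = [mutate(crossover(random.choice(parents),
                                     random.choice(parents)))
                    for _ in range(pop_size - elite)]
        pop = scored[:elite] + children           # elitism + offspring
    return min(pop, key=evaluate_rmse)

print(ga_search())
```

In a real run the fitness evaluation dominates the cost, since each call trains a full LSTM; a small population with elitism, as sketched here, keeps the number of trainings manageable.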


2019 ◽ Vol 64 (23) ◽ pp. 235013 ◽ Author(s): Hiroki Tanaka ◽ Shih-Wei Chiu ◽ Takanori Watanabe ◽ Setsuko Kaoku ◽ Takuhiro Yamaguchi

Sensors ◽ 2020 ◽ Vol 20 (16) ◽ pp. 4629 ◽ Author(s): Ciaran Cooney ◽ Attila Korik ◽ Raffaella Folli ◽ Damien Coyle

Classification of electroencephalography (EEG) signals corresponding to imagined speech production is important for the development of a direct-speech brain–computer interface (DS-BCI). Deep learning (DL) has been utilized with great success across several domains. However, it remains an open question whether DL methods provide significant advances over traditional machine learning (ML) approaches for classification of imagined speech. Furthermore, hyperparameter (HP) optimization has been neglected in DL-EEG studies, so the significance of its effects remains uncertain. In this study, we aim to improve classification of imagined speech EEG by employing DL methods while also statistically evaluating the impact of HP optimization on classifier performance. We trained three distinct convolutional neural networks (CNNs) on imagined speech EEG using a nested cross-validation approach to HP optimization. Each of the CNNs evaluated was designed specifically for EEG decoding. An imagined speech EEG dataset consisting of both words and vowels facilitated training on each set independently. CNN results were compared with three benchmark ML methods: Support Vector Machine, Random Forest, and regularized Linear Discriminant Analysis. Intra- and inter-subject methods of HP optimization were tested, and the effects of HPs were statistically analyzed. Accuracies obtained by the CNNs were significantly greater than those of the benchmark methods on both datasets (words: 24.97%, p < 1 × 10−7, chance: 16.67%; vowels: 30.00%, p < 1 × 10−7, chance: 20%). The effects of varying HP values, and the interactions between HPs and CNNs, were both statistically significant. These results demonstrate how critical HP optimization is when training CNNs to decode imagined speech.
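A minimal sketch, assuming scikit-learn, of the nested cross-validation pattern for HP optimization: an inner loop selects hyperparameters while an outer loop estimates generalization accuracy. A Support Vector Machine (one of the paper's benchmark methods) stands in for the CNNs, and the feature matrix, labels, and parameter grid are dummy placeholders, not the authors' setup.

```python
# Sketch of nested cross-validation for hyperparameter optimization.
import numpy as np
from sklearn.model_selection import GridSearchCV, StratifiedKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.standard_normal((120, 64))   # 120 trials x 64 features (dummy EEG data)
y = rng.integers(0, 6, size=120)     # 6 imagined-word classes (dummy labels)

param_grid = {"C": [0.1, 1, 10], "gamma": ["scale", 0.01]}

inner = StratifiedKFold(n_splits=3, shuffle=True, random_state=0)  # tunes HPs
outer = StratifiedKFold(n_splits=4, shuffle=True, random_state=0)  # estimates accuracy

# The GridSearchCV estimator re-tunes HPs inside every outer training fold,
# so outer test folds never influence hyperparameter selection.
search = GridSearchCV(SVC(), param_grid, cv=inner, scoring="accuracy")
scores = cross_val_score(search, X, y, cv=outer, scoring="accuracy")

print(f"nested-CV accuracy: {scores.mean():.3f} +/- {scores.std():.3f}")
```

The key property of nesting is that the outer test folds play no role in HP selection, so the reported accuracy is not optimistically biased by the tuning step.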

