Bayesian optimization algorithm
Recently Published Documents


TOTAL DOCUMENTS: 130 (FIVE YEARS: 36)

H-INDEX: 16 (FIVE YEARS: 2)

2022 ◽ Vol 12 (1) ◽ pp. 55
Author(s): Fatih Demir, Kamran Siddique, Mohammed Alswaitti, Kursat Demir, Abdulkadir Sengur

Parkinson’s disease (PD), a slowly progressing neurodegenerative disorder, negatively affects people’s daily lives. Early diagnosis is of great importance to minimize the effects of PD. One of the most important symptoms for the early diagnosis of PD is monotony and distortion of speech. Artificial intelligence-based approaches can help specialists and physicians detect these disorders automatically. In this study, a new and powerful approach based on multi-level feature selection was proposed to detect PD from features extracted from voice recordings of already-diagnosed cases. At the first level, feature selection was performed with the Chi-square and L1-norm SVM algorithms (CLS). Then, the features selected by these algorithms were combined to increase the representation power of the samples. At the last level, the most discriminative features were selected from the combined feature set using feature importance weights from the ReliefF algorithm. In the classification stage, popular classifiers such as KNN, SVM, and DT were evaluated, and the best performance was achieved with the KNN classifier. Moreover, the hyperparameters of the KNN classifier were selected with the Bayesian optimization algorithm, which further improved the performance of the proposed approach. The proposed approach was evaluated using 10-fold cross-validation on a dataset containing PD and normal classes, and a classification accuracy of 95.4% was achieved.
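
A minimal sketch of a comparable pipeline is given below, assuming scikit-learn, skrebate (for ReliefF), and scikit-optimize are available; the feature counts, SVM regularization, and KNN search ranges are illustrative assumptions, not the authors' settings.

```python
# Sketch of a CLS + ReliefF + KNN pipeline with Bayesian hyperparameter tuning.
# Counts and ranges are illustrative; chi2 assumes non-negative (e.g., min-max scaled) features.
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2, SelectFromModel
from sklearn.svm import LinearSVC
from sklearn.neighbors import KNeighborsClassifier
from skrebate import ReliefF
from skopt import BayesSearchCV

def select_features(X, y, k_chi2=50, k_relief=30):
    """Level 1: Chi-square and L1-norm SVM selection; level 2: ReliefF on the combined set."""
    chi2_mask = SelectKBest(chi2, k=k_chi2).fit(X, y).get_support()
    l1_svm = SelectFromModel(LinearSVC(C=0.1, penalty="l1", dual=False, max_iter=5000))
    l1_mask = l1_svm.fit(X, y).get_support()
    combined = np.where(chi2_mask | l1_mask)[0]            # union of the two selections
    relief = ReliefF(n_features_to_select=k_relief).fit(X[:, combined], y)
    top = np.argsort(relief.feature_importances_)[::-1][:k_relief]
    return combined[top]

# X: voice-recording features, y: PD / healthy labels (hypothetical arrays)
# idx = select_features(X, y)
# knn_search = BayesSearchCV(
#     KNeighborsClassifier(),
#     {"n_neighbors": (1, 25), "weights": ["uniform", "distance"], "p": [1, 2]},
#     n_iter=30, cv=10)
# knn_search.fit(X[:, idx], y)
```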


2021
Author(s): Bo Shen, Raghav Gnanasambandam, Rongxuan Wang, Zhenyu Kong

In many scientific and engineering applications, Bayesian optimization (BO) is a powerful tool for hyperparameter tuning of machine learning models, materials design and discovery, and more. BO guides the choice of experiments sequentially to find a good combination of design points in as few experiments as possible, and it can be formulated as the problem of optimizing a “black-box” function. Unlike single-task BO, multi-task Bayesian optimization is a general method for efficiently optimizing multiple different but correlated “black-box” functions. Previous multi-task Bayesian optimization algorithms query a point to be evaluated for all tasks in each round of search, which is inefficient: when tasks are correlated, it is not necessary to evaluate every task at a given query point. Therefore, the objective of this work is to develop an algorithm for multi-task Bayesian optimization with automatic task selection, so that only one task evaluation is needed per query round. Specifically, a new algorithm, namely multi-task Gaussian process upper confidence bound (MT-GPUCB), is proposed to achieve this objective. MT-GPUCB is a two-step algorithm: the first step chooses which query point to evaluate, and the second step automatically selects the most informative task to evaluate at that point. Under the bandit setting, a theoretical analysis is provided to show that the proposed MT-GPUCB is no-regret under mild conditions. The proposed algorithm is verified experimentally on a range of synthetic functions as well as real-world problems. The results clearly show the advantages of the query strategy for both design point and task.
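
A minimal sketch of the two-step query idea follows, assuming one independent scikit-learn Gaussian process per task in place of the paper's multi-task GP; the UCB aggregation, variance-based task rule, beta, and candidate grid are illustrative assumptions rather than the exact MT-GPUCB criteria.

```python
# Sketch of a two-step query: a UCB score picks the design point, then predictive
# variance picks the single task to evaluate there. Independent GPs per task (assumption).
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

def next_query(task_data, candidates, beta=2.0):
    """task_data: list of (X, y) arrays per task; candidates: array of candidate design points."""
    gps = [GaussianProcessRegressor(kernel=RBF(), normalize_y=True).fit(X, y)
           for X, y in task_data]
    ucb = np.zeros(len(candidates))
    stds = []
    for gp in gps:
        mu, sd = gp.predict(candidates, return_std=True)
        ucb += mu + np.sqrt(beta) * sd        # step 1: aggregate UCB over tasks
        stds.append(sd)
    x_idx = int(np.argmax(ucb))               # chosen design point
    task_idx = int(np.argmax([sd[x_idx] for sd in stds]))   # step 2: most uncertain task
    return candidates[x_idx], task_idx
```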


2021 ◽ Vol 12 (1)
Author(s): Ryan Roussel, Juan Pablo Gonzalez-Aguilera, Young-Kee Kim, Eric Wisniewski, Wanming Liu, ...

Particle accelerators are invaluable discovery engines in the chemical, biological, and physical sciences. Characterization of the accelerated beam's response to accelerator input parameters is often the first step when conducting accelerator-based experiments. Currently used characterization techniques, such as grid-like parameter sampling scans, become impractical when extended to higher-dimensional input spaces, when complicated measurement constraints are present, or when prior information about the beam response is scarce. In this work, we describe an adaptation of the popular Bayesian optimization algorithm that enables turn-key exploration of input parameter spaces. Our algorithm replaces the need for parameter scans while minimizing the prior information needed about the measurement's behavior and associated measurement constraints. We experimentally demonstrate that our algorithm autonomously conducts an adaptive, multi-parameter exploration of the input parameter space, potentially orders of magnitude faster than conventional grid-like parameter scans, while making highly constrained, single-shot beam phase-space measurements and accounting for the costs associated with changing input parameters. In addition to applications in accelerator-based scientific experiments, this algorithm addresses challenges shared by many scientific disciplines and is thus applicable to autonomously conducting experiments over a broad range of research topics.
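
As a rough illustration of the kind of constraint-aware, cost-aware exploration described above, the sketch below scores candidates by predictive uncertainty, discounted by the probability that the measurement is valid and by the cost of moving the inputs; it assumes scikit-learn GPs, and the penalty weight and validity threshold are hypothetical, not the authors' settings.

```python
# Sketch of constraint-aware Bayesian exploration: prefer uncertain points that are
# likely to yield a valid measurement and are cheap to move to (illustrative weights).
import numpy as np
from scipy.stats import norm
from sklearn.gaussian_process import GaussianProcessRegressor

def exploration_acquisition(gp_obs, gp_valid, candidates, x_current, cost_weight=0.1):
    """gp_obs models the measured response; gp_valid models a 0/1 measurement-validity signal."""
    _, sigma = gp_obs.predict(candidates, return_std=True)      # exploration term
    mu_v, sd_v = gp_valid.predict(candidates, return_std=True)
    p_valid = norm.cdf((mu_v - 0.5) / (sd_v + 1e-9))             # Pr[measurement is valid]
    travel = np.linalg.norm(candidates - x_current, axis=1)      # cost of changing inputs
    return sigma * p_valid - cost_weight * travel

# next_x = candidates[np.argmax(exploration_acquisition(gp_obs, gp_valid, candidates, x_now))]
```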


2021 ◽ Vol 231 ◽ pp. 111453
Author(s): Qianjin Lin, Chun Zou, Shibo Liu, Yunpeng Wang, Lixin Lu, ...

2021
Author(s): Haijun Bai, Guanjun Li, Changming Liu, Bin Li, Zhendong Zhang, ...

Obtaining accurate runoff predictions and quantifying forecast uncertainty are critical to the planning and management of water resources. However, the strong randomness of runoff makes it difficult to predict. In this study, a hybrid model based on XGBoost (XGB) and Gaussian process regression (GPR) with the Bayesian optimization algorithm (BOA) is proposed for probabilistic runoff forecasting. XGB is first used to obtain point prediction results, which can guarantee the accuracy of the forecast. Then, GPR is constructed to obtain probabilistic runoff predictions. To further improve performance, the hyperparameters of the model are optimized by the BOA. Finally, the proposed hybrid model, XGB-GPR-BOA, is applied to four runoff prediction cases in the Yangtze River Basin, China, and compared with eight state-of-the-art runoff prediction methods in three respects: point prediction accuracy, prediction interval suitability, and overall probabilistic prediction performance. The experimental results show that the proposed model obtains high-precision point predictions, appropriate prediction intervals, and reliable probabilistic predictions on the runoff prediction problems.
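
A minimal sketch of one way such a hybrid could be wired together is given below, assuming xgboost, scikit-learn, and scikit-optimize; the residual-GP formulation, search ranges, and interval width are assumptions for illustration, not the paper's configuration.

```python
# Sketch of an XGB-GPR hybrid: XGBoost gives the point forecast, a GP fitted on the
# residuals supplies predictive uncertainty, and BayesSearchCV tunes XGBoost (assumptions).
import numpy as np
from xgboost import XGBRegressor
from sklearn.gaussian_process import GaussianProcessRegressor
from skopt import BayesSearchCV

def fit_hybrid(X_train, y_train):
    search = BayesSearchCV(
        XGBRegressor(objective="reg:squarederror"),
        {"n_estimators": (100, 800), "max_depth": (2, 8),
         "learning_rate": (0.01, 0.3, "log-uniform")},
        n_iter=25, cv=5)
    search.fit(X_train, y_train)                      # Bayesian hyperparameter tuning
    xgb = search.best_estimator_
    gpr = GaussianProcessRegressor(normalize_y=True).fit(
        X_train, y_train - xgb.predict(X_train))      # GP on point-forecast residuals
    return xgb, gpr

def predict_interval(xgb, gpr, X, z=1.96):
    point = xgb.predict(X)
    resid_mu, resid_sd = gpr.predict(X, return_std=True)
    mean = point + resid_mu
    return mean, mean - z * resid_sd, mean + z * resid_sd   # ~95% interval
```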


2021
Author(s): Felix Berkenkamp, Andreas Krause, Angela P. Schoellig

Selecting the right tuning parameters for algorithms is a prevalent problem in machine learning that can significantly affect their performance. Data-efficient optimization algorithms, such as Bayesian optimization, have been used to automate this process. During experiments on real-world systems such as robotic platforms, these methods can evaluate unsafe parameters that lead to safety-critical system failures and can destroy the system. Recently, a safe Bayesian optimization algorithm, called SafeOpt, has been developed that guarantees the performance of the system never falls below a critical value; that is, safety is defined in terms of the performance function. However, coupling performance and safety is often not desirable in practice, since they are often opposing objectives. In this paper, we present a generalized algorithm that allows for multiple safety constraints separate from the objective. Given an initial set of safe parameters, the algorithm maximizes performance but only evaluates parameters that satisfy all safety constraints with high probability. To this end, it carefully explores the parameter space by exploiting regularity assumptions in terms of a Gaussian process prior. Moreover, we show how context variables can be used to safely transfer knowledge to new situations and tasks. We provide a theoretical analysis and demonstrate that the proposed algorithm enables fast, automatic, and safe optimization of tuning parameters in experiments on a quadrotor vehicle.
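
The sketch below illustrates the core safe-evaluation idea with scikit-learn GPs: a candidate is considered only if every constraint GP's lower confidence bound clears its threshold, and the acquisition is restricted to that safe set. It simplifies the paper's algorithm (which also explicitly expands the safe set); the confidence scale beta and thresholds are assumptions.

```python
# Sketch of safe Bayesian optimization with separate safety constraints (simplified):
# only candidates whose constraint lower bounds stay above the thresholds are eligible.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

def safe_ucb_step(gp_perf, constraint_gps, thresholds, candidates, beta=2.0):
    """constraint_gps[i] models constraint i; thresholds[i] is its minimum safe value."""
    safe = np.ones(len(candidates), dtype=bool)
    for gp_c, h in zip(constraint_gps, thresholds):
        mu_c, sd_c = gp_c.predict(candidates, return_std=True)
        safe &= (mu_c - np.sqrt(beta) * sd_c) >= h        # lower bound must stay above h
    if not safe.any():
        return None                                        # no parameters certified safe
    mu, sd = gp_perf.predict(candidates, return_std=True)
    ucb = mu + np.sqrt(beta) * sd
    ucb[~safe] = -np.inf                                   # restrict acquisition to the safe set
    return candidates[int(np.argmax(ucb))]
```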

