expensive optimization
Recently Published Documents


TOTAL DOCUMENTS: 101 (five years: 43)

H-INDEX: 13 (five years: 2)

2021 ◽  
Author(s):  
Yoel Tenne

RBF metamodels, which are commonly used in expensive optimization problems, rely on a hyperparameter that affects their predictions. The optimal hyperparameter value is typically unknown and must therefore be estimated by additional procedures. This study examines whether that overhead is justified from an overall search-effectiveness perspective, namely, whether changes in the hyperparameter yield significant performance differences. Analysis based on extensive numerical experiments shows that the changes are significant on functions with low to moderate multimodality but less significant on highly multimodal functions.
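
A minimal sketch of the overhead in question, assuming a Gaussian RBF whose shape parameter epsilon is chosen by leave-one-out cross-validation on a toy objective (the function, sample size, and candidate values are illustrative, not the study's setup):

```python
# Estimating an RBF shape hyperparameter (epsilon) by leave-one-out
# cross-validation before using the metamodel inside an expensive search.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)

def expensive_objective(x):            # stand-in for a costly simulation
    return np.sin(3 * x[:, 0]) + 0.5 * np.cos(5 * x[:, 1])

X = rng.uniform(-1, 1, size=(30, 2))   # small sample, as in expensive settings
y = expensive_objective(X)

def loo_error(eps):
    """Leave-one-out prediction error of a Gaussian RBF with shape eps."""
    errs = []
    for i in range(len(X)):
        mask = np.arange(len(X)) != i
        model = RBFInterpolator(X[mask], y[mask], kernel="gaussian", epsilon=eps)
        errs.append((model(X[i:i + 1])[0] - y[i]) ** 2)
    return np.mean(errs)

candidates = [0.5, 1.0, 2.0, 4.0, 8.0]
best_eps = min(candidates, key=loo_error)   # the extra estimation "overhead"
print("selected epsilon:", best_eps)
```

Whether this extra loop of model fits pays off in overall search quality is exactly the question the study addresses.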


2021 ◽  
Vol 17 (10) ◽  
pp. e1009472
Author(s):  
Stefan T. Radev ◽  
Frederik Graw ◽  
Simiao Chen ◽  
Nico T. Mutters ◽  
Vanessa M. Eichel ◽  
...  

Mathematical models in epidemiology are an indispensable tool for determining the dynamics and important characteristics of infectious diseases. Apart from their scientific merit, these models are often used to inform political decisions and interventional measures during an ongoing outbreak. However, reliably inferring the epidemic dynamics by connecting complex models to real data remains hard and requires either laborious manual parameter fitting or expensive optimization methods that have to be repeated from scratch for every application of a given model. In this work, we address this problem with a novel combination of epidemiological modeling and specialized neural networks. Our approach entails two computational phases: in an initial training phase, a mathematical model describing the epidemic is used as a coach for a neural network, which acquires global knowledge about the full range of possible disease dynamics. In the subsequent inference phase, the trained neural network processes the observed data of an actual outbreak and infers the parameters of the model in order to realistically reproduce the observed dynamics and reliably predict future progression. With its flexible framework, our simulation-based approach is applicable to a variety of epidemiological models. Moreover, since our method is fully Bayesian, it is designed to incorporate all available prior knowledge about plausible parameter values and returns complete joint posterior distributions over these parameters. Application of our method to the early COVID-19 outbreak phase in Germany demonstrates that we can obtain reliable probabilistic estimates for important disease characteristics, such as generation time, fraction of undetected infections, likelihood of transmission before symptom onset, and reporting delays, from a very moderate amount of real-world observations.
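
The following is a deliberately simplified, hypothetical sketch of the two-phase workflow described above; the actual method is fully Bayesian and uses invertible networks to return posteriors, whereas this stand-in trains a plain MLP for point estimates on a toy SIR simulator (simulate_sir, the prior ranges, and all sizes are assumptions for illustration):

```python
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(1)

def simulate_sir(beta, gamma, days=50, n=1_000_000, i0=100):
    """Toy discrete-time SIR simulator returning daily new infections."""
    s, i = n - i0, i0
    new_cases = []
    for _ in range(days):
        inf = beta * s * i / n
        rec = gamma * i
        s, i = s - inf, i + inf - rec
        new_cases.append(inf)
    return np.asarray(new_cases)

# Phase 1: training on simulations drawn from the prior (the "coach" phase).
thetas = rng.uniform([0.1, 0.05], [0.8, 0.3], size=(2000, 2))
sims = np.log1p(np.stack([simulate_sir(b, g) for b, g in thetas]))

net = nn.Sequential(nn.Linear(50, 64), nn.ReLU(), nn.Linear(64, 2))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
xb = torch.tensor(sims, dtype=torch.float32)
yb = torch.tensor(thetas, dtype=torch.float32)
for _ in range(500):
    opt.zero_grad()
    loss = nn.functional.mse_loss(net(xb), yb)
    loss.backward()
    opt.step()

# Phase 2: inference on an "observed" outbreak is a single forward pass,
# with no per-dataset refitting or optimization from scratch.
observed = torch.tensor(np.log1p(simulate_sir(0.4, 0.1)), dtype=torch.float32)
print("inferred (beta, gamma):", net(observed.unsqueeze(0)).detach().numpy())
```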


2021 ◽  
Author(s):  
Muhammad Furqan Afzal ◽  
Christian David Márton ◽  
Erin L. Rich ◽  
Kanaka Rajan

Neuroscience has seen a dramatic increase in the types of recording modalities and in the complexity of the neural time-series data collected from them. The brain is a highly recurrent system producing rich, complex dynamics that result in different behaviors. Correctly distinguishing such nonlinear neural time series in real time, especially those with non-obvious links to behavior, could be useful for a wide variety of applications, including detecting anomalous clinical events such as seizures in epilepsy and identifying optimal control spaces for brain-machine interfaces. It remains challenging to correctly distinguish nonlinear time-series patterns because of the high intrinsic dimensionality of such data, which makes accurate inference of state changes (for intervention or control) difficult. Simple distance metrics, which can be computed quickly, do not yield accurate classifications. At the other end of the spectrum of classification methods, ensembles of classifiers or deep supervised tools offer higher accuracy but are slow, data-intensive, and computationally expensive. We introduce a reservoir-based tool, state tracker (TRAKR), which offers the high accuracy of ensembles or deep supervised methods while preserving the computational benefits of simple distance metrics. After one-shot training, TRAKR can accurately detect deviations in test patterns in real time. By forcing the weighted dynamics of the reservoir to fit a desired pattern directly, we avoid many rounds of expensive optimization. Then, keeping the output weights frozen, we use the error signal generated by the reservoir in response to a particular test pattern as a classification boundary. We show that, using this approach, TRAKR accurately detects changes in synthetic time series. We then compare our tool to several others, showing that it achieves the highest classification performance on a benchmark dataset, sequential MNIST, even when corrupted by noise. Additionally, we apply TRAKR to electrocorticography (ECoG) data from the macaque orbitofrontal cortex (OFC), a higher-order brain region involved in encoding the value of expected outcomes. We show that TRAKR classifies different behaviorally relevant epochs in the neural time series more accurately and efficiently than conventional approaches. TRAKR can therefore be used as a fast and accurate tool to distinguish patterns in complex nonlinear time-series data, such as neural recordings.
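
A minimal echo-state-style sketch of the core mechanism, under the assumption that an off-the-shelf ridge-regression readout stands in for TRAKR's one-shot fit (reservoir size, patterns, and regularization are illustrative, not the authors' settings):

```python
# Fit readout weights to one pattern in closed form, freeze them, and use the
# readout error on new patterns as the classification signal.
import numpy as np

rng = np.random.default_rng(2)
N, T = 200, 400                           # reservoir size, pattern length
W = rng.normal(0, 1 / np.sqrt(N), (N, N)) # fixed random recurrent weights
w_in = rng.normal(0, 1, N)

def reservoir_states(u):
    """Drive the reservoir with input signal u and collect its states."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + w_in * u_t)
        states[t] = x
    return states

t = np.linspace(0, 8 * np.pi, T)
train_pattern = np.sin(t)                 # pattern the readout is fitted to

# One-shot training: ridge regression for the output weights, avoiding
# iterative rounds of expensive optimization.
S = reservoir_states(train_pattern)
w_out = np.linalg.solve(S.T @ S + 1e-3 * np.eye(N), S.T @ train_pattern)

def readout_error(u):
    """Mean squared readout error; large values flag a deviating pattern."""
    S = reservoir_states(u)
    return np.mean((S @ w_out - u) ** 2)

print("same pattern   :", readout_error(np.sin(t)))
print("deviant pattern:", readout_error(np.sin(1.5 * t)))
```

The readout error stays small on the fitted pattern and grows on deviating ones, which is the signal used here as a classification boundary.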


2021 ◽  
pp. 147592172110372
Author(s):  
Liang Chen ◽  
Adrien Gallet ◽  
Shan-Shan Huang ◽  
Dong Liu ◽  
Danny Smyl

In recent years, electrical tomography, namely electrical resistance tomography (ERT), has emerged as a viable approach to detecting, localizing, and reconstructing structural cracking patterns in concrete structures. High-fidelity ERT reconstructions, however, often require computationally expensive optimization regimes and complex constraining and regularization schemes, which impedes pragmatic implementation in structural health monitoring frameworks. To address this challenge, this article proposes the use of predictive deep neural networks to directly and rapidly solve an analogous ERT inverse problem. Specifically, cross-entropy loss is used to optimize networks forming a nonlinear mapping from ERT voltage measurements to binary probabilistic spatial crack distributions (cracked/not cracked). In this effort, artificial neural networks and convolutional neural networks are first trained using simulated electrical data. Subsequently, the feasibility of the predictive networks is tested and affirmed using experimental and simulated data covering flexural and shear cracking patterns observed in reinforced concrete elements.
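
A hypothetical minimal sketch of the direct mapping described above, with a fully connected network trained under binary cross-entropy; the measurement count, grid size, and random placeholder data are assumptions standing in for a real ERT forward model:

```python
import torch
import torch.nn as nn

N_MEAS, GRID = 208, 32                     # illustrative: measurements, pixel grid

net = nn.Sequential(
    nn.Linear(N_MEAS, 512), nn.ReLU(),
    nn.Linear(512, GRID * GRID),           # logits over the spatial grid
)
loss_fn = nn.BCEWithLogitsLoss()           # cross-entropy for binary crack maps
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

# Training data would come from a simulated ERT forward model; random
# placeholders stand in for it here.
volts = torch.randn(64, N_MEAS)
cracks = (torch.rand(64, GRID * GRID) < 0.1).float()

for _ in range(100):
    opt.zero_grad()
    loss = loss_fn(net(volts), cracks)
    loss.backward()
    opt.step()

# Inference replaces the expensive iterative reconstruction with one pass.
prob_map = torch.sigmoid(net(volts[:1])).reshape(GRID, GRID)
print("crack probability map:", prob_map.shape)
```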


Author(s):  
Xiaodong Ren ◽  
Daofu Guo ◽  
Zhigang Ren ◽  
Yongsheng Liang ◽  
An Chen

By substantially reducing the number of real fitness evaluations, surrogate-assisted evolutionary algorithms (SAEAs), especially hierarchical SAEAs, have been shown to be effective in solving computationally expensive optimization problems. The success of hierarchical SAEAs mainly profits from the potential benefit of their global surrogate models, known as the "blessing of uncertainty", and from the high accuracy of their local models. However, their performance leaves room for improvement on high-dimensional problems, since it remains challenging to build sufficiently accurate local models in such a huge solution space. To address this issue, this study proposes a new hierarchical SAEA that trains local surrogate models with the help of the random projection technique. Instead of training in the original high-dimensional solution space, the new algorithm first randomly projects training samples onto a set of low-dimensional subspaces, then trains a surrogate model in each subspace, and finally evaluates candidate solutions by averaging the predictions of the resulting models. Experimental results on seven benchmark functions of 100 and 200 dimensions demonstrate that random projection can significantly improve the accuracy of local surrogate models and that the proposed hierarchical SAEA holds a clear edge over state-of-the-art SAEAs.
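
A minimal sketch of the local-model construction, assuming scikit-learn's Gaussian random projections and off-the-shelf RBF surrogates in place of the paper's specific models (dimensions, sample sizes, and the toy objective are illustrative):

```python
# Project high-dimensional samples onto several random low-dimensional
# subspaces, train one surrogate per subspace, and average their predictions.
import numpy as np
from scipy.interpolate import RBFInterpolator
from sklearn.random_projection import GaussianRandomProjection

rng = np.random.default_rng(3)
DIM, K, N_SUBSPACES = 100, 8, 5           # original dim, subspace dim, #models

def expensive_f(X):                       # stand-in for the costly objective
    return np.sum(X ** 2, axis=1)

X = rng.uniform(-5, 5, size=(120, DIM))   # scarce samples, as in expensive EAs
y = expensive_f(X)

projectors, models = [], []
for i in range(N_SUBSPACES):
    proj = GaussianRandomProjection(n_components=K, random_state=i)
    Z = proj.fit_transform(X)             # train in a random low-dim subspace
    projectors.append(proj)
    models.append(RBFInterpolator(Z, y))

def surrogate(X_new):
    """Averaged prediction of the subspace surrogates."""
    preds = [m(p.transform(X_new)) for p, m in zip(projectors, models)]
    return np.mean(preds, axis=0)

candidates = rng.uniform(-5, 5, size=(10, DIM))
print("surrogate estimates:", surrogate(candidates)[:3])
```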


Author(s):  
Zhi-Hui Zhan ◽  
Lin Shi ◽  
Kay Chen Tan ◽  
Jun Zhang

Complex continuous optimization problems are now widespread owing to the rapid development of the economy and society. Moreover, technologies such as the Internet of Things, cloud computing, and big data give rise to optimization problems with further challenges: Many-dimensions, Many-changes, Many-optima, Many-constraints, and Many-costs. We term these the 5-M challenges; they arise in large-scale optimization problems, dynamic optimization problems, multi-modal optimization problems, multi-objective optimization problems, many-objective optimization problems, constrained optimization problems, and expensive optimization problems in practical applications. Evolutionary computation (EC) algorithms are promising global optimization tools that have not only been widely applied to traditional optimization problems but have also attracted booming research interest for solving the above complex continuous optimization problems in recent years. To show how promising and efficient EC algorithms are in dealing with the 5-M challenges, this paper presents a comprehensive survey built on a novel taxonomy organized by the function of the approaches: reducing problem difficulty, increasing algorithm diversity, accelerating convergence, reducing running time, and extending the application field. Moreover, some future research directions for applying EC algorithms to complex continuous optimization problems are proposed and discussed. We believe such a survey can draw attention, raise discussion, and inspire new ideas for EC research on complex continuous optimization problems and real-world applications.


2021 ◽  
Author(s):  
Takumi Sonoda ◽  
Masaya Nakata

Surrogate-assisted multi-objective evolutionary algorithms have advanced the field of computationally expensive optimization, but their progress is often restricted to low-dimensional problems. This manuscript presents a multiple-classifiers-assisted evolutionary algorithm based on decomposition, adapted to high-dimensional expensive problems on the strength of two insights. First, compared to approximation-based surrogates, the accuracy of classification-based surrogates remains robust when only a few high-dimensional training samples are available. Second, multiple local classifiers can hedge against the risk of over-fitting. Accordingly, the proposed algorithm builds multiple support vector machine classifiers on top of a decomposition-based multi-objective algorithm, wherein each local classifier is trained for a corresponding scalarization function. Experimental results statistically confirm that the proposed algorithm is competitive with state-of-the-art algorithms and computationally efficient as well.
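
A hypothetical sketch of the surrogate layer only: one SVM classifier per weight vector, each trained to label solutions as promising or not under its own Tchebycheff scalarization (the toy bi-objective problem, weight vectors, and median-split labeling are assumptions for illustration):

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(4)
DIM, N = 50, 80                                  # high-dim decision space

def objectives(X):                               # toy bi-objective problem
    f1 = np.sum(X ** 2, axis=1)
    f2 = np.sum((X - 1) ** 2, axis=1)
    return np.stack([f1, f2], axis=1)

X = rng.uniform(0, 1, size=(N, DIM))
F = objectives(X)
z = F.min(axis=0)                                # ideal point

weights = [np.array([w, 1 - w]) for w in (0.25, 0.5, 0.75)]
classifiers = []
for w in weights:
    g = np.max(w * (F - z), axis=1)              # Tchebycheff scalarization
    labels = (g <= np.median(g)).astype(int)     # promising vs. not
    clf = SVC(kernel="rbf").fit(X, labels)       # local classifier for this w
    classifiers.append(clf)

# Prescreening: offspring that some local classifier deems promising are kept
# for real (expensive) evaluation; the rest are filtered out cheaply.
offspring = rng.uniform(0, 1, size=(20, DIM))
votes = np.stack([clf.predict(offspring) for clf in classifiers])
print("offspring flagged promising:", np.where(votes.any(axis=0))[0])
```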

