global optimisation
Recently Published Documents


TOTAL DOCUMENTS: 287 (past five years: 53)

H-INDEX: 30 (past five years: 3)

Nano Energy, 2021, pp. 106684
Author(s): Daniil Yurchenko, Lucas Queiroz Machado, Junlei Wang, Chris Bowen, Suleiman Sharkh, ...

Mathematics, 2021, Vol 9 (23), pp. 3043
Author(s): Manuel L. Esquível, Nadezhda P. Krasii, Pedro P. Mota, Nélio Machado

We propose a stochastic algorithm for the global optimisation of a regular function, possibly unbounded, defined on a bounded set with regular boundary; a function that attains its extremum on the boundary of its domain of definition. The algorithm is driven by a diffusion process associated with the function through a strictly elliptic operator that ensures an adequate maximum principle. To prevent the algorithm from becoming trapped in a local extremum, we add a pure random search step. We show that an adequate parallelisation procedure can increase the rate of convergence, thus offsetting the main drawback of adding the pure random search step.
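The abstract combines a diffusion process with a pure random-search step. As a rough illustration only (not the authors' construction, which defines the diffusion through a strictly elliptic operator), the sketch below uses a discretised Langevin-type diffusion plus occasional uniform random jumps; all function names and parameter values are invented for the example.

```python
import numpy as np

def diffusion_with_random_search(f, bounds, n_steps=5000, dt=1e-3,
                                 temperature=0.5, p_random=0.05, seed=0):
    """Minimise f on a box via a discretised diffusion (Langevin-type)
    with an occasional pure random-search step.  Illustrative sketch only:
    the paper's diffusion is defined through a strictly elliptic operator,
    not the plain Langevin dynamics used here."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds, dtype=float).T
    x = rng.uniform(lo, hi)                      # random start inside the box
    best_x, best_f = x.copy(), f(x)

    def num_grad(x, h=1e-6):                     # finite-difference gradient
        g = np.zeros_like(x)
        for i in range(len(x)):
            e = np.zeros_like(x); e[i] = h
            g[i] = (f(x + e) - f(x - e)) / (2 * h)
        return g

    for _ in range(n_steps):
        if rng.random() < p_random:              # pure random-search step:
            x = rng.uniform(lo, hi)              # helps escape local extrema
        else:                                    # Euler-Maruyama diffusion step
            noise = rng.normal(size=x.shape)
            x = x - dt * num_grad(x) + np.sqrt(2 * temperature * dt) * noise
            x = np.clip(x, lo, hi)               # stay inside the bounded domain
        fx = f(x)
        if fx < best_f:
            best_x, best_f = x.copy(), fx
    return best_x, best_f

# Toy usage: minimise a quadratic whose minimiser sits near the boundary.
x, fx = diffusion_with_random_search(lambda v: np.sum((v - 0.9) ** 2),
                                     bounds=[(0, 1), (0, 1)])
```

Running several independent copies of this loop in parallel (for example with multiprocessing.Pool) and keeping the best result is one simple way to emulate the parallelisation the abstract refers to.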


2021, Vol 236, pp. 109560
Author(s): Kai Yu, Xiao-feng Liang, Ming-zhi Li, Zhe Chen, Yan-long Yao, ...

Vibration, 2021, Vol 4 (3), pp. 648-665
Author(s): Sina Safari, Julián Londoño Monsalve

Characterisation and quantification of nonlinearities in engineering structures involve selecting and fitting a good mathematical model to a set of experimental vibration data with significant nonlinear features. These tasks require solving an optimisation problem for which it is difficult to choose the best technique a priori. This paper presents a systematic comparison of ten optimisation methods used to select the best nonlinear model and estimate its parameters through nonlinear system identification. The model-selection framework fits the structure's equations of motion using time-domain dynamic response data and takes into account couplings due to the presence of the nonlinearities. Three benchmark problems are used to evaluate the performance of two families of optimisation methods: (i) deterministic local searches and (ii) global optimisation metaheuristics. Hybrid local–global optimisation methods are also examined. All benchmark problems include a free-play nonlinearity commonly found in mechanical structures. Multiple performance criteria are considered, based on computational efficiency and robustness, that is, the ability to find the best nonlinear model. Results show that hybrid methods, namely the multi-start strategy with the local gradient-based Levenberg–Marquardt method and the particle swarm with the Levenberg–Marquardt method, lead to a successful selection of nonlinear models and an accurate estimation of their parameters within acceptable computational times.
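As an illustration of the hybrid multi-start Levenberg–Marquardt strategy that performs well in this comparison, the following sketch wraps SciPy's least_squares (method='lm') in a random multi-start loop. The model, data, and parameter bounds are invented for the example and are not taken from the paper's benchmarks.

```python
import numpy as np
from scipy.optimize import least_squares

def multistart_lm(residuals, param_bounds, n_starts=20, seed=0):
    """Hybrid global-local estimation: random multi-start combined with the
    local Levenberg-Marquardt solver.  A generic sketch, not the paper's
    framework; names and settings here are assumptions."""
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(param_bounds, dtype=float).T
    best = None
    for _ in range(n_starts):
        x0 = rng.uniform(lo, hi)                  # random initial guess in the box
        sol = least_squares(residuals, x0, method="lm")
        if best is None or sol.cost < best.cost:  # keep the lowest residual cost
            best = sol
    return best

# Toy usage: identify stiffness k and damping c of a single-DOF model
# from simulated displacement data (all values below are made up).
t = np.linspace(0, 2, 200)
true_k, true_c = 25.0, 0.8
data = np.exp(-true_c * t) * np.cos(np.sqrt(true_k) * t)

def residuals(p):
    k, c = p
    model = np.exp(-c * t) * np.cos(np.sqrt(np.abs(k)) * t)
    return model - data

fit = multistart_lm(residuals, param_bounds=[(1.0, 100.0), (0.01, 5.0)])
print(fit.x)   # should recover approximately [25.0, 0.8]
```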


2021, Vol 2021 (5)
Author(s): Csaba Balázs, Melissa van Beekveld, Sascha Caron, Barry M. Dillon, ...

Optimisation problems are ubiquitous in particle and astrophysics, and involve locating the optimum of a complicated function of many parameters that may be computationally expensive to evaluate. We describe a number of global optimisation algorithms that are not yet widely used in particle astrophysics, benchmark them against random sampling and existing techniques, and perform a detailed comparison of their performance on a range of test functions. These include four analytic test functions of varying dimensionality, and a realistic example derived from a recent global fit of weak-scale supersymmetry. Although the best algorithm to use depends on the function being investigated, we are able to present general conclusions about the relative merits of random sampling, Differential Evolution, Particle Swarm Optimisation, the Covariance Matrix Adaptation Evolution Strategy, Bayesian Optimisation, Grey Wolf Optimisation, and the PyGMO Artificial Bee Colony, Gaussian Particle Filter and Adaptive Memory Programming for Global Optimisation algorithms.
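A minimal example of this kind of benchmark, restricted to what is available in SciPy: Differential Evolution versus pure random sampling on the Rastrigin test function under a comparable evaluation budget. The paper's actual test suite, algorithm implementations, and budgets differ.

```python
import numpy as np
from scipy.optimize import differential_evolution

# Rastrigin: a standard multi-modal analytic test function (global minimum 0 at the origin).
def rastrigin(x):
    x = np.asarray(x)
    return 10 * len(x) + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

dim = 10
bounds = [(-5.12, 5.12)] * dim
rng = np.random.default_rng(42)

# Baseline: pure random sampling with a fixed evaluation budget.
budget = 20000
samples = rng.uniform(-5.12, 5.12, size=(budget, dim))
random_best = min(rastrigin(s) for s in samples)

# Differential Evolution under a roughly comparable budget
# (about popsize * dim * maxiter evaluations).
de = differential_evolution(rastrigin, bounds, maxiter=100, popsize=20,
                            seed=42, tol=1e-8)

print(f"random sampling best:        {random_best:.3f}")
print(f"differential evolution best: {de.fun:.3f}")
```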


2021, Vol 1 (1), pp. 1-38
Author(s): Dogan Corus, Andrei Lissovoi, Pietro S. Oliveto, Carsten Witt

We analyse the impact of selective pressure on the global optimisation capabilities of steady-state evolutionary algorithms (EAs). For the standard bimodal benchmark function TwoMax, we rigorously prove that using uniform parent selection leads, with high probability, to exponential runtimes for locating both optima for the standard (μ+1) EA and (μ+1) RLS with any polynomial population size. However, we prove that selecting the worst individual as parent leads to efficient global optimisation with overwhelming probability for reasonable population sizes. Since always selecting the worst individual may have detrimental effects on escaping from local optima, we consider the performance of stochastic parent-selection operators with low selective pressure for a function class called TruncatedTwoMax, where one slope is shorter than the other. An experimental analysis shows that EAs equipped with inverse tournament selection, where the loser of the tournament is selected for reproduction, and small tournament sizes globally optimise TwoMax efficiently and escape from the local optima of TruncatedTwoMax with high probability. Thus, they identify both optima efficiently, while uniform (or stronger) selection fails both in theory and in practice. We then show the power of inverse selection on function classes from the literature where populations are essential, by providing rigorous proofs or experimental evidence that it outperforms uniform selection with or without a restart strategy. We conclude the article by confirming our theoretical insights with an empirical analysis of the different selective pressures on standard benchmarks of the classical MaxSat and multidimensional knapsack problems.
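To make the scheme concrete, here is a minimal sketch of a (μ+1) EA with inverse k-tournament parent selection (the tournament loser reproduces) on TwoMax. This is not the authors' experimental code; the population size, tournament size, and evaluation budget are arbitrary choices for illustration.

```python
import numpy as np

def twomax(x):
    """TwoMax: two symmetric optima, the all-ones and the all-zeros string."""
    ones = int(np.sum(x))
    return max(ones, len(x) - ones)

def mu_plus_one_ea(n=50, mu=20, k=2, max_evals=200000, seed=0):
    """(mu+1) EA with inverse k-tournament parent selection: the tournament
    *loser* reproduces.  Minimal sketch of the low-selective-pressure scheme
    discussed above; parameter values are arbitrary."""
    rng = np.random.default_rng(seed)
    pop = [rng.integers(0, 2, n) for _ in range(mu)]
    fits = [twomax(x) for x in pop]
    for _ in range(max_evals):
        # Inverse tournament: sample k individuals, pick the one with lowest fitness.
        contenders = rng.choice(mu, size=k, replace=False)
        parent = pop[min(contenders, key=lambda i: fits[i])]
        # Standard bit-flip mutation with rate 1/n.
        child = parent.copy()
        flips = rng.random(n) < 1.0 / n
        child[flips] ^= 1
        # Steady-state replacement: offspring replaces a worst individual
        # unless it is strictly worse than everyone.
        child_fit = twomax(child)
        worst = int(np.argmin(fits))
        if child_fit >= fits[worst]:
            pop[worst], fits[worst] = child, child_fit
        # Report success once both optima are present in the population.
        if any(np.all(x == 1) for x in pop) and any(np.all(x == 0) for x in pop):
            return True
    return False

print(mu_plus_one_ea())   # True if both TwoMax optima were found within the budget
```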

