A hybrid heuristic parallel method of global optimization

Author(s):  
К.В. Пушкарев ◽  
В.Д. Кошур

The problem of finding the global minimum of a continuous objective function of multiple variables in a multidimensional parallelepiped is considered. A hybrid heuristic parallel method for solving complicated global optimization problems is proposed. The method is based on combining and hybridizing various methods and on multi-agent technology. It includes both new methods (for example, a method of neural network approximation of inverse coordinate mappings, which uses Generalized Regression Neural Networks (GRNN) to map objective function values to coordinates) and modified classical methods (for example, a modified Hooke-Jeeves method). An implementation of the proposed method as a cross-platform (at the source code level) library written in C++, which uses message passing via MPI (Message Passing Interface), is briefly discussed. The method is compared with 21 modern global optimization methods and with a genetic algorithm on 28 test objective functions of 50 variables.
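The inverse-mapping idea above, predicting promising coordinates from objective function values with a GRNN (Gaussian-kernel Nadaraya-Watson regression from f-values to coordinates), can be sketched as follows. The probe points, the kernel width `sigma`, and the target value are illustrative assumptions, not the authors' settings.

```python
import math

def grnn_inverse(probes, target_f, sigma=0.5):
    """Predict coordinates for a desired objective value via Gaussian
    kernel regression from objective values to coordinates (GRNN idea)."""
    num = [0.0] * len(probes[0][0])
    den = 0.0
    for x, f in probes:
        w = math.exp(-((f - target_f) ** 2) / (2 * sigma ** 2))
        den += w
        for i, xi in enumerate(x):
            num[i] += w * xi
    return [n / den for n in num]

# Probe points sampled from f(x) = x1^2 + x2^2 (toy objective).
probes = [((1.0, 0.0), 1.0), ((0.0, 1.0), 1.0),
          ((0.5, 0.5), 0.5), ((2.0, 2.0), 8.0)]
# Ask for coordinates where f would be smaller than any probe value.
guess = grnn_inverse(probes, target_f=0.0)
```

The prediction is dominated by the probes with the lowest objective values, so the returned point lands near the best region seen so far.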

2020 ◽  
Author(s):  
Alberto Bemporad ◽  
Dario Piga

Abstract This paper proposes a method for solving optimization problems in which the decision-maker cannot evaluate the objective function, but rather can only express a preference such as “this is better than that” between two candidate decision vectors. The algorithm described in this paper aims at reaching the global optimizer by iteratively proposing to the decision maker a new comparison to make, based on actively learning a surrogate of the latent (unknown and perhaps unquantifiable) objective function from past sampled decision vectors and pairwise preferences. A radial-basis-function surrogate is fit via linear or quadratic programming, satisfying, if possible, the preferences expressed by the decision maker on existing samples. The surrogate is used to propose a new sample of the decision vector for comparison with the current best candidate based on two possible criteria: minimize a combination of the surrogate and an inverse distance weighting function, to balance exploitation of the surrogate against exploration of the decision space, or maximize a function related to the probability that the new candidate will be preferred. Compared to active preference learning based on Bayesian optimization, we show that our approach is competitive in that, within the same number of comparisons, it usually approaches the global optimum more closely and is computationally lighter. Applications of the proposed algorithm to a set of benchmark global optimization problems, to multi-objective optimization, and to optimal tuning of a cost-sensitive neural network classifier for object recognition from images are described in the paper. MATLAB and Python implementations of the algorithms described in the paper are available at http://cse.lab.imtlucca.it/~bemporad/glis.
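The first acquisition criterion described above, surrogate value minus an inverse-distance-weighting (IDW) exploration term, can be sketched in one dimension. The toy surrogate, the candidate grid, and the weight `delta` are illustrative assumptions, not the GLIS defaults.

```python
def idw_variance(x, samples):
    """IDW exploration term: zero at existing samples, larger far from them."""
    if any(abs(x - s) < 1e-12 for s in samples):
        return 0.0
    w = sum(1.0 / (x - s) ** 2 for s in samples)
    return 1.0 / w

def next_sample(surrogate, samples, candidates, delta=1.0):
    """Pick the candidate minimizing surrogate minus weighted exploration."""
    return min(candidates,
               key=lambda x: surrogate(x) - delta * idw_variance(x, samples))

surrogate = lambda x: (x - 0.3) ** 2   # stand-in for the fitted RBF surrogate
samples = [0.0, 1.0]                   # decision vectors compared so far
cands = [i / 100 for i in range(101)]
x_new = next_sample(surrogate, samples, cands, delta=0.0)  # pure exploitation
```

With `delta=0` the rule simply exploits the surrogate minimum; increasing `delta` pushes the next sample away from already-compared decision vectors.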


2012 ◽  
Vol 532-533 ◽  
pp. 1115-1119
Author(s):  
Xiao Mei Guo ◽  
Wei Zhao ◽  
Li Hong Zhang ◽  
Wen Hua Yu

This paper introduces a parallel FDTD (Finite Difference Time Domain) algorithm based on the MPI (Message Passing Interface) parallel environment and a workstation with NUMA (Non-Uniform Memory Access) architecture. The FDTD computation is carried out independently on the local mesh of each process. Data are exchanged by communication between adjacent subdomains to realize the parallel FDTD method. The results show that the serial and parallel algorithms are consistent and that computing efficiency is effectively improved.
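The subdomain exchange described above can be illustrated without an actual MPI runtime: the sketch below splits a one-dimensional explicit stencil update into two subdomains with one-cell ghost layers and swaps the edge values each step, standing in for the point-to-point exchange (e.g., `MPI_Sendrecv`) between adjacent processes. The stencil and grid are toy assumptions, not the paper's FDTD scheme.

```python
def step(u):
    """One explicit 3-point stencil update; boundary cells held fixed."""
    return [u[0]] + [(u[i - 1] + u[i + 1]) / 2 for i in range(1, len(u) - 1)] + [u[-1]]

def run_serial(u, steps):
    for _ in range(steps):
        u = step(u)
    return u

def run_decomposed(u, steps):
    """Same update split into two subdomains with ghost-cell exchange."""
    mid = len(u) // 2
    left, right = u[:mid], u[mid:]
    for _ in range(steps):
        # "Halo exchange": each subdomain receives its neighbour's edge cell.
        lg = left + [right[0]]    # left subdomain plus right ghost cell
        rg = [left[-1]] + right   # left ghost cell plus right subdomain
        left = step(lg)[:-1]      # update, then drop the ghost
        right = step(rg)[1:]
    return left + right

u0 = [0.0] * 8
u0[3] = 1.0
parallel_style = run_decomposed(u0, 5)
serial = run_serial(u0, 5)
```

Because each subdomain sees the neighbour's pre-update edge value, the decomposed run reproduces the serial result exactly, which is the consistency property the abstract reports.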


2014 ◽  
Vol 2014 ◽  
pp. 1-11 ◽  
Author(s):  
Zhongchao Lin ◽  
Yu Zhang ◽  
Shugang Jiang ◽  
Xunwang Zhao ◽  
Jingyan Mo

The parallel higher-order Method of Moments (MoM) based on the message passing interface (MPI) has been successfully used to analyze how the radiation pattern of a microstrip patch array antenna changes when the antenna is mounted at different positions on an airplane. A block-partitioned scheme for the large dense MoM matrix and a block-cyclic matrix distribution scheme are designed to achieve excellent load balance and high parallel efficiency. Numerical results demonstrate that the rigorous parallel Method of Moments can efficiently and accurately solve large, complex electromagnetic problems with composite structures.
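The block-cyclic distribution mentioned above (the same scheme ScaLAPACK uses for dense matrices) can be sketched as a simple ownership map; the block size and process-grid shape below are illustrative assumptions.

```python
from collections import Counter

def owner(i, j, nb, prow, pcol):
    """Process-grid coordinates owning element (i, j) of a dense matrix
    under a 2-D block-cyclic distribution with square blocks of size nb."""
    return ((i // nb) % prow, (j // nb) % pcol)

# 8x8 matrix, 2x2 blocks, 2x2 process grid: count elements per process.
counts = Counter(owner(i, j, 2, 2, 2) for i in range(8) for j in range(8))
```

Cycling blocks over the process grid gives every process an equal share of the matrix, which is what produces the load balance the abstract reports.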


Author(s):  
Aaron Young ◽  
Jay Taves ◽  
Asher Elmquist ◽  
Simone Benatti ◽  
Alessandro Tasora ◽  
...  

Abstract We describe a simulation environment that enables the design and testing of control policies for off-road mobility of autonomous agents. The environment is demonstrated in conjunction with the training and assessment of a reinforcement learning policy that uses sensor fusion and inter-agent communication to enable the movement of mixed convoys of human-driven and autonomous vehicles. Policies learned on rigid terrain are shown to transfer to hard (silt-like) and soft (snow-like) deformable terrains. The environment described performs the following: multi-vehicle multibody dynamics co-simulation in a time/space-coherent infrastructure that relies on the Message Passing Interface standard for low-latency parallel computing; sensor simulation (e.g., camera, GPS, IMU); simulation of a virtual world that can be altered by the agents present in the simulation; and training that uses reinforcement learning to 'teach' the autonomous vehicles to drive in an obstacle-riddled course. The software stack described is open source.


2016 ◽  
Vol 138 (11) ◽  
Author(s):  
Piyush Pandita ◽  
Ilias Bilionis ◽  
Jitesh Panchal

Design optimization under uncertainty is notoriously difficult when the objective function is expensive to evaluate. State-of-the-art techniques, e.g., stochastic optimization or sampling average approximation, fail to learn exploitable patterns from collected data and require a large number of objective function evaluations. There is a need for techniques that alleviate the high cost of information acquisition and select sequential simulations optimally. In the field of deterministic single-objective unconstrained global optimization, the Bayesian global optimization (BGO) approach has been relatively successful in addressing the information acquisition problem. BGO builds a probabilistic surrogate of the expensive objective function and uses it to define an information acquisition function (IAF) that quantifies the merit of making new objective evaluations. In this work, we reformulate the expected improvement (EI) IAF to filter out parametric and measurement uncertainties. We bypass the curse of dimensionality, since the method does not require learning the response surface as a function of the stochastic parameters, and we employ a fully Bayesian interpretation of Gaussian processes (GPs) by constructing a particle approximation of the posterior of their hyperparameters using adaptive Markov chain Monte Carlo (MCMC) to increase the method's robustness. Our approach also quantifies the epistemic uncertainty on the location of the optimum and the optimal value induced by the limited number of objective evaluations used in obtaining it. We verify and validate our approach by solving two synthetic optimization problems under uncertainty and demonstrate it by solving the oil-well placement problem (OWPP) with uncertainties in the permeability field and the oil price time series.
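The expected improvement acquisition function at the core of BGO has the standard closed form for minimization, EI(x) = (y* - mu) Phi(z) + sigma phi(z) with z = (y* - mu) / sigma, where mu and sigma are the GP's predictive mean and standard deviation and y* is the best value observed so far. A minimal sketch (the test values are illustrative, not from the paper):

```python
import math

def expected_improvement(mu, sigma, y_best):
    """Closed-form expected improvement for minimization,
    given the GP predictive mean/std at a candidate point."""
    if sigma <= 0.0:
        return max(y_best - mu, 0.0)
    z = (y_best - mu) / sigma
    phi = math.exp(-z * z / 2) / math.sqrt(2 * math.pi)   # standard normal pdf
    Phi = 0.5 * (1 + math.erf(z / math.sqrt(2)))          # standard normal cdf
    return (y_best - mu) * Phi + sigma * phi

# Two points with the same predicted mean: the more uncertain one has higher EI.
ei_certain = expected_improvement(mu=1.0, sigma=0.1, y_best=1.0)
ei_uncertain = expected_improvement(mu=1.0, sigma=1.0, y_best=1.0)
```

This is how the IAF trades off exploitation (low predicted mean) against exploration (high predictive uncertainty) when selecting the next simulation.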


2019 ◽  
Vol 06 (04) ◽  
pp. 423-437
Author(s):  
Piotr Jędrzejowicz ◽  
Ewa Ratajczak-Ropel

In this paper, a multi-agent system (MAS) based on the A-Team concept is proposed to solve the Distributed Resource-Constrained Multi-Project Scheduling Problem (DRCMPSP). In the DRCMPSP, multiple distributed projects are considered; hence, both the local task schedule for each project and the coordination of shared decisions must be addressed. The DRCMPSP belongs to the class of strongly NP-hard optimization problems, and a multi-agent system is a natural way of solving such problems. The A-Team MAS proposed in this paper has been built using the JABAT environment, where two types of optimization agents are used: local and global. Local optimization agents find solutions for the local projects, and global optimization agents are responsible for coordinating the local projects and finding the global solutions. The approach has been tested experimentally using 140 benchmark problem instances from the MPSPLIB library, with minimization of the Average Project Delay (APD) as the global optimization criterion.
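The A-Team pattern, autonomous optimization agents that asynchronously read solutions from a common memory, try to improve them, and write them back, can be sketched on a toy problem. The two agents and the objective below are illustrative stand-ins for the JABAT local and global agents, not the DRCMPSP formulation.

```python
import random

def a_team(objective, agents, population, rounds, seed=0):
    """Common memory of candidate solutions; each round a randomly chosen
    agent picks a solution, proposes an improvement, and writes it back
    only if it is better (the A-Team acceptance rule)."""
    rng = random.Random(seed)
    memory = [list(s) for s in population]
    for _ in range(rounds):
        agent = rng.choice(agents)
        i = rng.randrange(len(memory))
        cand = agent(memory[i], rng)
        if objective(cand) < objective(memory[i]):
            memory[i] = cand
    return min(memory, key=objective)

obj = lambda x: sum(v * v for v in x)                          # toy objective
tweak = lambda x, rng: [v + rng.uniform(-0.5, 0.5) for v in x]  # "local" agent
halve = lambda x, rng: [v / 2 for v in x]                       # "global" agent
best = a_team(obj, [tweak, halve], [[3.0, -4.0], [5.0, 1.0]], rounds=300)
```

The agents never call each other; all cooperation happens through the shared memory, which is what makes the architecture easy to distribute.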


2019 ◽  
Vol 10 (2) ◽  
pp. 3-31
Author(s):  
Kirill Vladimirovich Pushkaryov

A hybrid method of global optimization, NNAICM-PSO, is presented. It uses neural network approximation of inverse mappings of objective function values to coordinates, combined with particle swarm optimization, to find the global minimum of a continuous objective function of multiple variables with bound constraints. The objective function is viewed as a black box. The method employs groups of moving probe points attracted to goals, as in particle swarm optimization. One possible goal is determined by mapping decreased objective function values to coordinates via modified Dual Generalized Regression Neural Networks constructed from the probe points. The parameters of the search are controlled by an evolutionary algorithm. The algorithm forms a population of evolving rules, each containing a tuple of parameter values. There are two measures of fitness: short-term (charm) and long-term (merit). Charm is used to select rules for reproduction and application; merit determines the survival of an individual. This two-fold system preserves potentially useful individuals from extinction due to short-term changes in the situation. Test problems of 100 variables were solved. The results indicate that evolutionary control is better than random variation of parameters for NNAICM-PSO. With some problems, when rule bases are reused, the error progressively decreases in subsequent runs, which means that the method adapts to the problem.
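The particle-swarm component, probe points attracted to their personal best and the global best positions, follows the standard velocity update v <- w v + c1 r1 (p - x) + c2 r2 (g - x). A minimal sketch on a sphere function; the coefficients are textbook values, not the tuned NNAICM-PSO parameters.

```python
import random

def pso(f, dim, n=20, iters=200, seed=1):
    """Plain particle swarm optimization on [-5, 5]^dim."""
    rng = random.Random(seed)
    xs = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n)]
    vs = [[0.0] * dim for _ in range(n)]
    pbest = [list(x) for x in xs]               # personal best positions
    gbest = min(pbest, key=f)                   # global best position
    for _ in range(iters):
        for k in range(n):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vs[k][d] = (0.7 * vs[k][d]
                            + 1.5 * r1 * (pbest[k][d] - xs[k][d])
                            + 1.5 * r2 * (gbest[d] - xs[k][d]))
                xs[k][d] += vs[k][d]
            if f(xs[k]) < f(pbest[k]):
                pbest[k] = list(xs[k])
                if f(xs[k]) < f(gbest):
                    gbest = list(xs[k])
    return gbest

sphere = lambda x: sum(v * v for v in x)
best = pso(sphere, dim=3)
```

In NNAICM-PSO one of the attraction goals is replaced by the neural-network inverse-mapping prediction, while the update structure stays swarm-like.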


Author(s):  
Piyush Pandita ◽  
Ilias Bilionis ◽  
Jitesh Panchal

Design optimization under uncertainty is notoriously difficult when the objective function is expensive to evaluate. State-of-the-art techniques, e.g., stochastic optimization or sampling average approximation, fail to learn exploitable patterns from collected data and, as a result, tend to require an excessive number of objective function evaluations. There is a need for techniques that alleviate the high cost of information acquisition and select sequential simulations in an optimal way. In the field of deterministic single-objective unconstrained global optimization, the Bayesian global optimization (BGO) approach has been relatively successful in addressing the information acquisition problem. BGO builds a probabilistic surrogate of the expensive objective function and uses it to define an information acquisition function (IAF) whose role is to quantify the merit of making new objective evaluations. Specifically, BGO iterates between making the observations with the largest expected IAF and rebuilding the probabilistic surrogate, until a convergence criterion is met. In this work, we extend the expected improvement (EI) IAF to the case of design optimization under uncertainty. This involves a reformulation of the EI policy that is able to filter out parametric and measurement uncertainties. We bypass the curse of dimensionality, since the method does not require learning the response surface as a function of the stochastic parameters. To increase the robustness of our approach in the low-sample regime, we employ a fully Bayesian interpretation of Gaussian processes by constructing a particle approximation of the posterior of their hyperparameters using adaptive Markov chain Monte Carlo. An additional benefit of our approach is that it can quantify the epistemic uncertainty on the location of the optimum and the optimal value induced by the limited number of objective evaluations used in obtaining it. We verify and validate our approach by solving two synthetic optimization problems under uncertainty. We demonstrate our approach by solving a challenging engineering problem: the oil-well placement problem with uncertainties in the permeability field and the oil price time series.


2014 ◽  
Vol 2014 ◽  
pp. 1-12 ◽  
Author(s):  
Seif-Eddeen K. Fateen ◽  
Adrián Bonilla-Petriciolet

One of the major advantages of stochastic global optimization methods is that they do not require the gradient of the objective function. However, in some cases this gradient is readily available and can be used to improve the numerical performance of stochastic optimization methods, especially the quality and precision of the global optimal solution. In this study, we propose a gradient-based modification to the cuckoo search algorithm, a nature-inspired, swarm-based stochastic global optimization method. We introduce the gradient-based cuckoo search (GBCS) and evaluate its performance against the original algorithm on twenty-four benchmark functions. The use of GBCS improved the reliability and effectiveness of the algorithm in all but four of the tested benchmark problems. GBCS proved to be a strong candidate for solving difficult optimization problems for which the gradient of the objective function is readily available.
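The core GBCS modification, steering the best nest's random step downhill using the sign of the gradient instead of a purely random direction, can be sketched in one dimension. This is a simplified illustration of the idea, not the authors' full algorithm (which keeps the Lévy-flight moves for the other nests).

```python
import random

def gbcs_best_step(x_best, grad, rng, alpha=0.1):
    """GBCS-style move for the best nest: random step magnitude,
    direction chosen downhill from the gradient sign."""
    step = alpha * abs(rng.gauss(0, 1))
    return x_best - step * (1 if grad(x_best) > 0 else -1)

f = lambda x: (x - 2.0) ** 2        # toy objective, minimum at x = 2
grad = lambda x: 2 * (x - 2.0)      # its readily available gradient
rng = random.Random(0)
x = 5.0
for _ in range(200):
    cand = gbcs_best_step(x, grad, rng)
    if f(cand) < f(x):              # greedy acceptance, as in cuckoo search
        x = cand
```

Because the direction is always descent-aligned, fewer proposed moves are wasted than with a purely random walk, which is the source of the reliability gains the abstract reports.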

