Towards Finding a High Level Formulation for Optimization Problems

2013 ◽  
Vol 1 (2) ◽  
pp. 25
Author(s):  
Reza Rafeh


Author(s):  
Breno A. de Melo Menezes ◽  
Nina Herrmann ◽  
Herbert Kuchen ◽  
Fernando Buarque de Lima Neto

Abstract Parallel implementations of swarm intelligence algorithms such as ant colony optimization (ACO) have been widely used to shorten the execution time when solving complex optimization problems. When targeting a GPU environment, developing efficient parallel versions of such algorithms using CUDA can be a difficult and error-prone task even for experienced programmers. To overcome this issue, the parallel programming model of Algorithmic Skeletons simplifies parallel programs by abstracting from low-level features. This is realized by defining common programming patterns (e.g. map, fold and zip) that are later converted into efficient parallel code. In this paper, we show how algorithmic skeletons formulated in the domain-specific language Musket can cope with the development of a parallel implementation of ACO and how that compares to a low-level implementation. Our experimental results show that Musket suits the development of ACO. Besides making it easier for the programmer to deal with parallelization aspects, Musket generates high-performance code with execution times similar to those of low-level implementations.
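As a rough illustration of the skeleton idea, the following minimal C++ sketch (not Musket code; the sequential stand-ins, names, and toy ACO data are assumptions) shows how a map skeleton applies an independent operation to every element while a fold skeleton reduces the results, which is exactly the kind of pattern a skeleton compiler can map to parallel or GPU code:

```cpp
// Illustrative sketch only: plain C++ stand-ins for map/fold skeletons,
// not actual Musket code. Types and names below are assumptions.
#include <algorithm>
#include <iostream>
#include <numeric>
#include <vector>

// A sequential "map" skeleton: apply f to every element independently.
// In a real skeleton framework this is where parallel/GPU code is generated.
template <typename T, typename R, typename F>
std::vector<R> map_skel(const std::vector<T>& xs, F f) {
    std::vector<R> ys(xs.size());
    std::transform(xs.begin(), xs.end(), ys.begin(), f);
    return ys;
}

// A "fold" skeleton: combine all elements with an associative operator.
template <typename T, typename F>
T fold_skel(const std::vector<T>& xs, T init, F combine) {
    return std::accumulate(xs.begin(), xs.end(), init, combine);
}

int main() {
    // Hypothetical ant tour lengths, assumed to be computed elsewhere
    // (kept abstract to stay focused on the skeleton patterns).
    std::vector<double> tour_lengths = {42.0, 37.5, 51.2, 39.9};

    // map: turn lengths into fitness values (shorter tour = higher fitness).
    auto fitness = map_skel<double, double>(
        tour_lengths, [](double len) { return 1.0 / len; });

    // fold: reduce to the best (maximum) fitness found this iteration.
    double best = fold_skel(fitness, 0.0,
                            [](double a, double b) { return std::max(a, b); });

    std::cout << "best fitness this iteration: " << best << "\n";
}
```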


VLSI Design ◽  
2012 ◽  
Vol 2012 ◽  
pp. 1-11
Author(s):  
M. Walton ◽  
O. Ahmed ◽  
G. Grewal ◽  
S. Areibi

Scatter Search is an effective and established population-based metaheuristic that has been used to solve a variety of hard optimization problems. However, the time required to find high-quality solutions can become prohibitive as problem sizes grow. In this paper, we present a hardware implementation of Scatter Search on a field-programmable gate array (FPGA). Our objective is to improve the run time of Scatter Search by exploiting the potentially massive performance benefits that are available through the native parallelism in hardware. When implementing Scatter Search we employ two different high-level languages (HLLs): Handel-C and Impulse-C. Our empirical results show that by effectively exploiting source-code optimizations, data parallelism, and pipelining, a 28x speedup over software can be achieved.
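The following schematic C++ sketch of the Scatter Search loop (not the paper's Handel-C/Impulse-C code; the objective function, moves, and reference set sizes are placeholders and assumptions) only illustrates which steps, pairwise combination and local improvement, are independent of each other and therefore candidates for the hardware parallelism and pipelining described above:

```cpp
// Schematic Scatter Search loop in plain C++. All names and the toy
// objective are assumptions used only to show the algorithm's structure.
#include <algorithm>
#include <random>
#include <vector>

using Solution = std::vector<int>;

double evaluate(const Solution& s) {             // placeholder objective (minimize)
    double sum = 0.0;
    for (int v : s) sum += v * v;
    return sum;
}

Solution combine(const Solution& a, const Solution& b, std::mt19937& rng) {
    Solution child(a.size());
    std::uniform_int_distribution<int> coin(0, 1);
    for (size_t i = 0; i < a.size(); ++i)        // uniform recombination
        child[i] = coin(rng) ? a[i] : b[i];
    return child;
}

Solution improve(Solution s) {                   // trivial stand-in for local improvement
    for (int& v : s) if (v > 0) --v;
    return s;
}

int main() {
    std::mt19937 rng(42);
    std::uniform_int_distribution<int> gene(0, 9);

    // Reference set of solutions (here: random initialization).
    std::vector<Solution> refset(5, Solution(8));
    for (auto& s : refset) for (int& v : s) v = gene(rng);

    for (int iter = 0; iter < 10; ++iter) {
        std::vector<Solution> candidates;
        // Each pair combination + improvement is independent: this is the
        // data parallelism an FPGA pipeline can exploit.
        for (size_t i = 0; i < refset.size(); ++i)
            for (size_t j = i + 1; j < refset.size(); ++j)
                candidates.push_back(improve(combine(refset[i], refset[j], rng)));

        // Reference set update: keep the best solutions overall.
        candidates.insert(candidates.end(), refset.begin(), refset.end());
        std::sort(candidates.begin(), candidates.end(),
                  [](const Solution& a, const Solution& b) {
                      return evaluate(a) < evaluate(b);
                  });
        candidates.resize(refset.size());
        refset = candidates;
    }
}
```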


2020 ◽  
Vol 1 (1) ◽  
Author(s):  
Chi Jin ◽  
Anson Maitland ◽  
John McPhee

Abstract In this paper, we address nonlinear moving horizon estimation (NMHE) of vehicle lateral speed, as well as the road friction coefficient, using measured signals from sensors common to modern series-production automobiles. Due to nonlinear vehicle dynamics, a standard nonlinear moving horizon formulation leads to non-convex optimization problems, and numerical optimization algorithms can be trapped in undesirable local minima, leading to incorrect solutions. To address the challenge of non-convex cost functions, we propose an estimator with a two-level hierarchy. At the high level, a grid search combined with numerical optimization aims to find reference estimates that are sufficiently close to the global optimum. The reference estimates are refined at the low level, leading to high-precision solutions. Our algorithm ensures that the estimates converge to the true values for the nominal model without the need for accurate initialization. Our design is tested in simulation with both the nominal model and a high-fidelity model of Autonomoose, the self-driving car of the University of Waterloo.
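A minimal C++ sketch of the two-level idea, assuming a toy one-dimensional non-convex cost rather than the paper's NMHE problem: a coarse grid search supplies a reference estimate near the global minimum, which a local optimizer then refines.

```cpp
// Minimal sketch: coarse grid search to get near the global minimum of a
// non-convex cost, followed by local refinement. The cost function and
// step sizes are toy stand-ins, not the paper's NMHE formulation.
#include <cmath>
#include <iostream>

// Toy non-convex cost with several local minima.
double cost(double x) { return std::sin(3.0 * x) + 0.1 * x * x; }

int main() {
    // High level: evaluate the cost on a coarse grid and keep the best point.
    double best_x = -5.0, best_c = cost(best_x);
    for (double x = -5.0; x <= 5.0; x += 0.25) {
        double c = cost(x);
        if (c < best_c) { best_c = c; best_x = x; }
    }

    // Low level: refine the reference estimate with a few gradient steps
    // (finite-difference gradient, fixed step size; a real NMHE solver
    // would use a proper NLP method here).
    double x = best_x;
    const double h = 1e-5, step = 0.05;
    for (int i = 0; i < 100; ++i) {
        double grad = (cost(x + h) - cost(x - h)) / (2.0 * h);
        x -= step * grad;
    }

    std::cout << "coarse estimate: " << best_x
              << ", refined estimate: " << x
              << ", cost: " << cost(x) << "\n";
}
```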


2003 ◽  
Vol 13 (2) ◽  
pp. 139-151 ◽  
Author(s):  
Edmund Burke ◽  
Yuri Bykov ◽  
James Newall ◽  
Sanja Petrovic

A common weakness of local search metaheuristics, such as Simulated Annealing, in solving combinatorial optimization problems is the need to set a number of parameters. This tends to significantly increase the total time required to solve the problem and often demands a high level of experience from the user. This paper is motivated by the goal of overcoming this drawback by employing "parameter-free" techniques in the context of automatically solving course timetabling problems. We employ local search techniques with "straightforward" parameters, i.e. ones that an inexperienced user can easily understand. In particular, we present an extended variant of the "Great Deluge" algorithm, which requires only two parameters (which can be interpreted as the search time and an estimate of the required level of solution quality). These parameters affect the performance of the algorithm so that a longer search provides a better result - as long as we can intelligently stop the approach from converging too early. Hence, a user can choose a balance between processing time and the quality of the solution. The proposed method has been tested on a range of university course timetabling problems and the results were evaluated within an International Timetabling Competition. The effectiveness of the proposed technique has been confirmed by the high quality of its results, which represented the third best overall average rating among 21 participants and the best solutions on 8 of the 23 test problems.
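A minimal C++ sketch of a Great Deluge acceptance rule driven by only two user-chosen parameters, the evaluation budget and a target cost; the move operator and cost function are toy placeholders (assumptions), not the timetabling neighbourhoods used in the paper:

```cpp
// Sketch of a Great Deluge acceptance scheme with two parameters:
// a search budget and a desired final cost. The toy cost and move are
// placeholders, not the paper's timetabling components.
#include <cmath>
#include <iostream>
#include <random>

double cost(double x) { return std::pow(x - 3.0, 2) + std::sin(5.0 * x); }

int main() {
    std::mt19937 rng(1);
    std::normal_distribution<double> move(0.0, 0.5);

    const long   budget       = 100000;  // parameter 1: search time
    const double desired_cost = -0.9;    // parameter 2: target quality

    double x = 0.0, c = cost(x);
    double level = c;                                    // initial water level
    const double decay = (c - desired_cost) / budget;    // linear lowering

    for (long it = 0; it < budget; ++it) {
        double x_new = x + move(rng);
        double c_new = cost(x_new);
        // Accept improvements, or any candidate not worse than the level.
        if (c_new <= c || c_new <= level) { x = x_new; c = c_new; }
        level -= decay;                                  // lower the level
    }
    std::cout << "final x = " << x << ", cost = " << c << "\n";
}
```

The key point of the sketch is that the acceptance threshold is derived entirely from the budget and the target quality, so no cooling schedule or other algorithm-specific parameters need to be tuned by the user.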


2021 ◽  
Vol 8 (4) ◽  
pp. 736-746
Author(s):  
O. Mellouli ◽  
I. Hafidi ◽  
A. Metrane ◽  
...  

Hyper-heuristics are a subclass of high-level search methods that operate on a search space of low-level heuristics. Their main objective is to improve the level of generality for solving combinatorial optimization problems using two main components: a heuristic selection methodology and a move acceptance criterion, which together ensure intensification and diversification [1]. Thus, rather than working directly on the problem's solutions and selecting one of them at each stage to proceed to the next step, hyper-heuristics operate on a space of low-level heuristics. The choice function is one of the hyper-heuristics that have proven their efficiency in solving combinatorial optimization problems [2–4]. At each iteration, a heuristic is selected based on a score computed by combining three different measures that balance intensification and diversification in the heuristic choice process. The heuristic with the highest score is then applied to the problem. The key to the success of the choice function is therefore choosing the right weight parameters for its three measures. In this study, we review the state of the art in hyper-heuristic research and propose a new method that automatically controls these weight parameters based on the Boltzmann function. The results obtained from its application on five problem domains are compared with those of the standard and modified choice functions proposed by Drake et al. [2,3].
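A toy C++ sketch of a choice-function-style score, assuming three measures per heuristic (recent improvement, performance following the previously applied heuristic, and time since last use) combined with fixed example weights; this is an illustration of the general idea, not the Boltzmann-controlled weighting proposed in the paper:

```cpp
// Toy choice-function score: intensification (f1, f2) plus
// diversification (f3), with illustrative fixed weights. The statistics
// and weights here are assumptions, not the paper's adaptive scheme.
#include <iostream>
#include <vector>

struct HeuristicStats {
    double f1;          // recent improvement achieved by this heuristic
    double f2;          // improvement when applied after the previous one
    double time_since;  // iterations since it was last selected (f3)
};

int select_heuristic(const std::vector<HeuristicStats>& stats,
                     double w1, double w2, double w3) {
    int best = 0;
    double best_score = -1e300;
    for (int h = 0; h < static_cast<int>(stats.size()); ++h) {
        double score = w1 * stats[h].f1 + w2 * stats[h].f2
                     + w3 * stats[h].time_since;
        if (score > best_score) { best_score = score; best = h; }
    }
    return best;
}

int main() {
    std::vector<HeuristicStats> stats = {
        {0.8, 0.3, 1.0}, {0.2, 0.9, 4.0}, {0.1, 0.1, 10.0}};
    // Fixed example weights; the paper's contribution is adapting these
    // automatically during the search.
    std::cout << "selected heuristic: "
              << select_heuristic(stats, 1.0, 1.0, 0.05) << "\n";
}
```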


Author(s):  
Julien Lepagnot ◽  
Lhassane Idoumghar ◽  
Mathieu Brévilliers ◽  
Maha Idrissi-Aouad

2008 ◽  
Vol 18 (01) ◽  
pp. 133-147
Author(s):  
Ignacio Peláez ◽  
Francisco Almeida ◽  
Daniel González

Dynamic Programming is an important problem-solving technique used to solve a wide variety of optimization problems. Dynamic Programming programs are commonly designed as individual applications, and software tools are usually tailored to specific classes of recurrences and methodologies. This contrasts with some other algorithmic techniques, where a single generic program may solve all instances. We have developed a general skeleton tool providing support for a wide range of Dynamic Programming methodologies on different parallel architectures. Genericity, flexibility and efficiency are central concerns of the design strategy. Parallelism is supplied to the user in a transparent manner through a common sequential interface. A set of test problems representative of different classes of Dynamic Programming formulations has been used to validate our skeleton on an IBM-SP.
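A minimal C++ sketch of what such a generic DP skeleton interface could look like, assuming the user supplies only the recurrence for a table cell while the skeleton owns the traversal (the interface and the 0/1 knapsack example instance are assumptions, not the tool described in the paper):

```cpp
// Sketch of a generic dynamic-programming "skeleton": the user provides
// only the recurrence; the skeleton fills the table. In a parallel back
// end, the traversal is where work would be distributed.
#include <algorithm>
#include <iostream>
#include <vector>

// Fill an (n+1) x (m+1) table row by row; rec(table, i, j) must only read
// cells from earlier rows, so rows could be processed as parallel wavefronts.
template <typename Rec>
std::vector<std::vector<long>> dp_skeleton(int n, int m, Rec rec) {
    std::vector<std::vector<long>> t(n + 1, std::vector<long>(m + 1, 0));
    for (int i = 1; i <= n; ++i)
        for (int j = 0; j <= m; ++j)
            t[i][j] = rec(t, i, j);
    return t;
}

int main() {
    // Example instance: 0/1 knapsack with capacity 10.
    std::vector<int>  weight = {3, 4, 5};
    std::vector<long> value  = {30, 50, 60};
    int capacity = 10;

    auto table = dp_skeleton(static_cast<int>(weight.size()), capacity,
        [&](const std::vector<std::vector<long>>& t, int i, int j) -> long {
            long without = t[i - 1][j];
            if (weight[i - 1] > j) return without;
            return std::max(without, t[i - 1][j - weight[i - 1]] + value[i - 1]);
        });

    std::cout << "best value: " << table[weight.size()][capacity] << "\n";
}
```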


2018 ◽  
Vol 2 (2) ◽  
pp. 2-13 ◽  
Author(s):  
P. V. Santos ◽  
José Carlos Alves ◽  
João Canas Ferreira

In this work we present a reconfigurable and scalable custom processor array for solving optimization problems using cellular genetic algorithms (cGAs), based on a regular fabric of processing nodes and local memories. Cellular genetic algorithms are a variant of the well-known genetic algorithm that can conveniently exploit the coarse-grain parallelism afforded by this architecture. To ease the design of the proposed computing engine for different optimization problems, a high-level synthesis design flow is proposed, in which the problem-dependent operations of the algorithm are specified in C++ and synthesized to custom hardware. A spectrum allocation problem was used as a case study and successfully implemented on a Virtex-6 FPGA device, showing significant computing acceleration.
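An illustrative software sketch in C++ of one cellular GA generation on a toroidal grid, where each cell only interacts with its immediate neighbours, which is what maps naturally onto an array of processing nodes with local memories; the fitness function is a OneMax placeholder and all parameters are assumptions, not the spectrum allocation case study from the paper:

```cpp
// One synchronous cGA generation on a toroidal grid; OneMax placeholder
// fitness. All sizes, operators and names are illustrative assumptions.
#include <algorithm>
#include <random>
#include <vector>

constexpr int W = 8, H = 8, GENES = 16;
using Individual = std::vector<int>;

int fitness(const Individual& ind) {                   // OneMax: count ones
    return static_cast<int>(std::count(ind.begin(), ind.end(), 1));
}

int main() {
    std::mt19937 rng(7);
    std::uniform_int_distribution<int> bit(0, 1);

    std::vector<Individual> grid(W * H, Individual(GENES));
    for (auto& ind : grid) for (int& g : ind) g = bit(rng);

    auto idx = [](int x, int y) { return ((y + H) % H) * W + (x + W) % W; };

    std::vector<Individual> next = grid;
    for (int y = 0; y < H; ++y) {
        for (int x = 0; x < W; ++x) {
            // Pick the best neighbour (von Neumann neighbourhood) as mate.
            int nbr[4] = {idx(x + 1, y), idx(x - 1, y), idx(x, y + 1), idx(x, y - 1)};
            int mate = *std::max_element(nbr, nbr + 4, [&](int a, int b) {
                return fitness(grid[a]) < fitness(grid[b]);
            });

            // Uniform crossover with the mate, plus a single bit-flip mutation.
            Individual child = grid[idx(x, y)];
            for (int g = 0; g < GENES; ++g)
                if (bit(rng)) child[g] = grid[mate][g];
            child[rng() % GENES] ^= 1;

            // Replace the current cell only if the child is no worse.
            if (fitness(child) >= fitness(grid[idx(x, y)]))
                next[idx(x, y)] = child;
        }
    }
    grid = next;                                       // synchronous update
}
```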

