Ant Colony Solving Multiple Constraints Problem: Vehicle Route Allocation

Author(s):  
Sorin C. Negulescu ◽  
Claudiu V. Kifor ◽  
Constantin Oprean

Ant colonies are successfully used nowadays as multi-agent systems (MAS) to solve difficult optimization problems such as the travelling salesman problem (TSP), the quadratic assignment problem (QAP), the vehicle routing problem (VRP), graph colouring and satisfiability. The objective of the research presented in this paper is to adapt an improved version of the Ant Colony Optimisation (ACO) algorithm, namely the Elitist Ant System (EAS), to solve the Vehicle Route Allocation Problem (VRAP). After a brief introduction to MAS and their characteristics in the first section, the paper presents the rationale in the second section, where the ACO algorithm and its common extensions are described. The third section explains the steps that must be followed to adapt EAS for solving the VRAP. The resulting algorithm is illustrated in the fourth section. Section five closes the paper with conclusions and future work.
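The Elitist Ant System the authors build on differs from the basic Ant System mainly in its pheromone update, which gives the best-so-far tour an extra deposit. A minimal sketch of that update (generic EAS, not the paper's VRAP-specific adaptation; parameter values are illustrative):

```python
def elitist_pheromone_update(tau, ant_tours, best_tour, rho=0.5, Q=1.0, e=5.0):
    """One Elitist Ant System pheromone update (a generic sketch, not the
    authors' VRAP-specific variant).

    tau: dict mapping edge (i, j) -> pheromone level
    ant_tours: list of (tour, tour_length) produced this iteration
    best_tour: (tour, tour_length) of the best-so-far tour
    """
    # Evaporation on every edge.
    for edge in tau:
        tau[edge] *= (1.0 - rho)
    # Each ant deposits pheromone inversely proportional to its tour length.
    for tour, length in ant_tours:
        for i, j in zip(tour, tour[1:] + tour[:1]):
            tau[(i, j)] = tau.get((i, j), 0.0) + Q / length
    # Elitist reinforcement: the best-so-far tour gets extra weight e.
    best, best_len = best_tour
    for i, j in zip(best, best[1:] + best[:1]):
        tau[(i, j)] = tau.get((i, j), 0.0) + e * Q / best_len
    return tau
```

Here `rho` is the evaporation rate and `e` the elitist weight; larger `e` concentrates the search around the best tour found so far.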

2014 ◽  
Vol 548-549 ◽  
pp. 1206-1212
Author(s):  
Sevda Dayıoğlu Gülcü ◽  
Şaban Gülcü ◽  
Humar Kahramanli

Recently, some studies have been inspired by animals that live in colonies in nature. The Ant Colony System is one of them: a meta-heuristic method developed from the food-searching behaviour of ant colonies. The Ant Colony System has been applied to many discrete optimization problems, such as the travelling salesman problem. This study aims to solve the travelling salesman problem using the Ant Colony System.
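For reference, the core of the Ant Colony System is its pseudo-random-proportional state-transition rule: with probability q0 an ant greedily exploits the most desirable edge, and otherwise samples an edge proportionally to pheromone and heuristic desirability. A textbook-style sketch (names and parameter values are illustrative):

```python
import random

def acs_next_city(current, unvisited, tau, eta, q0=0.9, beta=2.0, rng=random):
    """Ant Colony System state-transition rule (pseudo-random proportional):
    with probability q0 exploit the best edge, otherwise sample proportionally.

    tau[(i, j)]: pheromone on edge (i, j); eta[(i, j)]: heuristic value,
    e.g. 1/distance. A textbook sketch with illustrative parameters.
    """
    scores = {j: tau[(current, j)] * eta[(current, j)] ** beta
              for j in unvisited}
    if rng.random() < q0:
        # Exploitation: deterministically pick the highest-scoring city.
        return max(scores, key=scores.get)
    # Biased exploration: roulette-wheel selection over the scores.
    total = sum(scores.values())
    r = rng.random() * total
    acc = 0.0
    for j, s in scores.items():
        acc += s
        if r <= acc:
            return j
    return j  # fallback for floating-point rounding edge cases
```

ACS additionally applies a local pheromone update on each traversed edge, which the extension discussed below also relies on.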


Author(s):  
Thanet Satukitchai ◽  
Kietikul Jearanaitanakij

Ant Colony Optimization (ACO) is a well-known technique for solving the Travelling Salesman Problem (TSP). The first implementation of ACO was the Ant System. It can be used to solve different combinatorial optimization problems, e.g., TSP, job-shop scheduling and quadratic assignment. However, one of its disadvantages is that it is easily trapped in local optima. Although Ant Colony System (ACS) attempts to escape local optima by introducing a local pheromone updating rule, the chance of being trapped in local optima persists. This paper presents an extension of the ACS algorithm that modifies the solution construction phase, in which ants move and build their tours, to reduce the duplication of tours produced by ants. This modification forces each ant to select a unique path that has not been visited by any other ant in the current iteration. As a result, the modified ACS explores more of the search space than the conventional ACS. Experimental results on five standard benchmarks from TSPLIB show improvements in both the quality and the number of optimal solutions found.
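The duplicate-avoidance idea can be approximated at the tour level: detect when a freshly built tour is the same cycle as one already produced this iteration and make the ant rebuild. The paper modifies the construction phase itself, so the sketch below only illustrates the intent; `build_tour` is a stand-in for any ACS tour constructor:

```python
def canonical(tour):
    """Rotation- and direction-invariant key so identical cycles compare
    equal, e.g. [1, 2, 3, 0] and [0, 3, 2, 1] map to the same tuple."""
    i = tour.index(min(tour))
    fwd = tuple(tour[i:] + tour[:i])
    rev = tuple(reversed(fwd))
    rev = rev[-1:] + rev[:-1]  # re-anchor the reversed cycle on the same start
    return min(fwd, rev)

def build_unique_tours(build_tour, n_ants, max_retries=10):
    """Collect tours from n_ants, retrying an ant whose tour duplicates one
    already produced this iteration (an illustration of the paper's idea)."""
    seen, tours = set(), []
    for _ in range(n_ants):
        tour = build_tour()
        for _ in range(max_retries):
            if canonical(tour) not in seen:
                break
            tour = build_tour()  # duplicate cycle: construct a fresh tour
        seen.add(canonical(tour))
        tours.append(tour)
    return tours
```

A retry cap (`max_retries`) is needed because on small instances there may be fewer distinct cycles than ants.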


2013 ◽  
Vol 10 (3) ◽  
pp. 125-132 ◽  
Author(s):  
Lu Wang ◽  
Zhiliang Wang ◽  
Siquan Hu ◽  
Lei Liu

Games ◽  
2021 ◽  
Vol 12 (1) ◽  
pp. 8
Author(s):  
Gustavo Chica-Pedraza ◽  
Eduardo Mojica-Nava ◽  
Ernesto Cadena-Muñoz

Multi-Agent Systems (MAS) have been used to solve several optimization problems in control systems. MAS make it possible to understand the interactions between agents and the complexity of the system, thus generating functional models that are closer to reality. However, these approaches assume that information between agents is always available, which implies a full-information model. Approaches for tackling scenarios in which information constraints are a relevant issue have been growing in importance. In this sense, game theory appears as a useful technique that uses the concept of strategy to analyze the interactions of the agents and maximize agent outcomes. In this paper, we propose a distributed learning-based control method that allows analyzing the effect of exploration in MAS. The dynamics obtained use Q-learning from reinforcement learning as a way to include the concept of exploration in the classic exploration-less replicator dynamics equation. The Boltzmann distribution is then used to introduce the Boltzmann-Based Distributed Replicator Dynamics as a tool for controlling agents' behaviors. This distributed approach can be used in several engineering applications where communication constraints between agents must be considered. The behavior of the proposed method is analyzed using a smart grid application for validation purposes. Results show that, despite lacking full information about the system, by controlling some parameters of the method it behaves similarly to traditional centralized approaches.
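The exploration term referred to here can be illustrated with the well-known Q-learning dynamics of Tuyls et al., which add a Boltzmann (entropy) term, scaled by a temperature, to the classic replicator equation. The sketch below is a generic centralized Euler step under that formulation, not the paper's distributed smart-grid dynamics:

```python
import math

def boltzmann_rd_step(x, f, tau=0.1, dt=0.01):
    """One Euler step of replicator dynamics with a Boltzmann exploration
    term (Q-learning dynamics in the style of Tuyls et al.; a generic sketch,
    not the paper's distributed formulation).

    x: strategy shares (positive, summing to 1); f: per-strategy payoffs;
    tau: temperature (higher tau -> more exploration).
    """
    fbar = sum(xi * fi for xi, fi in zip(x, f))   # population-average payoff
    ent = sum(xi * math.log(xi) for xi in x)      # negative entropy
    # dx_i/dt = x_i * [ (f_i - fbar)/tau - (ln x_i - sum_j x_j ln x_j) ]
    dx = [xi * ((fi - fbar) / tau - (math.log(xi) - ent))
          for xi, fi in zip(x, f)]
    x_new = [xi + dt * di for xi, di in zip(x, dx)]
    s = sum(x_new)                                # renormalize against drift
    return [xi / s for xi in x_new]
```

With payoff vector [1, 0] and a uniform start, repeated steps shift mass toward the first strategy while the entropy term keeps the second one from vanishing entirely.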


Energies ◽  
2018 ◽  
Vol 11 (8) ◽  
pp. 1928 ◽  
Author(s):  
Alfonso González-Briones ◽  
Fernando De La Prieta ◽  
Mohd Mohamad ◽  
Sigeru Omatu ◽  
Juan Corchado

This article reviews the state-of-the-art developments in Multi-Agent Systems (MASs) and their application to energy optimization problems. This methodology and related tools have contributed to changes in various paradigms used in energy optimization. The behavior of agents and the interactions between them are key elements that must be understood in order to model energy optimization solutions that are robust, scalable and context-aware. The concept of MAS is introduced in this paper and compared with traditional approaches to the development of energy optimization solutions. The different types of agent-based architectures are described, the role played by the environment is analysed and we look at how MASs recognize the characteristics of the environment in order to adapt to it. Moreover, we discuss how MASs can be used as tools that simulate the results of different actions aimed at reducing energy consumption, making it easy to model and simulate certain behaviors; this modeling and simulation is easily extrapolated to the energy field, and can evolve further within it through the Internet of Things (IoT) paradigm. We can therefore argue that MAS is a widespread approach in the field of energy optimization, commonly used due to its capacity for communication, coordination and cooperation among agents and the robustness this methodology offers in assigning different tasks to agents. Finally, this article considers how MASs can be used for various purposes, from capturing sensor data to decision-making, and we propose some research perspectives on the development of electrical optimization solutions using MASs.
In conclusion, we argue that researchers in the field of energy optimization should use multi-agent systems wherever it is necessary to model energy efficiency solutions that involve a wide range of factors, as well as the context independence they can achieve through the addition of new agents or agent organizations, enabling the development of energy-efficient solutions for smart cities and intelligent buildings.


Author(s):  
Santanu Dam ◽  
Gopa Mandal ◽  
Kousik Dasgupta ◽  
Parmartha Dutta

This book chapter proposes the use of Ant Colony Optimization (ACO), a computational intelligence technique, for balancing the load of virtual machines in cloud computing. Computational intelligence (CI) includes the study of designing bio-inspired artificial agents for finding probable optimal solutions, so the central goal of CI can be described as a basic understanding of the principles that allow intelligent behaviour observed in nature to be mimicked in artificial systems. The basic strand of ACO is to design an intelligent multi-agent system inspired by the collective behavior of ants; from the perspective of operations research, it is a meta-heuristic. Cloud computing is an emerging technology that enables applications to run on virtualized resources in a distributed environment. Nevertheless, some problems still need to be taken care of, including load balancing. The proposed algorithm tries to balance the load and optimize the response time by distributing the dynamic workload evenly across the entire system.
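One common way to let an ant pick a machine is proportional selection over a desirability that combines pheromone with an inverse-load heuristic. A sketch of such a selection step (illustrative only; the chapter's actual ant rules may differ):

```python
import random

def pick_vm(pheromone, loads, alpha=1.0, beta=2.0, rng=random):
    """Choose a virtual machine for the next task with ACO-style
    proportional selection (an illustrative sketch, not the chapter's
    exact rule).

    pheromone: per-VM pheromone levels; loads: current per-VM load.
    Desirability favors high pheromone and low load via the heuristic
    1/(1 + load), weighted by exponents alpha and beta.
    """
    weights = [(tau ** alpha) * ((1.0 / (1.0 + load)) ** beta)
               for tau, load in zip(pheromone, loads)]
    # Roulette-wheel selection proportional to desirability.
    total = sum(weights)
    r = rng.random() * total
    acc = 0.0
    for i, w in enumerate(weights):
        acc += w
        if r <= acc:
            return i
    return len(weights) - 1  # fallback for floating-point rounding
```

Evaporating pheromone on overloaded machines and depositing it on machines with fast response times would then close the feedback loop that keeps the workload spread evenly.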


2006 ◽  
Vol 21 (3) ◽  
pp. 231-238 ◽  
Author(s):  
JIM DOWLING ◽  
RAYMOND CUNNINGHAM ◽  
EOIN CURRAN ◽  
VINNY CAHILL

This paper presents Collaborative Reinforcement Learning (CRL), a coordination model for online system optimization in decentralized multi-agent systems. In CRL, system optimization problems are represented as a set of discrete optimization problems, each of whose solution cost is minimized by model-based reinforcement learning agents collaborating on its solution. CRL systems can be built to provide autonomic behaviours such as optimizing system performance in an unpredictable environment and adapting to partial failures. We evaluate CRL using an ad hoc routing protocol that optimizes routing performance in an unpredictable network environment.
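A minimal example of the kind of decentralized, collaborative value estimation this line of work builds on is Q-routing (Boyan and Littman), in which each node refines its delivery-time estimate using a neighbor's advertised best estimate. The sketch below shows that related rule, not the CRL model itself:

```python
def q_routing_update(Q, node, dest, neighbor, link_delay, alpha=0.5):
    """One Q-routing update (Boyan and Littman style): a node refines its
    estimated delivery time to dest via neighbor, using the neighbor's own
    best remaining estimate. A sketch of a related rule, not CRL itself.

    Q[node][dest][neighbor]: estimated delivery time to dest via neighbor.
    """
    # The neighbor advertises its best remaining estimate to dest.
    remaining = min(Q[neighbor][dest].values()) if Q[neighbor][dest] else 0.0
    old = Q[node][dest][neighbor]
    # Standard TD-style update toward (observed link delay + remaining).
    Q[node][dest][neighbor] = old + alpha * (link_delay + remaining - old)
    return Q[node][dest][neighbor]
```

Because each node only needs estimates advertised by its direct neighbors, the update works without any global view of the network, which is the property the decentralized setting requires.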


2019 ◽  
Vol 09 (4) ◽  
pp. 100-111
Author(s):  
T.V. Sivakova ◽  
V.A. Sudakov

The article explores the use of multi-agent technologies for solving optimization problems. It is shown how multi-agent systems make it possible to work with constraints in a distributed computing environment. The scheduling task is formalized. Software was developed and computational experiments were carried out, demonstrating the effectiveness of the proposed approach.

