An Approach for Fault Tolerance in Multi-Agent Systems using Learning Agents

2015 ◽  
Vol 11 (3) ◽  
pp. 30-44
Author(s):  
Mounira Bouzahzah ◽  
Ramdane Maamri

Through this paper, the authors propose a new approach for building fault-tolerant multi-agent systems using learning agents. Generally, exceptions in a multi-agent system are divided into two main groups: private exceptions, which are treated directly by the agents, and global exceptions, which comprise all unexpected exceptions that require handlers to be resolved. The proposed approach addresses global exceptions using learning agents. The work uses a formal model called hierarchical plans to model the activities of the system's agents, in order to facilitate exception detection and to model communication with the learning agent. The learning agent uses a modified version of the Q-learning algorithm to choose which handler should be used to resolve an exception. The paper offers a new direction in the field of fault tolerance in multi-agent systems by using learning agents: the proposed solution makes it possible to adapt the handler used in case of failure as the context changes, and to treat repeated exceptions by drawing on the learning agent's experience.
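
As a rough illustration of the idea (not the authors' implementation), a handler-selection loop built on standard Q-learning might look as follows; the exception-context encoding, handler set, and reward signal are assumptions made for the sketch.

```python
import random
from collections import defaultdict

# Hypothetical sketch: Q-learning used to pick an exception handler.
# States, handlers, and rewards are illustrative assumptions, not the paper's model.
class HandlerSelector:
    def __init__(self, handlers, alpha=0.1, gamma=0.9, epsilon=0.2):
        self.handlers = handlers          # candidate exception handlers
        self.q = defaultdict(float)       # Q[(exception_context, handler_id)]
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon

    def choose(self, context):
        # epsilon-greedy choice over handlers for the observed exception context
        if random.random() < self.epsilon:
            return random.randrange(len(self.handlers))
        return max(range(len(self.handlers)), key=lambda h: self.q[(context, h)])

    def update(self, context, handler_id, reward, next_context):
        # standard Q-learning update; reward reflects whether the handler resolved the exception
        best_next = max(self.q[(next_context, h)] for h in range(len(self.handlers)))
        td_target = reward + self.gamma * best_next
        self.q[(context, handler_id)] += self.alpha * (td_target - self.q[(context, handler_id)])
```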

2012 ◽  
Vol 566 ◽  
pp. 572-579
Author(s):  
Abdolkarim Niazi ◽  
Norizah Redzuan ◽  
Raja Ishak Raja Hamzah ◽  
Sara Esfandiari

In this paper, a new algorithm based on case-based reasoning and reinforcement learning (RL) is proposed to increase the convergence rate of RL algorithms. RL algorithms are useful for solving a wide variety of decision problems in which no model is available and a correct decision must be made in every state of the system, such as multi-agent systems, artificial control systems, robotics, and tool condition monitoring. The proposed method investigates how to improve action selection in RL: a new combined model using case-based reasoning and a new optimized selection function is proposed to choose actions, which speeds up algorithms based on Q-learning. The algorithm was used to solve cooperative Markov games, one of the models of Markov-based multi-agent systems. Experimental results indicated that the proposed algorithm performs better than existing algorithms in terms of the speed and accuracy of reaching the optimal policy.
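
A minimal sketch of the general idea, biasing Q-learning action selection with previously stored cases, is given below; the case representation, similarity measure, and reuse rule are assumptions for illustration, not the algorithm from the paper.

```python
import math
import random

# Hypothetical sketch: case-based reasoning biasing Q-learning action selection.
# The case base stores {"state": ..., "action": ..., "outcome": ...} records
# from earlier episodes; states are assumed to be numeric feature tuples.
def select_action(state, q_table, case_base, actions, epsilon=0.1, sim_threshold=0.8):
    if random.random() < epsilon:
        return random.choice(actions)          # exploration

    def similarity(s1, s2):
        return 1.0 / (1.0 + math.dist(s1, s2))  # Euclidean similarity (an assumption)

    best_case = max(case_base, key=lambda c: similarity(state, c["state"]), default=None)
    if best_case and similarity(state, best_case["state"]) >= sim_threshold and best_case["outcome"] > 0:
        return best_case["action"]             # reuse a successful past action
    # otherwise fall back to the greedy Q-learning choice
    return max(actions, key=lambda a: q_table.get((state, a), 0.0))
```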


Respuestas ◽  
2018 ◽  
Vol 23 (2) ◽  
pp. 53-61
Author(s):  
David Luviano Cruz ◽  
Francesco José García Luna ◽  
Luis Asunción Pérez Domínguez

This paper presents a hybrid control proposal for multi-agent systems that exploits the advantages of reinforcement learning and nonparametric functions. A modified version of the Q-learning algorithm is used to provide training data for a kernel; this approach yields a suboptimal set of actions to be used by the agents. The proposed algorithm is experimentally tested on a path-generation task for mobile robots in an unknown environment.
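
To make the pipeline concrete, the sketch below shows one generic way Q-learning samples could feed a kernel-based value estimate (a Nadaraya-Watson smoother over state-action features); the kernel form and bandwidth are assumptions, not the paper's design.

```python
import numpy as np

# Hypothetical sketch: Q-learning transitions used as training data for a
# kernel (Nadaraya-Watson) estimate of the action-value function.
def kernel_q(query, samples, bandwidth=0.5):
    """samples: list of (state_action_features, q_value) pairs collected while
    running tabular Q-learning; query: a state-action feature vector."""
    X = np.array([x for x, _ in samples], dtype=float)
    y = np.array([q for _, q in samples], dtype=float)
    d2 = np.sum((X - np.asarray(query, dtype=float)) ** 2, axis=1)
    w = np.exp(-d2 / (2.0 * bandwidth ** 2))           # Gaussian kernel weights
    return float(np.dot(w, y) / (np.sum(w) + 1e-12))   # smoothed Q estimate
```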


Author(s):  
Saeedeh Ghanadbashi ◽  
Fatemeh Golpayegani

In multi-agent systems, goal achievement is challenging when agents operate in ever-changing environments and face unseen situations where not all goals are known or predefined. In such cases, agents need to identify the changes and adapt their behaviour by evolving their goals or even generating new goals to address the emerging requirements. Learning and practical reasoning techniques have been used to enable agents with limited knowledge to adapt to new circumstances. However, they depend on the availability of large amounts of data, require long exploration periods, and cannot help agents set new goals. Furthermore, while the accuracy of agents' actions can be improved by integrating conceptual features extracted from ontologies, the concern of taking suitable actions when unseen situations occur is not addressed. This paper proposes a new Automatic Goal Generation Model (AGGM) that enables agents to create new goals to handle unseen situations and to adapt to their ever-changing environment in real time. AGGM is compared to Q-learning, SARSA, and Deep Q Network in a Traffic Signal Control System case study. The results show that AGGM outperforms the baseline algorithms in unseen situations while handling seen situations as well as the baselines do.
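
For reference, the two tabular baselines named in the comparison differ only in their update target; the generic sketch below illustrates that difference and is not the case-study implementation.

```python
# Minimal sketch of the tabular baselines mentioned above (not the AGGM model).
# Q-learning bootstraps on the greedy next action; SARSA uses the action actually taken.
def q_learning_update(q, s, a, r, s_next, actions, alpha=0.1, gamma=0.95):
    best_next = max(q.get((s_next, a2), 0.0) for a2 in actions)
    q[(s, a)] = q.get((s, a), 0.0) + alpha * (r + gamma * best_next - q.get((s, a), 0.0))

def sarsa_update(q, s, a, r, s_next, a_next, alpha=0.1, gamma=0.95):
    target = r + gamma * q.get((s_next, a_next), 0.0)
    q[(s, a)] = q.get((s, a), 0.0) + alpha * (target - q.get((s, a), 0.0))
```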


Automatica ◽  
2021 ◽  
Vol 128 ◽  
pp. 109576
Author(s):  
Tao Feng ◽  
Jilie Zhang ◽  
Yin Tong ◽  
Huaguang Zhang

2020 ◽  
Vol 53 (2) ◽  
pp. 4076-4081
Author(s):  
Shahram Hajshirmohamadi ◽  
Farid Sheikholeslam ◽  
Nader Meskin ◽  
Jawhar Ghommam
