Using ontology to guide reinforcement learning agents in unseen situations

Author(s):  
Saeedeh Ghanadbashi ◽  
Fatemeh Golpayegani

In multi-agent systems, goal achievement is challenging when agents operate in ever-changing environments and face unseen situations, where not all goals are known or predefined. In such cases, agents need to identify the changes and adapt their behaviour by evolving their goals or even generating new goals to address the emerging requirements. Learning and practical reasoning techniques have been used to enable agents with limited knowledge to adapt to new circumstances. However, they depend on the availability of large amounts of data, require long exploration periods, and cannot help agents set new goals. Furthermore, while prior work has improved the accuracy of agents’ actions by integrating conceptual features extracted from ontologies, it does not address how to take suitable actions when unseen situations occur. This paper proposes a new Automatic Goal Generation Model (AGGM) that enables agents to create new goals to handle unseen situations and adapt to their ever-changing environment in real time. AGGM is compared to Q-learning, SARSA, and Deep Q Network in a Traffic Signal Control System case study. The results show that AGGM outperforms the baseline algorithms in unseen situations while handling seen situations as well as the baselines do.
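The baselines named in the abstract (Q-learning, SARSA) rest on the standard temporal-difference update. Below is a minimal tabular Q-learning sketch for a toy two-phase signal controller; the state encoding (queue length, current phase) and the negative-queue reward are our own illustration, not the paper's actual case study:

```python
import random
from collections import defaultdict

ACTIONS = [0, 1]  # 0: keep current phase, 1: switch phase (toy signal control)

def q_update(Q, s, a, r, s2, alpha=0.1, gamma=0.9):
    """Q-learning: off-policy TD update toward the greedy successor value."""
    best = max(Q[(s2, b)] for b in ACTIONS)
    Q[(s, a)] += alpha * (r + gamma * best - Q[(s, a)])

def epsilon_greedy(Q, s, eps=0.1):
    """Explore with probability eps, otherwise act greedily on Q."""
    if random.random() < eps:
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: Q[(s, a)])

Q = defaultdict(float)
state = (3, 0)  # hypothetical encoding: (queue length on active approach, phase)
action = epsilon_greedy(Q, state)
q_update(Q, state, action, r=-3.0, s2=(2, 1))  # reward = negative queue length
```

Using the negative queue length as reward is a common choice in traffic-signal RL; SARSA differs only in bootstrapping from the action actually taken rather than the greedy one.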

2015 ◽  
Vol 11 (3) ◽  
pp. 30-44
Author(s):  
Mounira Bouzahzah ◽  
Ramdane Maamri

In this paper, the authors propose a new approach to building fault-tolerant multi-agent systems using learning agents. Exceptions in a multi-agent system generally fall into two main groups: private exceptions, which are treated directly by the agents, and global exceptions, which cover all unexpected exceptions that require handlers to be resolved. The proposed approach addresses these global exceptions using learning agents. The work uses a formal model called hierarchical plans to model the activities of the system's agents, which facilitates exception detection and models communication with the learning agent. The learning agent uses a modified version of the Q-learning algorithm to choose which handler should be applied to resolve an exception. The paper aims to give a new direction to fault tolerance in multi-agent systems through learning agents: the proposed solution adapts the handler used in case of failure as the context changes, and treats repeated exceptions by drawing on the learning agents' experience.
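The abstract does not specify how the Q-learning algorithm is modified, so the sketch below uses plain Q-learning over (exception context, handler) pairs to illustrate the idea of learning which handler to apply. The handler names, the one-step (bandit-style) reward, and the "timeout" context are all hypothetical:

```python
import random
from collections import defaultdict

HANDLERS = ["retry", "reassign_task", "restart_agent"]  # hypothetical handler set

class HandlerSelector:
    """Plain Q-learning over (exception context, handler) pairs; the paper's
    modified variant is not described in the abstract."""
    def __init__(self, alpha=0.2, eps=0.1):
        self.Q = defaultdict(float)
        self.alpha, self.eps = alpha, eps

    def choose(self, context):
        if random.random() < self.eps:
            return random.choice(HANDLERS)     # explore
        return max(HANDLERS, key=lambda h: self.Q[(context, h)])

    def learn(self, context, handler, reward):
        # Each exception episode is one step, so this reduces to a bandit update:
        # reward is +1 if the handler resolved the exception, -1 otherwise.
        self.Q[(context, handler)] += self.alpha * (reward - self.Q[(context, handler)])

sel = HandlerSelector()
for _ in range(200):  # simulate: only "retry" resolves timeout exceptions
    h = sel.choose("timeout")
    sel.learn("timeout", h, 1.0 if h == "retry" else -1.0)
```

After training, the greedy choice for the "timeout" context converges on the handler that actually resolves it.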


2021 ◽  
Vol 10 (2) ◽  
pp. 27
Author(s):  
Roberto Casadei ◽  
Gianluca Aguzzi ◽  
Mirko Viroli

Research and technology developments on autonomous agents and autonomic computing promote a vision of artificial systems that are able to resiliently manage themselves and autonomously deal with issues at runtime in dynamic environments. Indeed, autonomy can be leveraged to unburden humans from mundane tasks (cf. driving and autonomous vehicles), from the risk of operating in unknown or perilous environments (cf. rescue scenarios), or to support timely decision-making in complex settings (cf. data-centre operations). Beyond the results that individual autonomous agents can carry out, a further opportunity lies in the collaboration of multiple agents or robots. Emerging macro-paradigms provide an approach to programming whole collectives towards global goals. Aggregate computing is one such paradigm, formally grounded in a calculus of computational fields enabling functional composition of collective behaviours that could be proved, under certain technical conditions, to be self-stabilising. In this work, we address the concept of collective autonomy, i.e., the form of autonomy that applies at the level of a group of individuals. As a contribution, we define an agent control architecture for aggregate multi-agent systems, discuss how the aggregate computing framework relates to both individual and collective autonomy, and show how it can be used to program collective autonomous behaviour. We exemplify the concepts through a simulated case study, and outline a research roadmap towards reliable aggregate autonomy.


2009 ◽  
Vol 90 (11) ◽  
pp. 3607-3615 ◽  
Author(s):  
Paolo C. Campo ◽  
Guillermo A. Mendoza ◽  
Philippe Guizol ◽  
Teodoro R. Villanueva ◽  
François Bousquet

2012 ◽  
Vol 566 ◽  
pp. 572-579
Author(s):  
Abdolkarim Niazi ◽  
Norizah Redzuan ◽  
Raja Ishak Raja Hamzah ◽  
Sara Esfandiari

In this paper, a new algorithm based on case-based reasoning (CBR) and reinforcement learning (RL) is proposed to increase the convergence rate of RL algorithms. RL algorithms are useful for solving a wide variety of decision problems in which a model of the environment is unavailable and a correct decision must be made in every state of the system, such as multi-agent systems, control systems, robotics, and tool condition monitoring. The proposed method investigates how to improve action selection in RL: a new combined model, using a case-based reasoning system and a new optimized function, selects the action, which increases the convergence rate of Q-learning-based algorithms. The algorithm was applied to cooperative Markov games, one of the Markov-based models of multi-agent systems. Experimental results indicate that the proposed algorithm outperforms existing algorithms in the speed and accuracy of reaching the optimal policy.
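The abstract does not give the paper's optimized selection function, so the sketch below shows only the general pattern of combining the two techniques: reuse the action of a sufficiently similar stored case when one exists, and fall back to epsilon-greedy Q-learning otherwise. The scalar state, similarity threshold, and case-retention rule are all our own assumptions:

```python
import random
from collections import defaultdict

class CBRQAgent:
    """Sketch: case-base lookup seeds action choice, Q-learning refines it."""
    def __init__(self, actions, alpha=0.1, gamma=0.9, eps=0.1, sim_threshold=1.0):
        self.actions = actions
        self.Q = defaultdict(float)
        self.cases = []  # (state, action) pairs from successful episodes
        self.alpha, self.gamma, self.eps = alpha, gamma, eps
        self.sim_threshold = sim_threshold

    def _nearest_case(self, state):
        """Return the action of the closest stored case, if close enough."""
        if not self.cases:
            return None
        s, a = min(self.cases, key=lambda c: abs(c[0] - state))
        return a if abs(s - state) <= self.sim_threshold else None

    def act(self, state):
        reused = self._nearest_case(state)
        if reused is not None:
            return reused                       # reuse a similar solved case
        if random.random() < self.eps:
            return random.choice(self.actions)  # explore
        return max(self.actions, key=lambda a: self.Q[(state, a)])

    def learn(self, s, a, r, s2):
        best = max(self.Q[(s2, b)] for b in self.actions)
        self.Q[(s, a)] += self.alpha * (r + self.gamma * best - self.Q[(s, a)])
        if r > 0:                               # retain successful steps as cases
            self.cases.append((s, a))

agent = CBRQAgent([0, 1])
agent.learn(5, 1, 1.0, 6)  # a rewarded transition both updates Q and stores a case
```

The intended effect is that early episodes are guided by retrieved cases instead of random exploration, which is where the reported convergence speed-up would come from.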


Author(s):  
Carole Bernon ◽  
Valérie Camps ◽  
Marie-Pierre Gleizes ◽  
Gauthier Picard

This chapter introduces the ADELFE methodology, an agent-oriented methodology dedicated to the design of systems that are complex, open, and not well-specified. The need for its development is justified by the theoretical background given in the first section, which also gives an overview of the concepts on which multi-agent systems developed with ADELFE are based. A methodology is composed of a process, a notation, and tools. Tools are presented in the second section and the process in the third one, using an information system case study to better visualize how to apply this process.


Author(s):  
Sofia Kouah ◽  
Djamel Eddine Saïdouni

For developing large dynamic systems in a rigorous manner, the fuzzy labeled transition refinement tree (FLTRT for short) has been defined. This model provides a formal specification framework for designing such systems: it supports abstraction and allows fuzziness, enabling a rigorous formal refinement process. The purpose of this paper is to illustrate the applicability of FLTRT to designing multi-agent systems (MAS for short), including both collective and internal agent behaviours. The Contract Net Protocol (CNP for short) is chosen as a case study.


Author(s):  
Haibin Zhu ◽  
MengChu Zhou

Agent system design is a complex task that challenges designers to simulate intelligent collaborative behavior. Roles can reduce the complexity of agent system design by categorizing the roles played by agents, and the role concept can also be used to describe the collaboration among cooperative agents. In this chapter, we introduce roles as a means to support interaction and collaboration among agents in multi-agent systems. We first review the application of roles in current agent systems, then describe the fundamental principles of role-based collaboration and propose basic methodologies for applying roles to agent systems (i.e., the revised E-CARGO model). After that, we demonstrate a case study: a soccer robot team designed with role specifications. Finally, we discuss the potential of applying roles to information personalization.
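In E-CARGO terms, group role assignment can be posed as maximizing the total agent-role qualification subject to each role's required head-count. A brute-force sketch for the one-agent-per-role case follows; the qualification matrix for a three-robot soccer team is invented for illustration, and real solvers formulate this as an integer program rather than enumerating permutations:

```python
from itertools import permutations

# Hypothetical qualification matrix Q[agent][role] in [0, 1];
# columns might stand for striker, defender, goalkeeper.
Q = [
    [0.8, 0.3, 0.5],   # agent 0
    [0.4, 0.9, 0.2],   # agent 1
    [0.6, 0.5, 0.7],   # agent 2
]

def best_assignment(Q):
    """Exhaustively assign one agent per role, maximizing total qualification.
    Factorial time; fine for a sketch, not for large teams."""
    n = len(Q)
    best, best_perm = -1.0, None
    for perm in permutations(range(n)):        # perm[role] = agent
        score = sum(Q[perm[r]][r] for r in range(n))
        if score > best:
            best, best_perm = score, perm
    return best_perm, best

assignment, total = best_assignment(Q)  # here: agent 0 -> role 0, 1 -> 1, 2 -> 2
```

For the matrix above, the diagonal assignment wins because each agent is most qualified for a distinct role; ties and head-counts greater than one are where the full E-CARGO formulation earns its keep.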

