An Agent-Based Self-Adaptive Mechanism with Reinforcement Learning

Author(s):  
Danni Yu ◽  
Qingshan Li ◽  
Lu Wang ◽  
Yishuai Lin


2015 ◽  
Vol 25 (3) ◽  
pp. 471-482 ◽  
Author(s):  
Bartłomiej Śnieżyński

In this paper we propose a strategy learning model for autonomous agents based on classification. In the literature, the most commonly used learning method in agent-based systems is reinforcement learning. In our opinion, classification can be considered a good alternative: this type of supervised learning can be used to generate a classifier that allows the agent to choose an appropriate action for execution. Experimental results show that the model can be applied successfully to strategy generation even when rewards are delayed. We compare the efficiency of the proposed model and reinforcement learning using the farmer-pest domain in configurations of varying complexity. In complex environments, supervised learning can improve agent performance much faster than reinforcement learning. Moreover, if an appropriate knowledge representation is used, the learned knowledge can be analyzed by humans, which allows the learning process to be tracked.
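The classification-based idea above can be sketched as follows. This is an illustrative sketch, not the paper's algorithm: the 1-nearest-neighbor classifier, the reward threshold, and the farmer-pest-style action names ("spray", "harvest", "wait") are all assumptions made for the example. The key point it demonstrates is handling delayed rewards — whole episodes are kept or discarded based on their total reward, and only then used as supervised (state, action) training examples.

```python
class ClassifierAgent:
    """Chooses actions with a 1-nearest-neighbor classifier trained on
    (state, action) examples labeled by delayed episode reward (sketch only)."""

    def __init__(self):
        self.examples = []  # (state_vector, action) pairs from successful episodes

    def learn_episode(self, trajectory, total_reward, threshold=0.0):
        # Delayed reward: keep an episode's (state, action) pairs only if the
        # whole episode scored above the threshold, so the classifier is
        # trained exclusively on demonstrably good strategies.
        if total_reward > threshold:
            self.examples.extend(trajectory)

    def choose(self, state, default="wait"):
        if not self.examples:
            return default
        # 1-NN on squared Euclidean distance between state vectors
        dist = lambda ex: sum((a - b) ** 2 for a, b in zip(ex[0], state))
        return min(self.examples, key=dist)[1]

agent = ClassifierAgent()
agent.learn_episode([((0, 1), "spray"), ((1, 0), "harvest")], total_reward=5.0)
agent.choose((0, 1))  # nearest stored state -> "spray"
```

Because the stored examples are plain (state, action) pairs, the learned "knowledge" can be inspected directly — which is exactly the human-analyzability advantage the abstract claims over a reinforcement-learning value table.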


2014 ◽  
Vol 709 ◽  
pp. 227-233
Author(s):  
Rui Qin Guo ◽  
Wei Hu ◽  
Juan Liu ◽  
Song Lin

Singularity, i.e., uncertainty of the output motion of a mechanism caused by kinematic bifurcation at a singular position, is an intrinsic characteristic of mechanisms and can cause the motion to run out of control. Finding a practical way to enable a mechanism to avoid its singular positions remains a major challenge in the field of mechanisms. Based on a study of a planar four-bar linkage with singularity, this paper first proposes introducing self-adaptive mechanism theory into the solution of the singularity-avoidance problem. Using a self-adaptive mechanism to supply the uncontrolled parameters at the singular position gives the mechanism the ability to avoid singularity, so that it passes through the singular position smoothly with determinate motion and load capability. In this way, the goal of singularity avoidance is achieved.
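A standard way to detect the singular configurations discussed above is the transmission angle of the four-bar linkage: the output bifurcates when coupler and rocker become collinear, i.e., when the transmission angle approaches 0° or 180°. The sketch below is a conventional textbook check, not the paper's self-adaptive mechanism; link-length symbols (crank a, coupler b, rocker c, ground d) and the 5° tolerance are assumptions for illustration.

```python
from math import cos, acos, degrees, radians

def transmission_angle(a, b, c, d, theta):
    """Transmission angle (rad) of a planar four-bar linkage with crank a,
    coupler b, rocker c, ground d, at crank angle theta (rad)."""
    # Squared diagonal from the crank pin to the rocker's ground pivot
    # (law of cosines in the crank-ground triangle).
    e2 = a * a + d * d - 2 * a * d * cos(theta)
    # Law of cosines in the coupler-rocker triangle gives the angle between them.
    cos_mu = (b * b + c * c - e2) / (2 * b * c)
    if not -1.0 <= cos_mu <= 1.0:
        raise ValueError("linkage cannot assemble at this crank angle")
    return acos(cos_mu)

def near_singular(a, b, c, d, theta, tol_deg=5.0):
    """Flag configurations where the transmission angle approaches 0 or 180 deg,
    i.e. coupler and rocker are nearly collinear and the output can bifurcate."""
    mu = degrees(transmission_angle(a, b, c, d, theta))
    return min(mu, 180.0 - mu) < tol_deg
```

For example, the crank-rocker (a, b, c, d) = (1, 4, 3, 4) never approaches collinearity, while the parallelogram linkage (2, 2, 2, 2) reaches a change point (transmission angle 180°) when the crank is fully extended — exactly the kind of position a self-adaptive complement would carry the mechanism through.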


2014 ◽  
Vol 6 (1) ◽  
pp. 65-85 ◽  
Author(s):  
Xinjun Mao ◽  
Menggao Dong ◽  
Haibin Zhu

Development of self-adaptive systems situated in open and uncertain environments is a major challenge in software engineering, due to the unpredictability of environment changes and the variety of possible self-adaptations. Explicitly specifying the expected changes and the corresponding self-adaptations at design time, an approach often adopted by developers, seems ineffective. This paper presents an agent-based approach that combines two-layer self-adaptation mechanisms with reinforcement learning to support the development and operation of self-adaptive systems. The approach models self-adaptive systems as multi-agent organizations and enables each agent to make self-adaptation decisions by learning at run time and at different levels. The proposed self-adaptation mechanisms, based on organization metaphors, enable self-adaptation at two layers: the fine-grained behavior level and the coarse-grained organization level. Corresponding reinforcement learning algorithms for self-adaptation are designed and integrated with the two-layer mechanisms. The paper further details development technologies based on this approach for building self-adaptive systems, including an extended software architecture for self-adaptation, an implementation framework, and a development process. A case study and experimental evaluations illustrate the effectiveness of the proposed approach.
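The run-time learning idea above can be sketched with standard tabular Q-learning, one learner per layer. This is a minimal illustration under assumptions, not the paper's algorithms: the epsilon-greedy policy, the Q-update rule, and the example action names (service-level actions for the behavior layer, role reassignment for the organization layer) are all placeholders for whatever the actual mechanisms use.

```python
import random

class AdaptiveAgent:
    """Tabular Q-learning over self-adaptation actions (illustrative sketch)."""

    def __init__(self, actions, alpha=0.5, gamma=0.9, epsilon=0.1):
        self.actions = actions
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}  # (state, action) -> estimated value

    def choose(self, state):
        # Epsilon-greedy: occasionally explore, otherwise take the best-known action.
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        return max(self.actions, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning update toward reward + discounted best next value.
        best_next = max(self.q.get((next_state, a), 0.0) for a in self.actions)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)

# Two layers: a behavior-level learner tunes fine-grained actions, while an
# organization-level learner picks coarse-grained structural changes.
behavior = AdaptiveAgent(["retry", "degrade", "switch_service"])
organization = AdaptiveAgent(["keep_roles", "reassign_roles"])
```

The two-layer split keeps each learner's action space small: frequent, cheap adaptations are decided at the behavior level, and only when those prove insufficient does the organization-level learner restructure the agent organization.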


Author(s):  
Sotiris Papadopoulos ◽  
Francisco Baez ◽  
Jonathan Alt ◽  
Christian Darken

The Theory of Planned Behavior (TPB) provides a conceptual model for assessing the behavioral intentions of humans. Agent-based social simulations represent the behavior of individuals in societies in order to understand the impact of a variety of interventions on the population in a given area. Previous work has described implementing the TPB in agent-based social simulation using Bayesian networks. This paper describes an implementation of the TPB using novel learning techniques related to reinforcement learning, and provides case-study results from an agent-based simulation of behavior related to commodity consumption. Initial results demonstrate behavior that more closely matches observable human behavior. This work contributes to the body of knowledge on adaptive learning behavior in agent-based simulations.
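The combination of TPB and reward-driven learning might be sketched as follows. The TPB part is standard (intention as a weighted sum of attitude, subjective norm, and perceived behavioral control); the reinforcement step, the weight names, and the threshold are assumptions for illustration, not the paper's actual learning technique.

```python
class TPBAgent:
    """Theory of Planned Behavior agent whose component weights are adapted
    with a simple reward-driven (reinforcement-style) update. Illustrative
    sketch only; the update rule is an assumption, not the paper's algorithm."""

    def __init__(self, lr=0.1):
        # TPB antecedents of intention, initially weighted equally.
        self.w = {"attitude": 1.0, "norm": 1.0, "control": 1.0}
        self.lr = lr

    def intention(self, beliefs):
        # TPB: behavioral intention is a weighted sum of the three antecedents.
        return sum(self.w[k] * beliefs[k] for k in self.w)

    def act(self, beliefs, threshold=1.5):
        # The agent consumes the commodity when intention exceeds a threshold.
        return self.intention(beliefs) > threshold

    def reinforce(self, beliefs, reward):
        # Strengthen (or weaken) each component in proportion to its
        # contribution, so experienced outcomes reshape future intentions.
        for k in self.w:
            self.w[k] += self.lr * reward * beliefs[k]
```

The point of the sketch is the feedback loop: outcomes of consumption feed back into the TPB weights, so repeated simulation steps drift the population's intentions — the adaptive behavior the abstract contrasts with a static Bayesian-network implementation.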

