Multiagent Environments
Recently Published Documents


TOTAL DOCUMENTS: 37 (five years: 0)

H-INDEX: 10 (five years: 0)

Author(s): Keiki Takadama, Kazuteru Miyazaki

Machine learning has been attracting significant attention again since the potential of deep learning was recognized. Not only has machine learning itself improved, but it has also been integrated with reinforcement learning, revealing further potential applications, e.g., the deep Q-network (DQN) and AlphaGo proposed by Google DeepMind. It is against this background that this special issue, "Cutting Edge of Reinforcement Learning and its Hybrid Methods," focuses on both reinforcement learning and its hybrid methods, including reinforcement learning combined with deep learning or evolutionary computation, to explore new potential of reinforcement learning.

Of the many contributions received, we ultimately selected 13 works for publication. The first three propose hybrids of deep learning and reinforcement learning for single-agent environments, including the latest research results on convolutional neural networks and DQN. The fourth through seventh works are related to the Learning Classifier System, which integrates evolutionary computation and reinforcement learning to drive its rule-discovery mechanism. The eighth and ninth works address problems related to goal or reward design, an issue that is particularly important when applying reinforcement learning. The last four contributions deal with multiagent environments.

These works cover a wide range of studies, from extensions of techniques incorporating simultaneous learning to applications in multiagent environments. All are at the cutting edge of reinforcement learning and its hybrid methods. We hope that this special issue makes a substantial contribution to the development of the reinforcement learning field.
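
The hybrids mentioned above combine Q-learning's bootstrapped updates with a learned value-function approximator. The sketch below illustrates that idea in miniature, using a linear approximator on a toy chain environment; it is not the DQN architecture or any algorithm from the selected works, and the environment, feature encoding, and hyperparameters are all assumptions made for illustration.

```python
# Minimal sketch: Q-learning with a learned (linear) value function on a toy
# chain task, illustrating the "reinforcement learning + function approximation"
# idea behind DQN. The environment, features, and hyperparameters are illustrative.
import numpy as np

N_STATES, N_ACTIONS = 5, 2          # chain of 5 states; actions: 0 = left, 1 = right
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # step size, discount factor, exploration rate

def features(s):
    """One-hot state encoding (a stand-in for DQN's convolutional features)."""
    x = np.zeros(N_STATES)
    x[s] = 1.0
    return x

W = np.zeros((N_ACTIONS, N_STATES))  # one weight vector per action

def q(s):
    """Approximate action values: Q(s, .) = W @ phi(s)."""
    return W @ features(s)

def step(s, a):
    """Toy chain dynamics: reward 1 for reaching the rightmost state, else 0."""
    s2 = min(s + 1, N_STATES - 1) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == N_STATES - 1)

rng = np.random.default_rng(0)
s = 0
for t in range(5000):
    a = rng.integers(N_ACTIONS) if rng.random() < EPS else int(np.argmax(q(s)))
    s2, r = step(s, a)
    done = (r == 1.0)
    target = r + (0.0 if done else GAMMA * np.max(q(s2)))  # bootstrapped TD target
    W[a] += ALPHA * (target - q(s)[a]) * features(s)       # semi-gradient update
    s = 0 if done else s2                                   # restart at the goal

print(np.round(W, 2))  # learned approximate Q-values per (action, state)
```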


2012, Vol. 21 (01), pp. 1250003
Author(s): Pierrick Plamondon, Brahim Chaib-draa

This paper contributes to solving stochastic resource allocation problems effectively in multiagent environments. To address them, a distributed Q-values approach is proposed for the case in which the resources are distributed among agents a priori but the actions of one agent may influence the reward obtained by at least one other agent. This approach allows the agents' rewards to be coordinated and thus reduces the set of states and actions to consider. On the other hand, when the resources are available to all agents, no distributed Q-value decomposition is possible, and tight lower and upper bounds are instead proposed for existing heuristic search algorithms. Our experimental results demonstrate the efficiency of the distributed Q-values approach in terms of planning time, as well as that of the tight bounds in terms of fast convergence and a reduced number of backups.
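
As a rough illustration of the decomposition described above, the sketch below gives each agent its own Q-table over its own actions, selects the joint action greedily with respect to the sum of the local Q-values, and updates each table from that agent's share of the reward. The toy payoff, the even reward split, and all names are assumptions made for illustration; this is not the algorithm, bounds, or allocation model of the cited paper.

```python
# Generic sketch of a distributed Q-values idea: one local Q-table per agent,
# the joint action chosen by maximizing the SUM of local values, and each table
# updated from that agent's reward share. Toy payoff and split are illustrative.
import itertools
import numpy as np

N_AGENTS, N_ACTIONS = 2, 2           # each agent allocates one of its two resources
ALPHA, EPS = 0.2, 0.1                # step size and exploration rate
Q = np.zeros((N_AGENTS, N_ACTIONS))  # stateless toy: Q[i, a_i]

def global_reward(joint):
    """Coupled payoff: one agent's allocation affects the other's outcome."""
    payoff = {(0, 0): 1.0, (0, 1): 0.0, (1, 0): 0.0, (1, 1): 5.0}
    return payoff[tuple(int(a) for a in joint)]

rng = np.random.default_rng(0)
for t in range(2000):
    if rng.random() < EPS:           # occasional joint exploration
        joint = tuple(rng.integers(N_ACTIONS, size=N_AGENTS))
    else:                            # greedy w.r.t. the sum of local Q-values
        joint = max(itertools.product(range(N_ACTIONS), repeat=N_AGENTS),
                    key=lambda ja: sum(Q[i, ja[i]] for i in range(N_AGENTS)))
    r = global_reward(joint)
    for i in range(N_AGENTS):        # each agent learns only its local value
        Q[i, joint[i]] += ALPHA * (r / N_AGENTS - Q[i, joint[i]])

print(np.round(Q, 2))  # the coordinated joint action (1, 1) should dominate
```

Enumerating the joint action space works here only because it is tiny; the point of the sketch is simply that coordination happens through summed local values rather than through a single monolithic joint Q-table.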


Author(s): G. Christian M. Quintero, A. Francisco R. Bertel, E. Daniel P. Maldonado
