maze problem
Recently Published Documents


TOTAL DOCUMENTS: 28 (FIVE YEARS: 3)
H-INDEX: 5 (FIVE YEARS: 0)

2018 ◽ Vol 11 (4) ◽ pp. 321-330
Author(s): Fumito UWANO, Naoki TATEBE, Yusuke TAJIMA, Masaya NAKATA, Tim KOVACS, ...

2017
Author(s): Raphael Kaplan, Karl J Friston

Abstract: This paper introduces an active inference formulation of planning and navigation. It illustrates how the exploitation–exploration dilemma is dissolved by acting to minimise uncertainty (i.e., expected surprise or free energy). We use simulations of a maze problem to illustrate how agents can solve quite complicated problems using context-sensitive prior preferences to form subgoals. Our focus is on how epistemic behaviour – driven by novelty and the imperative to reduce uncertainty about the world – contextualises pragmatic or goal-directed behaviour. Using simulations, we illustrate the underlying process theory with synthetic behavioural and electrophysiological responses during exploration of a maze and subsequent navigation to a target location. An interesting phenomenon that emerged from the simulations was a putative distinction between ‘place cells’ – which fire when a subgoal is reached – and ‘path cells’ – which fire until a subgoal is reached.
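The idea of resolving the exploitation–exploration dilemma by minimising expected free energy can be made concrete with a toy grid-maze agent that scores each candidate action as an epistemic term (a bonus for visiting novel cells, i.e., reducing uncertainty) plus a pragmatic term (a penalty for distance from the preferred goal state). The Python sketch below is a minimal illustration under those assumptions; the maze layout, the novelty bonus, the Manhattan-distance preference, and all function names are invented for exposition and are not the authors' generative model or simulation code.

```python
import numpy as np

# Minimal sketch of expected-free-energy-style action selection in a grid maze.
# The maze layout, the novelty bonus, and the Manhattan-distance preference are
# illustrative assumptions, not the authors' generative model or simulation code.

MAZE = np.array([
    [0, 0, 0, 1],
    [1, 1, 0, 1],
    [0, 0, 0, 0],
    [0, 1, 1, 0],
])                     # 0 = open cell, 1 = wall
GOAL = (3, 3)
ACTIONS = {"up": (-1, 0), "down": (1, 0), "left": (0, -1), "right": (0, 1)}

def step(pos, action):
    """Deterministic transition; moves into walls or off the grid leave the agent in place."""
    r, c = pos[0] + ACTIONS[action][0], pos[1] + ACTIONS[action][1]
    if 0 <= r < MAZE.shape[0] and 0 <= c < MAZE.shape[1] and MAZE[r, c] == 0:
        return (r, c)
    return pos

def expected_free_energy(pos, action, visited, pragmatic_weight=2.0):
    """Score an action by a (negative) epistemic value plus a pragmatic cost.

    Epistemic term: a novelty bonus for cells not yet visited (uncertainty reduction).
    Pragmatic term: Manhattan distance to the preferred (goal) state."""
    nxt = step(pos, action)
    epistemic = 0.0 if nxt in visited else -1.0   # novelty lowers expected free energy
    pragmatic = pragmatic_weight * (abs(nxt[0] - GOAL[0]) + abs(nxt[1] - GOAL[1]))
    return epistemic + pragmatic

def navigate(start, max_steps=30):
    """Greedily act to minimise the expected-free-energy score at every step."""
    pos, visited, path = start, {start}, [start]
    for _ in range(max_steps):
        if pos == GOAL:
            break
        action = min(ACTIONS, key=lambda a: expected_free_energy(pos, a, visited))
        pos = step(pos, action)
        visited.add(pos)
        path.append(pos)
    return path

print(navigate((0, 0)))   # visits cells on the way to the goal at (3, 3)
```

In this toy version the epistemic bonus drives exploration of unvisited cells while the pragmatic term pulls the agent toward the goal, which is the same trade-off the abstract describes, stripped of the full generative model and belief updating.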


Author(s): Fumito Uwano, Keiki Takadama

This study discusses important factors for zero-communication multi-agent cooperation by comparing two modified reinforcement learning methods that were assigned different goal-selection strategies for multi-agent cooperation tasks. The first method, Profit Minimizing Reinforcement Learning (PMRL), forces agents to learn how to reach the farthest goal, and then directs the agent closest to a goal to that goal. The second method, Yielding Action Reinforcement Learning (YARL), forces agents to learn through a Q-learning process; if agents come into conflict, the agent closest to the goal learns to reach the next-closest goal. To compare the two methods, we designed experiments that adjust the following maze factors: (1) the location of the start point and goal; (2) the number of agents; and (3) the size of the maze. Intensive simulations on the maze problem for the agent cooperation task revealed that both methods successfully enabled the agents to exhibit cooperative behavior, even when the size of the maze and the number of agents change. The PMRL mechanism always enables the agents to learn cooperative behavior, whereas the YARL mechanism makes the agents learn cooperative behavior within a small number of learning iterations. In zero-communication multi-agent cooperation, it is important that only the agents that are in conflict cooperate with each other.
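Both methods described above share a tabular Q-learning core and differ mainly in how goals are reassigned when agents conflict. The sketch below shows that shared core plus a toy nearest-first goal reassignment rule, loosely in the spirit of yielding on conflict; the action set, the hyperparameter values, and the assign_goals heuristic are illustrative assumptions, not the paper's PMRL or YARL mechanisms.

```python
import random
from collections import defaultdict

# Minimal sketch of the tabular Q-learning core that goal-selection schemes like
# PMRL and YARL build on, plus a toy "yield on conflict" goal reassignment.
# Hyperparameters and the assign_goals heuristic are illustrative assumptions.

ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.1         # learning rate, discount, exploration
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1)]  # up, down, left, right

def q_update(Q, s, a, r, s_next):
    """One-step Q-learning: Q(s,a) += alpha * (r + gamma * max_a' Q(s',a') - Q(s,a))."""
    best_next = max(Q[(s_next, a2)] for a2 in range(len(ACTIONS)))
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def choose_action(Q, s):
    """Epsilon-greedy action selection over the tabular Q-values."""
    if random.random() < EPSILON:
        return random.randrange(len(ACTIONS))
    return max(range(len(ACTIONS)), key=lambda a: Q[(s, a)])

def manhattan(p, g):
    return abs(p[0] - g[0]) + abs(p[1] - g[1])

def assign_goals(agent_positions, goals):
    """Toy conflict resolution: agents are served nearest-first, and each takes the
    closest goal that is still free, so a more distant agent yields and is
    redirected to its next-closest goal."""
    remaining = list(goals)
    assignment = {}
    order = sorted(agent_positions,
                   key=lambda a: min(manhattan(agent_positions[a], g) for g in goals))
    for agent in order:
        goal = min(remaining, key=lambda g: manhattan(agent_positions[agent], g))
        assignment[agent] = goal
        remaining.remove(goal)
    return assignment

Q = defaultdict(float)   # in a multi-agent setting, each agent would keep its own table
print(assign_goals({"A": (0, 0), "B": (0, 1)}, [(0, 2), (3, 3)]))
# -> {'B': (0, 2), 'A': (3, 3)}: B is closer to (0, 2), so A yields and targets (3, 3)
```

The point of the toy rule is only to show where a conflict-dependent reassignment plugs into an otherwise standard Q-learning loop; the paper's methods decide this reassignment through learning rather than a fixed heuristic.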


2017 ◽ Vol 6 (2) ◽ pp. 57
Author(s): Hirofumi Miyajima, Noritaka Shigei, Syunki Makino, Hiromi Miyajima, Yohtaro Miyanishi, ...

Many studies have addressed the security of cloud computing. Although data encryption is a typical approach, it requires high computational complexity for encrypting and decrypting data. Therefore, safe systems for distributed processing of secure data have attracted attention, and many studies have been conducted. Secure multiparty computation (SMC) is one of these methods. Specifically, two learning methods for machine learning (ML) with SMC are known: one divides the learning data into several subsets and performs learning, and the other divides each item of the learning data and performs learning. So far, most work on ML with SMC has dealt with supervised and unsupervised learning, such as backpropagation (BP) and k-means methods; there appear to be no studies on reinforcement learning (RL) with SMC. This paper proposes learning methods with SMC for Q-learning, one of the typical methods for RL. The effectiveness of the proposed methods is shown by numerical simulations on the maze problem.
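A common building block for this kind of SMC-based learning is additive secret sharing: a value such as a Q-table entry or a reward is split into random shares so that no single party sees it in the clear, while linear updates can still be applied share-wise. The sketch below illustrates only that building block under assumed parameters (two parties, a fixed-point scale, a Mersenne-prime modulus); it is not the paper's protocol, and it sidesteps the nonlinear max in the Q-learning target, which would require a dedicated secure comparison step in a real system.

```python
import random

# Minimal sketch of additive secret sharing, a typical building block for SMC.
# Field size, fixed-point scaling, and the two-party setting are illustrative
# assumptions; the paper's exact Q-learning protocols are not reproduced here.

PRIME = 2 ** 31 - 1   # field modulus for the shares
SCALE = 1000          # fixed-point scale so real-valued Q-values map to integers

def share(value, n_parties=2):
    """Split a fixed-point value into n additive shares that sum to it mod PRIME."""
    fixed = int(round(value * SCALE)) % PRIME
    shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
    shares.append((fixed - sum(shares)) % PRIME)
    return shares

def reconstruct(shares):
    """Recombine shares into the original fixed-point value."""
    total = sum(shares) % PRIME
    if total > PRIME // 2:           # map back into the signed range
        total -= PRIME
    return total / SCALE

def add_shared(shares_a, shares_b):
    """Each party adds its own shares locally; no communication, no leakage."""
    return [(a + b) % PRIME for a, b in zip(shares_a, shares_b)]

# Example: two parties jointly hold Q(s,a) = 0.5 and a TD increment of 0.12;
# the updated value is obtained without either party seeing both inputs in the clear.
q_shares = share(0.5)
delta_shares = share(0.12)
updated = add_shared(q_shares, delta_shares)
print(reconstruct(updated))   # ~0.62
```

Because the shares are uniformly random modulo the prime, each party's view alone reveals nothing about the underlying value; only the sum of all shares reconstructs it, which is what lets the learning data or its items be divided across parties as the abstract describes.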

