Tensor decomposition for multi-agent predictive state representation

2021, pp. 115969
Author(s): Biyang Ma, Bilian Chen, Yifeng Zeng, Jing Tang, Langcai Cao

2020, Vol 20 (5), pp. 593-608
Author(s): Alessandro Burigana, Francesco Fabiano, Agostino Dovier, Enrico Pontelli

Abstract: Designing agents that reason about and act upon the world has always been one of the main objectives of the Artificial Intelligence community. While for planning in “simple” domains agents can rely solely on facts about the world, in several contexts, e.g., economy, security, justice and politics, mere knowledge of the world can be insufficient to reach a desired goal. In these scenarios, epistemic reasoning, i.e., reasoning about agents’ beliefs about themselves and about other agents’ beliefs, is essential to design winning strategies. This paper addresses the problem of reasoning in multi-agent epistemic settings using declarative programming techniques. In particular, it presents an implementation of a multi-shot Answer Set Programming-based planner for multi-agent epistemic settings, called PLATO (ePistemic muLti-agent Answer seT programming sOlver). Compared with imperative implementations, the ASP paradigm enables a concise and elegant design of the planner and facilitates formal verification of correctness. The paper shows how the planner, exploiting an ad-hoc epistemic state representation and the efficiency of ASP solvers, achieves competitive performance on benchmarks collected from the literature.
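The abstract does not include PLATO's encoding, but the multi-shot solving pattern such a planner builds on can be illustrated with clingo's Python API. The sketch below is a generic incremental-horizon loop over a hypothetical single-fluent toy domain (fluent "f", action "toggle"); the program parts, predicate names and goal are assumptions made purely for illustration, not PLATO's actual representation of epistemic states.

import clingo

# Toy domain for illustration only: one fluent "f" that is initially true
# and one action "toggle" that makes it false. The goal is to reach a
# horizon at which "f" no longer holds.
PROGRAM = """
#program base.
fluent(f). init(f).
holds(F, 0) :- init(F).

#program step(t).
{ occurs(toggle, t) }.
holds(f, t) :- holds(f, t-1), not occurs(toggle, t).

#program check(t).
#external query(t).
:- query(t), holds(f, t).   % goal: f is false at horizon t
"""

def solve_incremental(max_horizon=5):
    ctl = clingo.Control()
    ctl.add("base", [], PROGRAM)
    ctl.ground([("base", [])])
    for t in range(1, max_horizon + 1):
        # Ground one more time step and activate the goal check for horizon t.
        ctl.ground([("step", [clingo.Number(t)]), ("check", [clingo.Number(t)])])
        ctl.assign_external(clingo.Function("query", [clingo.Number(t)]), True)
        with ctl.solve(yield_=True) as handle:
            for model in handle:
                return t, [str(atom) for atom in model.symbols(shown=True)]
        # No plan at this horizon: retract the check and extend the program.
        ctl.assign_external(clingo.Function("query", [clingo.Number(t)]), False)
    return None

if __name__ == "__main__":
    print(solve_incremental())

The design point multi-shot solving buys here is that the grounded program is extended step by step and the solver's state is reused across horizons, instead of re-grounding and re-solving the whole encoding for every plan length.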


2020, Vol 67 (7), pp. 2052-2063
Author(s): Pierre Humbert, Clement Dubost, Julien Audiffren, Laurent Oudre

2020
Author(s): Thomas Akam, Mark Walton

Experiments have implicated dopamine in model-based reinforcement learning (RL). These findings are unexpected, as dopamine is thought to encode a reward prediction error (RPE), the key teaching signal in model-free RL. Here we examine two possible accounts of dopamine’s involvement in model-based RL: first, that dopamine neurons carry a prediction error used to update a type of predictive state representation called a successor representation; second, that two well-established aspects of dopaminergic activity, RPEs and surprise signals, can together explain dopamine’s involvement in model-based RL.
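The first account can be made concrete with the standard temporal-difference rule for learning a successor representation (Dayan, 1993): M(s,·) <- M(s,·) + a [1_s + g M(s',·) - M(s,·)], where the bracketed term plays the role of the prediction error. The sketch below is a minimal, generic illustration, not the authors' model; the three-state chain and the reward vector are assumed purely for the example.

import numpy as np

def sr_td_update(M, s, s_next, alpha=0.1, gamma=0.9):
    """One TD update of the successor representation matrix M.

    M[s, j] estimates the expected, discounted number of future visits
    to state j starting from state s under the current policy. The
    bracketed term is the SR prediction error, analogous to an RPE but
    defined over state occupancies rather than reward.
    """
    onehot = np.zeros(M.shape[1])
    onehot[s] = 1.0
    sr_error = onehot + gamma * M[s_next] - M[s]
    M[s] += alpha * sr_error
    return sr_error

# Hypothetical 3-state cycle 0 -> 1 -> 2 -> 0, used only for illustration.
n_states = 3
M = np.eye(n_states)            # common initialisation: each state predicts itself
for s, s_next in [(0, 1), (1, 2), (2, 0)] * 200:
    sr_td_update(M, s, s_next)

# With the SR in hand, values are a linear read-out of a learned reward vector.
w = np.array([0.0, 0.0, 1.0])   # reward only in state 2 (assumed for the example)
print(M @ w)                    # state values under the fixed circular policy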

