RRL-GAT: Graph Attention Network-driven Multi-Label Image Robust Representation Learning

Author(s):  
Bin Hu ◽  
Kehua Guo ◽  
Xiaokang Wang ◽  
Jian Zhang ◽  
Di Zhou


2020 ◽
Vol 34 (10) ◽  
pp. 13811-13812
Author(s):  
Yueyue Hu ◽  
Shiliang Sun ◽  
Xin Xu ◽  
Jing Zhao

The representation approximated by a single deep network is usually limited for reinforcement learning agents. We propose a novel multi-view deep attention network (MvDAN), which introduces multi-view representation learning into the reinforcement learning task for the first time. The proposed model approximates a set of strategies from multiple view-specific representations and combines these strategies via an attention mechanism to provide a comprehensive strategy for a single agent. Experimental results on eight Atari video games show that MvDAN achieves competitive performance compared with single-view reinforcement learning methods.
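The abstract describes per-view strategies that are fused by attention into a single policy. The following is a minimal PyTorch sketch of that idea, assuming hypothetical view encoders, per-view policy heads, and an attention scorer; the module names, dimensions, and fusion details are illustrative and are not the authors' implementation.

```python
import torch
import torch.nn as nn

class MvDANSketch(nn.Module):
    """Illustrative multi-view attention policy head (not the paper's exact model).

    Each view encoder produces its own representation and strategy (action
    logits); an attention module weights the per-view strategies into one
    comprehensive policy.
    """

    def __init__(self, view_dims, hidden_dim, num_actions):
        super().__init__()
        # One encoder and one policy head per view (hypothetical sizes).
        self.encoders = nn.ModuleList(
            [nn.Sequential(nn.Linear(d, hidden_dim), nn.ReLU()) for d in view_dims]
        )
        self.policy_heads = nn.ModuleList(
            [nn.Linear(hidden_dim, num_actions) for _ in view_dims]
        )
        # Scores each view representation for the attention weighting.
        self.attn_score = nn.Linear(hidden_dim, 1)

    def forward(self, views):
        # views: list of tensors, one per view, each of shape (batch, view_dim)
        reps = [enc(v) for enc, v in zip(self.encoders, views)]
        logits = torch.stack(
            [head(r) for head, r in zip(self.policy_heads, reps)], dim=1
        )                                                   # (batch, V, A)
        scores = torch.stack([self.attn_score(r) for r in reps], dim=1)  # (batch, V, 1)
        weights = torch.softmax(scores, dim=1)              # attention over views
        combined = (weights * logits).sum(dim=1)            # (batch, A)
        return torch.softmax(combined, dim=-1)              # comprehensive policy

# Example with three hypothetical views of a game frame.
model = MvDANSketch(view_dims=[128, 64, 32], hidden_dim=256, num_actions=6)
views = [torch.randn(4, d) for d in (128, 64, 32)]
policy = model(views)   # (4, 6) action distribution per sample
```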


2020 ◽  
Vol 34 (05) ◽  
pp. 7236-7243
Author(s):  
Heechang Ryu ◽  
Hayong Shin ◽  
Jinkyoo Park

Most previous studies on multi-agent reinforcement learning focus on deriving decentralized, cooperative policies that maximize a common reward, and rarely consider the transferability of trained policies to new tasks. This prevents such policies from being applied to more complex multi-agent tasks. To address these limitations, we propose a model that performs both representation learning for multiple agents, using a hierarchical graph attention network, and policy learning, using a multi-agent actor-critic. The hierarchical graph attention network is specifically designed to model the hierarchical relationships among multiple agents that either cooperate or compete with each other, so as to derive more advanced strategic policies. Two attention networks, the inter-agent and inter-group attention layers, are used to model individual-level and group-level interactions, respectively. The two attention networks are shown to facilitate the transfer of learned policies to new tasks with different agent compositions and to make the learned strategies interpretable. Empirically, we demonstrate that the proposed model outperforms existing methods in several mixed cooperative and competitive tasks.
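As a rough illustration of the two-level attention the abstract describes, the sketch below stacks an inter-agent attention step (a focal agent attending over the agents in each group) under an inter-group attention step (the focal agent attending over the resulting group embeddings). The grouping scheme, dimensions, and use of nn.MultiheadAttention are assumptions for illustration, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class HierarchicalAttentionSketch(nn.Module):
    """Illustrative two-level attention over agents and groups.

    Inter-agent attention aggregates the embeddings of the agents inside each
    group; inter-group attention then aggregates the resulting group
    embeddings for a focal agent, yielding an interpretable set of weights.
    """

    def __init__(self, embed_dim, num_heads=4):
        super().__init__()
        self.inter_agent = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)
        self.inter_group = nn.MultiheadAttention(embed_dim, num_heads, batch_first=True)

    def forward(self, focal, groups):
        # focal:  (batch, 1, embed_dim) embedding of the focal agent
        # groups: list of tensors, each (batch, n_agents_in_group, embed_dim)
        group_embs = []
        for agents in groups:
            # Inter-agent attention: the focal agent attends over one group's agents.
            g, _ = self.inter_agent(focal, agents, agents)
            group_embs.append(g)
        group_embs = torch.cat(group_embs, dim=1)           # (batch, n_groups, embed_dim)
        # Inter-group attention: the focal agent attends over the group embeddings.
        out, weights = self.inter_group(focal, group_embs, group_embs)
        return out.squeeze(1), weights                      # state embedding + group weights

# Example: one focal agent and two groups (e.g., teammates vs. opponents).
enc = HierarchicalAttentionSketch(embed_dim=64)
focal = torch.randn(8, 1, 64)
teammates, opponents = torch.randn(8, 3, 64), torch.randn(8, 4, 64)
state, attn = enc(focal, [teammates, opponents])
```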


2019 ◽  
Author(s):  
Minh C. Phan ◽  
Aixin Sun ◽  
Yi Tay

Author(s):  
Bijay Gaudel ◽  
Donghai Guan ◽  
Weiwei Yuan ◽  
Deepanjal Shrestha ◽  
Bing Chen ◽  
...  
