CODA Algorithm: An Immune Algorithm for Reinforcement Learning Tasks

Author(s):  
Daniel R. Ramirez Rebollo ◽  
Pedro Ponce Cruz ◽  
Arturo Molina

2021 ◽  
Vol 6 (1) ◽  
Author(s):  
Peter Morales ◽  
Rajmonda Sulo Caceres ◽  
Tina Eliassi-Rad

Abstract: Complex networks are often either too large for full exploration, partially accessible, or partially observed. Downstream learning tasks on these incomplete networks can produce low-quality results. In addition, reducing the incompleteness of the network can be costly and nontrivial. As a result, network discovery algorithms optimized for specific downstream learning tasks given resource collection constraints are of great interest. In this paper, we formulate the task-specific network discovery problem as a sequential decision-making problem. Our downstream task is selective harvesting, the optimal collection of vertices with a particular attribute. We propose a framework, called network actor critic (NAC), which learns a policy and notion of future reward in an offline setting via a deep reinforcement learning algorithm. The NAC paradigm utilizes a task-specific network embedding to reduce the state space complexity. A detailed comparative analysis of popular network embeddings is presented with respect to their role in supporting offline planning. Furthermore, a quantitative study is presented on various synthetic and real benchmarks using NAC and several baselines. We show that offline models of reward and network discovery policies lead to significantly improved performance when compared to competitive online discovery algorithms. Finally, we outline learning regimes where planning is critical in addressing sparse and changing reward signals.
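To make the actor-critic framing concrete, here is a minimal, illustrative sketch of a shared policy/value network and a single advantage-based update for sequential vertex selection. It is not the authors' NAC implementation: the state embedding, candidate-set size, hidden width, and discount factor are hypothetical placeholders, and the task-specific network embedding and offline planning components are omitted.

```python
# Illustrative actor-critic sketch for sequential vertex selection.
# Hypothetical sizes and hyperparameters; not the authors' NAC code.
import torch
import torch.nn as nn

class ActorCritic(nn.Module):
    def __init__(self, embed_dim: int, num_candidates: int):
        super().__init__()
        self.shared = nn.Sequential(nn.Linear(embed_dim, 128), nn.ReLU())
        self.policy_head = nn.Linear(128, num_candidates)  # scores candidate vertices
        self.value_head = nn.Linear(128, 1)                # estimates future reward

    def forward(self, state_embedding: torch.Tensor):
        h = self.shared(state_embedding)
        return self.policy_head(h), self.value_head(h)

def update(model, optimizer, state_emb, action, reward, next_value, gamma=0.99):
    """One advantage actor-critic step: policy-gradient term plus value regression."""
    logits, value = model(state_emb)
    log_prob = torch.log_softmax(logits, dim=-1)[action]
    advantage = reward + gamma * next_value - value
    loss = (-log_prob * advantage.detach() + advantage.pow(2)).mean()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```

In the selective-harvesting setting described in the abstract, the reward signal would simply indicate whether the newly collected vertex carries the target attribute.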



2018 ◽  
Vol 18 (1&2) ◽  
pp. 51-74 ◽  
Author(s):  
Daniel Crawford ◽  
Anna Levit ◽  
Navid Ghadermarzy ◽  
Jaspreet S. Oberoi ◽  
Pooya Ronagh

We investigate whether quantum annealers with select chip layouts can outperform classical computers in reinforcement learning tasks. We associate a transverse-field Ising spin Hamiltonian with a layout of qubits similar to that of a deep Boltzmann machine (DBM) and use simulated quantum annealing (SQA) to numerically simulate quantum sampling from this system. We design a reinforcement learning algorithm in which the visible nodes representing the states and actions of an optimal policy form the first and last layers of the deep network. In the absence of a transverse field, our simulations show that DBMs are trained more effectively than restricted Boltzmann machines (RBMs) with the same number of nodes. We then develop a framework for training the network as a quantum Boltzmann machine (QBM) in the presence of a significant transverse field for reinforcement learning. This method also outperforms the reinforcement learning method that uses RBMs.
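For context, a standard classical approach in this line of work (not necessarily the exact formulation used in the paper) approximates the Q-value of a state-action pair by the negative free energy of a Boltzmann machine whose visible layer encodes that pair. The sketch below shows only the restricted-Boltzmann-machine variant; the DBM/QBM layouts and the simulated-quantum-annealing sampler from the abstract are not reproduced, and the layer sizes and learning rate are hypothetical.

```python
# Illustrative sketch of a free-energy-based Q-function with a classical RBM.
# Not the paper's DBM/QBM or SQA setup; sizes and learning rate are hypothetical.
import numpy as np

class RBMQFunction:
    def __init__(self, num_visible: int, num_hidden: int, lr: float = 0.01, seed: int = 0):
        rng = np.random.default_rng(seed)
        self.W = 0.1 * rng.standard_normal((num_visible, num_hidden))
        self.lr = lr

    def q_value(self, v: np.ndarray) -> float:
        # Q(s, a) ~ -F(v) = sum_j log(1 + exp(v @ W_j)), where v encodes (s, a).
        return float(np.sum(np.log1p(np.exp(v @ self.W))))

    def td_update(self, v: np.ndarray, target: float) -> None:
        # Move the free-energy estimate toward the temporal-difference target.
        # Gradient of -F(v) w.r.t. W is outer(v, sigmoid(v @ W)).
        q = self.q_value(v)
        hidden_probs = 1.0 / (1.0 + np.exp(-(v @ self.W)))
        self.W += self.lr * (target - q) * np.outer(v, hidden_probs)
```

The visible vector here concatenates a one-hot state encoding and a one-hot action encoding, mirroring the abstract's description of visible nodes representing states and actions.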



Author(s):  
Haitham Bou Ammar ◽  
Decebal Constantin Mocanu ◽  
Matthew E. Taylor ◽  
Kurt Driessens ◽  
Karl Tuyls ◽  
...  




2019 ◽  
Author(s):  
Chih-Chung Ting ◽  
Stefano Palminteri ◽  
Jan B. Engelmann ◽  
Maël Lebreton

Abstract: In simple instrumental-learning tasks, humans learn to seek gains and to avoid losses equally well. Yet, two effects of valence are observed. First, decisions in loss contexts are slower, which is consistent with the Pavlovian-instrumental transfer (PIT) hypothesis. Second, loss contexts decrease individuals’ confidence in their choices – a bias akin to a Pavlovian-to-metacognitive transfer (PMT). Whether these two effects are two manifestations of a single mechanism or whether they can be partially dissociated is unknown. Here, across six experiments, we attempted to disrupt the PIT effects by manipulating the mapping between decisions and actions and imposing constraints on response times (RTs). Our goal was to assess the presence of the metacognitive bias in the absence of the RT bias. We observed both PIT and PMT despite our disruption attempts, establishing that the effects of valence on motor and metacognitive responses are very robust and replicable. Nonetheless, within- and between-individual inferences reveal that the confidence bias resists the disruption of the RT bias. Therefore, although concomitant in most cases, PMT and PIT seem to be – partly – dissociable. These results highlight important new mechanistic constraints that should be incorporated into learning models to jointly explain choice, reaction times and confidence.
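As a point of reference for the kind of learning model the abstract alludes to, below is a textbook delta-rule plus softmax choice model of the sort commonly fit to simple instrumental-learning tasks. It models choices only; it does not capture reaction times or confidence, which the abstract argues a complete model should also explain. The learning rate and inverse temperature are hypothetical values, not estimates from the authors' data.

```python
# Illustrative delta-rule + softmax model of instrumental learning.
# Hypothetical parameters; not the authors' fitted model.
import numpy as np

def softmax_choice(values, beta=5.0, rng=None):
    """Pick an option with probability proportional to exp(beta * value)."""
    rng = rng or np.random.default_rng()
    p = np.exp(beta * (values - values.max()))
    p /= p.sum()
    return rng.choice(len(values), p=p)

def simulate_block(rewards, alpha=0.3, beta=5.0, seed=0):
    """Simulate one block; rewards[t, option] is the outcome each option would give on trial t."""
    rng = np.random.default_rng(seed)
    q = np.zeros(rewards.shape[1])
    choices = []
    for t in range(rewards.shape[0]):
        a = softmax_choice(q, beta, rng)
        q[a] += alpha * (rewards[t, a] - q[a])  # delta-rule update of the chosen option's value
        choices.append(a)
    return choices
```

In a loss context, the entries of `rewards` would be negative (losses to be avoided) rather than positive gains.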


