Imitation as a Mechanism of Cultural Transmission

2010 ◽  
Vol 16 (1) ◽  
pp. 21-37 ◽  
Author(s):  
Chris Marriott ◽  
James Parker ◽  
Jörg Denzinger

We study the effects of an imitation mechanism on a population of animats capable of individual ontogenetic learning. An urge to imitate others augments a network-based reinforcement learning strategy used in the control system of the animats. We test populations of animats with imitation against populations without it, comparing their ability to find, and to maintain over generations, successful foraging behavior in an environment containing three necessary resources: food, water, and shelter. We conclude that even simple imitation mechanisms are effective at increasing the frequency of success when measured over time and over populations of animats.
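As a rough illustration of the mechanism studied above, the following sketch (in Python; the bandit-style environment, payoff probabilities, and imitation rate are illustrative assumptions, not the authors' animat model) augments a population of simple reinforcement learners with an urge to imitate the currently most successful peer:

    # Population of tabular learners on a 3-armed "resource" bandit; with
    # probability P_IMITATE an agent copies the greedy choice of the best peer.
    import random

    N_AGENTS, N_ACTIONS, EPISODES = 10, 3, 500     # actions ~ food/water/shelter
    ALPHA, EPS, P_IMITATE = 0.1, 0.1, 0.2          # assumed learning/imitation rates
    REWARD_P = [0.8, 0.5, 0.3]                     # assumed payoff of each resource

    q = [[0.0] * N_ACTIONS for _ in range(N_AGENTS)]
    score = [0.0] * N_AGENTS

    for _ in range(EPISODES):
        best = max(range(N_AGENTS), key=lambda i: score[i])
        for i in range(N_AGENTS):
            if random.random() < P_IMITATE:        # social learning: copy best peer
                a = max(range(N_ACTIONS), key=lambda k: q[best][k])
            elif random.random() < EPS:            # individual exploration
                a = random.randrange(N_ACTIONS)
            else:                                  # exploit own learned values
                a = max(range(N_ACTIONS), key=lambda k: q[i][k])
            r = 1.0 if random.random() < REWARD_P[a] else 0.0
            q[i][a] += ALPHA * (r - q[i][a])       # individual ontogenetic learning
            score[i] += r

    print("mean score with imitation:", sum(score) / N_AGENTS)

Setting P_IMITATE = 0 recovers the purely individual-learning baseline, so the same script can compare populations with and without imitation.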

Author(s):  
Marco Boaretto ◽  
Gabriel Chaves Becchi ◽  
Luiza Scapinello Aquino ◽  
Aderson Cleber Pifer ◽  
Helon Vicente Hultmann Ayala ◽  
...  

Processes ◽  
2021 ◽  
Vol 9 (3) ◽  
pp. 487
Author(s):  
Fumitake Fujii ◽  
Akinori Kaneishi ◽  
Takafumi Nii ◽  
Ryu’ichiro Maenishi ◽  
Soma Tanaka

Proportional–integral–derivative (PID) control remains the primary choice for industrial process control problems. However, owing to the increased complexity and precision requirements of current industrial processes, a conventional PID controller may deliver unsatisfactory performance, or the determination of PID gains may become quite difficult. To address these issues, studies have suggested combining reinforcement learning with PID control laws. The present study extends this idea to the control of a multiple-input multiple-output (MIMO) process that suffers from both physical coupling between inputs and a long input/output lag. We specifically target a thin film production process as an example of such a MIMO process and propose a self-tuning two-degree-of-freedom PI controller for the film thickness control problem. The self-tuning functionality of the proposed control system is based on the actor-critic reinforcement learning algorithm. We also propose a method to compensate for the input coupling. Numerical simulations under several likely scenarios demonstrate the enhanced control performance relative to that of a conventional static-gain PI controller.
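As a concrete, minimal sketch of this idea (the plant model, value features, and hyperparameters below are assumptions for illustration, not the authors' film-thickness controller), an actor-critic loop can adapt PI gains online against a first-order-lag plant with input delay:

    # Actor-critic self-tuning of PI gains on an assumed delayed first-order plant.
    import random

    DT, TAU, DELAY = 0.1, 2.0, 5                   # assumed plant: lag + input delay
    ALPHA_A, ALPHA_C, SIGMA, GAMMA = 1e-4, 1e-2, 0.05, 0.95

    kp, ki = 0.5, 0.1                              # PI gains adapted by the actor
    v_w = 0.0                                      # critic: V(e) ~ v_w * e**2
    y = integ = 0.0
    buf = [0.0] * DELAY                            # models the long input/output lag
    setpoint = 1.0

    for _ in range(5000):
        e = setpoint - y
        kp_t = kp + random.gauss(0.0, SIGMA)       # actor: Gaussian exploration
        ki_t = ki + random.gauss(0.0, SIGMA)       #        around current gains
        integ += e * DT
        buf.append(kp_t * e + ki_t * integ)        # PI control law
        y += DT / TAU * (buf.pop(0) - y)           # delayed plant step
        e_next = setpoint - y
        r = -e_next * e_next                       # reward: negative squared error
        td = r + GAMMA * v_w * e_next**2 - v_w * e**2
        v_w += ALPHA_C * td * e**2                 # critic: TD(0) update
        kp = max(0.0, kp + ALPHA_A * td * (kp_t - kp) / SIGMA**2)  # policy-gradient
        ki = max(0.0, ki + ALPHA_A * td * (ki_t - ki) / SIGMA**2)  # step on gains

    print(f"tuned gains: Kp={kp:.3f}, Ki={ki:.3f}, error={setpoint - y:.4f}")

A MIMO version along the lines of the paper would additionally compensate the coupling between inputs (e.g., with a static decoupling matrix) before applying per-loop PI control.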


2021 ◽  
Vol 54 (3-4) ◽  
pp. 417-428
Author(s):  
Yanyan Dai ◽  
KiDong Lee ◽  
SukGyu Lee

Rotary inverted pendulum systems are a standard benchmark for nonlinear control. Without a deep understanding of control theory, it is difficult to control a rotary inverted pendulum platform using classical control engineering models. This paper therefore controls the platform without classical control design, by training and testing a reinforcement learning algorithm. Despite many recent achievements in reinforcement learning (RL), there has been little research on quickly testing high-frequency RL algorithms in a real hardware environment. In this paper, we propose a real-time hardware-in-the-loop (HIL) control system to train and test a deep reinforcement learning algorithm from simulation through to real hardware implementation. The agent is implemented with the Double Deep Q-Network (DDQN) with prioritized experience replay, requiring no deep understanding of classical control engineering. For the real experiment, we define 21 actions to swing up the rotary inverted pendulum and keep its motion smooth during balancing. Compared with the Deep Q-Network (DQN), the DDQN with prioritized experience replay reduces the overestimation of Q-values and shortens training time. Finally, we present experimental results comparing classical control theory with different reinforcement learning algorithms.
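The two components named above can be sketched compactly. In the sketch below (interfaces and hyperparameters are assumptions, not the authors' pendulum code), the Double-DQN target selects the next action with the online Q-function but evaluates it with the target Q-function, which is what curbs the overestimation mentioned in the abstract, while a proportional prioritized replay buffer resamples high-TD-error transitions more often:

    # Double-DQN target + proportional prioritized replay (IS weights omitted).
    import random

    class PrioritizedReplay:
        def __init__(self, capacity=10000, alpha=0.6):
            self.capacity, self.alpha = capacity, alpha
            self.data, self.prios = [], []

        def add(self, transition, td_error):
            if len(self.data) >= self.capacity:    # evict oldest when full
                self.data.pop(0)
                self.prios.pop(0)
            self.data.append(transition)
            self.prios.append((abs(td_error) + 1e-6) ** self.alpha)

        def sample(self, batch_size):
            # sampling probability proportional to (|TD error| + eps)^alpha
            idx = random.choices(range(len(self.data)), weights=self.prios, k=batch_size)
            return [self.data[i] for i in idx], idx

    def ddqn_target(q_online, q_target, reward, next_state, done, n_actions, gamma=0.99):
        if done:
            return reward
        # online net selects the action, target net evaluates it (Double DQN)
        best_a = max(range(n_actions), key=lambda a: q_online(next_state, a))
        return reward + gamma * q_target(next_state, best_a)

After each gradient step, the sampled transitions would be re-inserted (or updated in place) with their new TD errors so that priorities track the current networks.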


Author(s):  
Seyed Mohammad Jafar Jalali ◽  
Gerardo J. Osorio ◽  
Sajad Ahmadian ◽  
Mohamed Lotfi ◽  
Vasco Campos ◽  
...  

2019 ◽  
Author(s):  
Allison Letkiewicz ◽  
Amy L. Cochran ◽  
Josh M. Cisler

Trauma and trauma-related disorders are characterized by altered learning styles. Two learning processes that have been delineated using computational modeling are model-free and model-based reinforcement learning (RL), characterized by trial-and-error learning and goal-directed, rule-based learning, respectively. Prior research suggests that model-free RL is disrupted among individuals with a history of assaultive trauma and may contribute to altered fear responding. It is currently unclear whether model-based RL, which involves building abstract and nuanced representations of stimulus-outcome relationships to prospectively predict action-related outcomes, is also impaired among individuals who have experienced trauma. The present study tested the hypothesis of impaired model-based RL among adolescent females exposed to assaultive trauma. Participants (n=60) completed a three-arm bandit RL task during fMRI acquisition. Two computational models were fit to compare the degree to which each participant's task behavior reflected a model-free versus a model-based RL strategy. Overall, a greater proportion of participants' behavior was better captured by the model-based than by the model-free model. Although assaultive trauma per se did not predict learning strategy use, greater sexual abuse severity predicted less use of model-based relative to model-free RL. Additionally, greater sexual abuse severity predicted less left frontoparietal network encoding of model-based RL updates, an effect not accounted for by PTSD. Given the significant impact that sexual trauma has on mental health and other aspects of functioning, it is plausible that altered model-based RL is an important route through which clinical impairment emerges.
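As a stylized contrast between the two learning processes (illustrative only, not the study's fitted computational models), the sketch below runs a model-free delta-rule learner and a model-based learner, which builds an explicit estimate of stimulus-outcome probabilities, on the same three-arm bandit:

    # Model-free vs. model-based learners on a three-arm bandit (assumed payoffs).
    import random

    TRUE_P = [0.2, 0.5, 0.8]                       # assumed reward probabilities
    TRIALS, ALPHA, EPS = 300, 0.2, 0.1

    q = [0.0, 0.0, 0.0]                            # model-free action values
    wins, pulls = [0, 0, 0], [0, 0, 0]             # model-based outcome counts
    mf_total = mb_total = 0

    for _ in range(TRIALS):
        # model-free: trial-and-error value update
        a = random.randrange(3) if random.random() < EPS else max(range(3), key=lambda k: q[k])
        r = random.random() < TRUE_P[a]
        q[a] += ALPHA * (r - q[a])                 # delta rule
        mf_total += r
        # model-based: act on learned stimulus-outcome probabilities
        est = [(wins[k] + 1) / (pulls[k] + 2) for k in range(3)]   # Laplace estimate
        a = random.randrange(3) if random.random() < EPS else max(range(3), key=lambda k: est[k])
        r = random.random() < TRUE_P[a]
        wins[a] += r
        pulls[a] += 1
        mb_total += r

    print("model-free reward:", mf_total, "| model-based reward:", mb_total)

Fitting such models to choice data, as in the study, amounts to asking which learner's trial-by-trial action probabilities better explain each participant's choices.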

