Neural Substrates of Changepoint Detection and Reinforcement Learning in Foraging Behavior

2011 ◽  
Author(s):  
Wolfgang M. Pauli ◽  
Matt Jones


2010 ◽
Vol 16 (1) ◽  
pp. 21-37 ◽  
Author(s):  
Chris Marriott ◽  
James Parker ◽  
Jörg Denzinger

We study the effects of an imitation mechanism on a population of animats capable of individual ontogenetic learning. An urge to imitate others augments the network-based reinforcement learning strategy used in the animats' control system. We test populations of animats with imitation against populations without it, comparing their ability to find, and maintain across generations, successful foraging behavior in an environment containing three necessary resources: food, water, and shelter. We conclude that even simple imitation mechanisms are effective at increasing the frequency of success, measured both over time and across populations of animats.
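Read as an algorithm, the mechanism pairs individual trial-and-error learning with a bias toward actions observed in conspecifics. A minimal sketch follows; the paper uses a network-based controller, so the tabular Q-learner and all names and parameter values here (alpha, gamma, imitation_weight) are illustrative assumptions, not the authors' implementation.

```python
import random
from collections import defaultdict

class ImitatingAnimat:
    """Toy imitation-augmented reinforcement learner (illustrative only)."""

    def __init__(self, actions, alpha=0.1, gamma=0.9,
                 epsilon=0.1, imitation_weight=0.5):
        self.q = defaultdict(float)        # (state, action) -> learned value
        self.observed = defaultdict(int)   # (state, action) -> times seen in others
        self.actions = actions
        self.alpha, self.gamma = alpha, gamma
        self.epsilon = epsilon
        self.imitation_weight = imitation_weight

    def observe_other(self, state, action):
        """Record an action performed by another animat in this state."""
        self.observed[(state, action)] += 1

    def choose(self, state):
        """Pick an action by learned value plus an urge to imitate."""
        if random.random() < self.epsilon:
            return random.choice(self.actions)
        def score(a):
            return (self.q[(state, a)]
                    + self.imitation_weight * self.observed[(state, a)])
        return max(self.actions, key=score)

    def learn(self, state, action, reward, next_state):
        """Standard one-step Q-learning update (the ontogenetic part)."""
        best_next = max(self.q[(next_state, a)] for a in self.actions)
        td_error = reward + self.gamma * best_next - self.q[(state, action)]
        self.q[(state, action)] += self.alpha * td_error
```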


2021 ◽  
Vol 11 (6) ◽  
pp. 2856
Author(s):  
Fidel Aznar ◽  
Mar Pujol ◽  
Ramón Rizo

This article presents a macroscopic swarm foraging behavior obtained using deep reinforcement learning. The target behavior is a complex task in which a group of simple agents must push an object to a goal position using only their own bodies, without special gripping mechanisms. Our system is designed to combine basic fuzzy behaviors that control obstacle avoidance and the low-level rendezvous processes needed for the foraging task. We use a realistically modeled swarm of differential-drive robots equipped with light detection and ranging (LiDAR) sensors. In contrast to end-to-end systems, the learned macroscopic behavior combines existing microscopic tasks, which keeps the dimensionality and complexity of the learning problem tractable for a realistic robotic swarm system. The resulting behavior performs the macroscopic foraging task correctly, robustly, and scalably, even in situations not seen during training. We analyze the behavior in detail, examining both the swarm's movement while performing the task and its scalability.
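The abstract describes a layered design: hand-crafted microscopic behaviors blended under a learned high-level policy. A minimal sketch of that blending step follows, with toy stand-ins for the fuzzy behaviors; the behavior rules, weights, and function names are assumptions, and the deep RL policy that would choose the weights is omitted.

```python
import math

def avoid_obstacles(lidar_ranges):
    """Steer away from the closest LiDAR return (toy stand-in for a fuzzy rule)."""
    closest = min(range(len(lidar_ranges)), key=lambda i: lidar_ranges[i])
    bearing = 2 * math.pi * closest / len(lidar_ranges)
    strength = max(0.0, 1.0 - lidar_ranges[closest])   # stronger when nearer
    return (-math.cos(bearing) * strength, -math.sin(bearing) * strength)

def rendezvous(my_pos, object_pos):
    """Head toward the object to be pushed."""
    dx, dy = object_pos[0] - my_pos[0], object_pos[1] - my_pos[1]
    norm = math.hypot(dx, dy) or 1.0
    return (dx / norm, dy / norm)

def blended_command(weights, lidar_ranges, my_pos, object_pos):
    """Mix the microscopic behaviors with weights that a learned
    high-level policy would supply."""
    w_avoid, w_rdv = weights
    a = avoid_obstacles(lidar_ranges)
    r = rendezvous(my_pos, object_pos)
    return (w_avoid * a[0] + w_rdv * r[0],
            w_avoid * a[1] + w_rdv * r[1])

# Example: a policy favoring rendezvous when the path is clear.
cmd = blended_command((0.2, 0.8), [1.5] * 16, (0.0, 0.0), (3.0, 4.0))
```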


Author(s):  
Elliot A. Ludvig ◽  
Marc G. Bellemare ◽  
Keir G. Pearson

In the last 15 years, there has been a flourishing of research into the neural basis of reinforcement learning, drawing together insights and findings from psychology, computer science, and neuroscience. This remarkable confluence of three fields has yielded a growing framework that begins to explain how animals and humans learn to make decisions in real time. Mastering the literature in this sub-field can be daunting, as it requires familiarity with at least three disciplines, each with its own jargon, perspectives, and shared background knowledge. In this chapter, the authors attempt to make this fascinating line of research more accessible to researchers in any of the constitutive sub-disciplines. To this end, they develop a primer for reinforcement learning in the brain that lays out in plain language many of the key ideas and concepts that underpin research in this area. The primer is embedded in a literature review that aims to be representative, rather than comprehensive, of the types of questions and answers that have arisen in the quest to understand reinforcement learning and its neural substrates. Drawing on the basic findings of this research enterprise, the authors conclude with some speculations about how these developments in computational neuroscience may influence future developments in Artificial Intelligence.
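Among the key ideas such a primer typically lays out is the temporal-difference prediction error, the computational quantity most often compared with phasic dopamine signals. A minimal sketch of the TD(0) update, with illustrative parameter values:

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.95):
    """One TD(0) update of a state-value table V; returns the
    prediction error delta = r + gamma * V(s') - V(s)."""
    delta = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    V[s] = V.get(s, 0.0) + alpha * delta
    return delta

# A cue that reliably precedes reward gradually acquires value, and the
# prediction error migrates from the reward itself back to the cue.
V = {}
for _ in range(100):
    td_update(V, "cue", 0.0, "reward_state")
    td_update(V, "reward_state", 1.0, "terminal")
```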


2020 ◽  
Vol 25 (4) ◽  
pp. 588-595
Author(s):  
Boyin Jin ◽  
Yupeng Liang ◽  
Ziyao Han ◽  
Kazuhiro Ohkura

2021 ◽  
Author(s):  
Fei-Yang Huang ◽  
Fabian Grabenhorst

Animals make adaptive food choices to acquire nutrients that are essential for survival. In reinforcement learning (RL), animals choose by assigning values to options and update these values with new experiences. This framework has been instrumental for identifying fundamental learning and decision variables, and their neural substrates. However, canonical RL models do not explain how learning depends on biologically critical intrinsic reward components, such as nutrients, and related homeostatic regulation. Here, we investigated this question in monkeys making choices for nutrient-defined food rewards under varying reward probabilities. We found that the nutrient composition of rewards strongly influenced monkeys' choices and learning. The animals preferred rewards high in nutrient content and showed individual preferences for specific nutrients (sugar, fat). These nutrient preferences affected how the animals adapted to changing reward probabilities: the monkeys learned faster from preferred nutrient rewards and chose them frequently even when they were associated with lower reward probability. Although more recently experienced rewards generally had a stronger influence on monkeys' choices, the impact of reward history depended on the rewards' specific nutrient composition. A nutrient-sensitive RL model captured these processes. It updated the value of individual sugar and fat components of expected rewards from experience and integrated them into scalar values that explained the monkeys' choices. Our findings indicate that nutrients constitute important reward components that influence subjective valuation, learning and choice. Incorporating nutrient-value functions into RL models may enhance their biological validity and help reveal unrecognized nutrient-specific learning and decision computations.
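In the spirit of the nutrient-sensitive model described above, a minimal sketch: per-option sugar and fat estimates are updated separately from experience and then integrated into the scalar value that guides choice. The delta-rule form, the preference weights, and the nutrient-specific learning rates are illustrative assumptions, not the authors' fitted model.

```python
def update_nutrient_values(est, option, outcome, alphas):
    """Delta-rule update of each nutrient estimate for the chosen option.

    est:     {option: {"sugar": float, "fat": float}} current estimates
    outcome: {"sugar": float, "fat": float} amounts received (0 if unrewarded)
    alphas:  per-nutrient learning rates, e.g. higher for preferred nutrients
    """
    for nutrient, received in outcome.items():
        err = received - est[option][nutrient]
        est[option][nutrient] += alphas[nutrient] * err

def scalar_value(est, option, prefs):
    """Integrate nutrient estimates into one scalar value for choice."""
    return sum(prefs[n] * est[option][n] for n in est[option])

# Example: a sugar-preferring animal learns faster about sugar and
# weights it more heavily when valuing option "A".
est = {"A": {"sugar": 0.0, "fat": 0.0}, "B": {"sugar": 0.0, "fat": 0.0}}
update_nutrient_values(est, "A", {"sugar": 1.0, "fat": 0.2},
                       alphas={"sugar": 0.3, "fat": 0.1})
v_a = scalar_value(est, "A", prefs={"sugar": 0.7, "fat": 0.3})
```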


2015 ◽  
Vol 27 (2) ◽  
pp. 319-333 ◽  
Author(s):  
A. Ross Otto ◽  
Anya Skatova ◽  
Seth Madlon-Kay ◽  
Nathaniel D. Daw

Accounts of decision-making and its neural substrates have long posited the operation of separate, competing valuation systems in the control of choice behavior. Recent theoretical and experimental work suggests that this classic distinction between behaviorally and neurally dissociable systems for habitual and goal-directed (or, more generally, automatic and controlled) choice may arise from two computational strategies for reinforcement learning (RL), called model-free and model-based RL, but the cognitive or computational processes by which one system comes to dominate the other in the control of behavior remain a matter of ongoing investigation. To elucidate this question, we leverage the theoretical framework of cognitive control, demonstrating that individual differences in the use of goal-related contextual information to overcome habitual, stimulus-driven responses in established cognitive control paradigms predict model-based behavior in a separate, sequential choice task. This behavioral correspondence between cognitive control and model-based RL compellingly suggests that a common set of processes may underpin the two behaviors. In particular, computational mechanisms originally proposed to underlie controlled behavior may be applicable to understanding the interactions between model-based and model-free choice behavior.
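A common way to formalize the arbitration at issue here is a hybrid valuation in which a weight w mixes model-based and model-free action values, with higher w corresponding to more model-based (controlled) behavior. A minimal sketch under that assumption; the structures and numbers are illustrative, not the authors' fitted model.

```python
def hybrid_q(q_mf, transition, q_stage2, w):
    """Blend model-free and model-based values for first-stage actions.

    q_mf:       {action: model-free value from direct TD updates}
    transition: {action: {state: probability}} learned world model
    q_stage2:   {state: value of the best second-stage option}
    w:          degree of model-based control (0 = habitual, 1 = planned)
    """
    q = {}
    for a in q_mf:
        q_mb = sum(p * q_stage2[s] for s, p in transition[a].items())
        q[a] = w * q_mb + (1.0 - w) * q_mf[a]
    return q

# Example: a strongly model-based agent (w = 0.9) recomputes values
# from the transition model and overrides a stale habit for "left".
q = hybrid_q(
    q_mf={"left": 0.8, "right": 0.2},
    transition={"left": {"s1": 0.7, "s2": 0.3},
                "right": {"s1": 0.3, "s2": 0.7}},
    q_stage2={"s1": 0.1, "s2": 0.9},
    w=0.9,
)
```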


2011 ◽  
Vol 21 (1) ◽  
pp. 5-14
Author(s):  
Christy L. Ludlow

The premise of this article is that an increased understanding of the brain bases for normal speech and voice behavior will provide a sound foundation for developing therapeutic approaches to establish or re-establish these functions. The article addresses the neural substrates involved in speech and voice behaviors, the types of muscle patterning used for speech and voice, the brain networks involved and their regulation, and how these networks can be externally modulated to improve function.


Ecography ◽  
2000 ◽  
Vol 23 (1) ◽  
pp. 21-31 ◽  
Author(s):  
Mary E. Clark ◽  
Thomas G. Wolcott ◽  
Donna L. Wolcott ◽  
Anson H. Hines
