Operant Behavior
Recently Published Documents


TOTAL DOCUMENTS: 550 (five years: 31)

H-INDEX: 41 (five years: 1)

2022 ◽ Author(s): Werner K. Honig, J. E. R. Staddon

2022 ◽ pp. 53-97 ◽ Author(s): Barry Schwartz, Elkan Gamzu

2021 ◽ Vol 48 (9) ◽ pp. 1623-1630 ◽ Author(s): E. P. Murtazina, I. S. Buyanova, Yu. A. Ginzburg-Shik

Author(s): Rodolfo Bernal-Gamboa, Tere A. Mason, Javier Nieto, A. Matías Gámez

2021 ◽ Vol 74 ◽ pp. 101728 ◽ Author(s): Carolyn M. Ritchey, Toshikazu Kuroda, Jillian M. Rung, Christopher A. Podlesnik

eLife ◽ 2021 ◽ Vol 10 ◽ Author(s): Bridget Alexandra Matikainen-Ankney, Thomas Earnest, Mohamed Ali, Eric Casey, Justin G Wang, ...

Feeding is critical for survival, and disruptions in the mechanisms that govern food intake underlie disorders such as obesity and anorexia nervosa. It is important to understand both food intake and food motivation to reveal the mechanisms underlying feeding disorders. Operant behavioral testing can be used to measure the motivational component of feeding, but most food intake monitoring systems do not measure operant behavior. Here, we present a new solution for monitoring both food intake and motivation in rodent home-cages: the Feeding Experimentation Device version 3 (FED3). FED3 measures food intake and operant behavior in rodent home-cages, enabling longitudinal studies of feeding behavior with minimal experimenter intervention. It has a programmable output for synchronizing behavior with optogenetic stimulation or neural recordings. Finally, FED3 design files are open-source and freely available, allowing researchers to modify FED3 to suit their needs.
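To illustrate the kind of operant measure such a device enables, here is a minimal, hypothetical sketch (not FED3 firmware; function names and the response model are assumptions) of a progressive-ratio schedule, in which each successive pellet costs more responses. The last ratio the animal completes before quitting, the "breakpoint", is a common index of food motivation.

```python
# Hypothetical sketch of a progressive-ratio (PR) operant schedule.
# Names and the response model are illustrative, not FED3 firmware.

def breakpoint_reached(max_effort, step=3):
    """Return the last ratio completed by a subject willing to emit
    at most `max_effort` responses for a single pellet.

    Required responses per pellet climb arithmetically:
    step, 2*step, 3*step, ... (a common PR variant).
    """
    ratio, completed = step, 0
    while ratio <= max_effort:   # subject keeps working while the cost is tolerable
        completed = ratio        # ratio completed -> pellet delivered
        ratio += step            # the next pellet costs more responses
    return completed

# A more motivated subject (higher tolerated effort) reaches a higher breakpoint.
print(breakpoint_reached(max_effort=10))  # -> 9
print(breakpoint_reached(max_effort=25))  # -> 24
```

In practice a device like FED3 would log each response with a timestamp, and the breakpoint would be read off the recorded session rather than simulated.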


2021 ◽ Author(s): Daniel Bennett, Yael Niv, Angela Langdon

Reinforcement learning is a powerful framework for modelling the cognitive and neural substrates of learning and decision making. Contemporary research in cognitive neuroscience and neuroeconomics typically uses value-based reinforcement-learning models, which assume that decision-makers choose by comparing learned values for different actions. However, another possibility is suggested by a simpler family of models, called policy-gradient reinforcement learning. Policy-gradient models learn by optimizing a behavioral policy directly, without the intermediate step of value learning. Here we review recent behavioral and neural findings that are more parsimoniously explained by policy-gradient models than by value-based models. We conclude that, despite the ubiquity of 'value' in reinforcement-learning models of decision making, policy-gradient models provide a lightweight and compelling alternative model of operant behavior.
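The distinction can be made concrete with a toy sketch (parameter values and the task are assumptions, not from the paper): a REINFORCE-style policy-gradient learner on a two-armed bandit adjusts action preferences directly from reward, never storing a value estimate for either arm.

```python
# Minimal policy-gradient (REINFORCE-style) learner on a two-armed bandit.
# Illustrative sketch of the model family discussed above; all parameter
# values are assumptions.
import math
import random

random.seed(0)

theta = [0.0, 0.0]        # action preferences (note: no values are stored)
alpha = 0.1               # learning rate
p_reward = [0.2, 0.8]     # true reward probabilities, unknown to the learner

def softmax(prefs):
    exps = [math.exp(p) for p in prefs]
    total = sum(exps)
    return [e / total for e in exps]

for _ in range(2000):
    probs = softmax(theta)
    action = 0 if random.random() < probs[0] else 1          # sample an action
    reward = 1.0 if random.random() < p_reward[action] else 0.0
    # Policy-gradient update: move preferences along grad log pi(action) * reward.
    for i in range(2):
        grad_log_pi = (1.0 - probs[i]) if i == action else -probs[i]
        theta[i] += alpha * reward * grad_log_pi

final_probs = softmax(theta)
print(final_probs[1])  # the learner comes to prefer the richer arm
```

A value-based counterpart would instead maintain a learned reward estimate per arm and choose by comparing those estimates; the policy-gradient learner above reaches the same behavior without that intermediate quantity.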

