Evolution of reinforcement learning in foraging bees: a simple explanation for risk averse behavior

2002 ◽ Vol 44-46 ◽ pp. 951-956
Author(s): Yael Niv, Daphna Joel, Isaac Meilijson, Eytan Ruppin

2002 ◽ Vol 10 (1) ◽ pp. 5-24
Author(s): Yael Niv, Daphna Joel, Isaac Meilijson, Eytan Ruppin

Reinforcement learning is a fundamental process by which organisms learn to achieve goals from their interactions with the environment. Using evolutionary computation techniques, we evolve (near-)optimal neuronal learning rules in a simple neural network model of reinforcement learning in bumblebees foraging for nectar. The resulting neural networks exhibit efficient reinforcement learning, allowing the bees to respond rapidly to changes in reward contingencies. The evolved synaptic plasticity dynamics give rise to varying exploration/exploitation levels and to the well-documented choice strategies of risk aversion and probability matching. Additionally, risk aversion is shown to emerge even when bees are evolved in a completely risk-less environment. In contrast to existing theories in economics and game theory, risk-averse behavior is shown to be a direct consequence of (near-)optimal reinforcement learning, without requiring additional assumptions such as the existence of a nonlinear subjective utility function for rewards. Our results are corroborated by a rigorous mathematical analysis, and their robustness in real-world situations is supported by experiments with a mobile robot. Thus we provide a biologically founded, parsimonious, and novel explanation for risk aversion and probability matching.
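
The following is a minimal illustrative sketch, not the authors' evolved network: it replaces the evolved synaptic plasticity rule with a simple delta-rule value update and epsilon-greedy choice, and uses made-up reward contingencies and parameters (flower payoffs, learning rate, trial counts). It is meant only to show the kind of effect the abstract describes: with fast reward-driven learning, a simulated bee choosing between a constant-payoff flower and a variable-payoff flower of equal mean ends up preferring the constant one, i.e., it behaves risk-aversely without any nonlinear utility function.

```python
import random

# Hypothetical foraging task (equal mean rewards, different variances):
#   "constant" flower: always 0.5 units of nectar
#   "variable" flower: 1.0 unit with probability 0.5, else 0.0
# A delta-rule update of each flower's estimated value, combined with
# mostly-greedy choice, is enough to produce risk-averse preferences.

def run_bee(trials=500, alpha=0.8, epsilon=0.1, seed=0):
    rng = random.Random(seed)
    value = {"constant": 0.5, "variable": 0.5}   # initial value estimates
    visits = {"constant": 0, "variable": 0}

    for _ in range(trials):
        # epsilon-greedy choice between the two flower types
        if rng.random() < epsilon:
            choice = rng.choice(["constant", "variable"])
        else:
            choice = max(value, key=value.get)

        # sample the reward for the chosen flower
        if choice == "constant":
            reward = 0.5
        else:
            reward = 1.0 if rng.random() < 0.5 else 0.0

        # delta-rule value update (stand-in for the evolved plasticity rule)
        value[choice] += alpha * (reward - value[choice])
        visits[choice] += 1

    return visits

if __name__ == "__main__":
    visits = run_bee()
    total = sum(visits.values())
    for flower, n in visits.items():
        print(f"{flower}: {n / total:.2f} of visits")
    # With a high learning rate, most visits go to the constant flower,
    # even though both flowers pay the same amount on average.
```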

