Research of rectal dynamic function diagnosis based on FastICA-STFT

2018 ◽ Vol 12 (8) ◽ pp. 965-969
Author(s): Peng Zan, Yankai Liu, Meihan Chang
2019 ◽ Vol 23 (1) ◽ pp. 28-40
Author(s): Yong Dou, Kiran Dhatt-Gauthier, Kyle J.M. Bishop

2021
Author(s): Agnieszka Tymula, Yuri Imaizumi, Takashi Kawai, Jun Kunimatsu, Masayuki Matsumoto, ...

Research in behavioral economics and reinforcement learning has given rise to two influential theories describing human economic choice under uncertainty. The first, prospect theory, assumes that decision-makers use static mathematical functions, utility and probability weighting, to calculate the values of alternatives. The second, reinforcement learning theory, posits that dynamic mathematical functions update the values of alternatives based on experience through reward prediction error (RPE). To date, these theories have been examined in isolation without reference to one another. Therefore, it remains unclear whether RPE affects a decision-maker's utility and/or probability weighting functions, or whether these functions are indeed static as in prospect theory. Here, we propose a dynamic prospect theory model that combines prospect theory and RPE, and test this combined model using choice data on gambling behavior of captive macaques. We found that under standard prospect theory, monkeys, like humans, had a concave utility function. Unlike humans, monkeys exhibited a concave, rather than inverse-S shaped, probability weighting function. Our dynamic prospect theory model revealed that probability distortions, not the utility of rewards, solely and systematically varied with RPE: after a positive RPE, the estimated probability weighting functions became more concave, suggesting more optimistic belief about receiving rewards and over-weighted subjective probabilities at all probability levels. Thus, the probability perceptions in laboratory monkeys are not static even after extensive training, and are governed by a dynamic function well captured by the algorithmic feature of reinforcement learning. This novel evidence supports combining these two major theories to capture choice behavior under uncertainty.
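
Because the abstract describes the model only verbally, the following is a minimal Python sketch of one way such a dynamic prospect theory agent could be written. The power utility u(x) = x^alpha, the one-parameter weighting w(p) = p^gamma, the learning rate eta, and the rule coupling the reward prediction error to gamma are all illustrative assumptions, not the authors' fitted specification.

class DynamicProspectAgent:
    """Toy dynamic prospect theory agent (illustrative, not the paper's model)."""

    def __init__(self, alpha=0.8, gamma=0.6, eta=0.05):
        self.alpha = alpha  # utility curvature; alpha < 1 gives a concave utility
        self.gamma = gamma  # weighting curvature; gamma < 1 gives a concave weighting
        self.eta = eta      # learning rate coupling RPE to the weighting parameter

    def utility(self, x):
        # Concave power utility, as reported for both humans and monkeys
        return x ** self.alpha

    def weight(self, p):
        # One-parameter concave weighting; smaller gamma means more concave (more optimistic)
        return p ** self.gamma

    def value(self, reward, prob):
        # Prospect-theory value of a simple gamble: reward with probability prob, else nothing
        return self.weight(prob) * self.utility(reward)

    def update(self, chosen_reward, chosen_prob, obtained_reward):
        # Reward prediction error: obtained utility minus the gamble's subjective value
        rpe = self.utility(obtained_reward) - self.value(chosen_reward, chosen_prob)
        # Positive RPE pushes gamma down, making the weighting more concave,
        # mirroring the reported effect; the utility parameter stays fixed
        self.gamma = min(2.0, max(0.05, self.gamma - self.eta * rpe))
        return rpe

For example, after agent = DynamicProspectAgent(), a rewarded trial such as agent.update(2.0, 0.5, obtained_reward=2.0) yields a positive RPE and lowers gamma (a more concave, more optimistic weighting), while an unrewarded trial raises it; alpha never changes. That is the qualitative pattern the abstract describes, with the numerical details here being assumptions.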


2007 ◽ Vol 49 (3)
Author(s): Alexander Thomas, Jürgen Becker

The HoneyComb, a new dynamically reconfigurable hardware architecture developed within the AMURHA project, is introduced. Its newly integrated features are intended to enhance the flexibility and usability of future array-based reconfigurable systems.

