reinforcer magnitude
Recently Published Documents


TOTAL DOCUMENTS: 83 (FIVE YEARS: 10)

H-INDEX: 19 (FIVE YEARS: 1)

2021 ◽  
Author(s):  
Jared Pickett

<p>People make different decisions when they know the odds of an event occurring (e.g., being told there is a 10% chance of an earthquake that year) than when they draw only on their own experience (e.g., living in a city with, on average, one earthquake every 10 years). When we make decisions based on past experience (decisions from experience), we may be more likely to choose a risky option when it can lead to the biggest win and to avoid it when it can lead to the biggest loss; this effect is called the Extreme-Outcome rule. Across three experiments we tested the Extreme-Outcome rule by having participants make repeated choices between safe and risky options that had the same expected value. In each experiment, we varied the magnitude of the reinforcers participants could win, in both an Experience condition and a condition that provided both description and experience information. In Experiment 1, with two reinforcer sizes (small and large), we found an Extreme-Outcome effect in the Experience condition but not in the Description-Experience condition. In Experiment 2 we tested the Extreme-Outcome rule's prediction that participants would be sensitive to both the best and the worst outcome by adding a third reinforcer size (small, medium, and large), so that on some trials neither alternative included an extreme outcome. We also removed zero as a potential outcome to investigate whether zero aversion might be driving the effect of reinforcer magnitude in the Experience condition. We did not find response patterns consistent with an Extreme-Outcome rule in the Experience condition. Instead, participants were least risk seeking when the reinforcer was small, but there was no difference in risk seeking between the medium and large reinforcer trials. In other words, there was an effect of the low-extreme outcome but not the high-extreme outcome.
As in Experiment 1, risk preference in the Description-Experience condition was not influenced by reinforcer size, but its absolute level was higher. To investigate whether this increase in risk preference was due to removing the zero, in Experiment 3 we manipulated whether zero was present or absent. When zero was absent, risk preference in the Description-Experience condition was not influenced by reinforcer size, but there was an effect of the low-extreme outcome when zero was present. We also found an effect of the low-extreme outcome in the Experience condition regardless of whether zero was present or absent. Overall, these findings suggest the Extreme-Outcome rule needs to be modified to take into account the effect of the low-extreme, but not the high-extreme, outcome.</p>
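The equal-expected-value pairing of safe and risky options described above can be sketched as follows. The amounts and probabilities are invented for illustration; they are not the study's actual stimuli.

```python
# Hypothetical equal-expected-value choice pairs, as used in
# decisions-from-experience studies: a "safe" option pays a fixed amount,
# while a "risky" option pays an extreme high outcome or zero with equal
# probability, matched on expected value.
def expected_value(outcomes):
    """Expected value of a list of (amount, probability) pairs."""
    return sum(amount * p for amount, p in outcomes)

small_safe  = [(5, 1.0)]                # small-reinforcer trials
small_risky = [(10, 0.5), (0, 0.5)]
large_safe  = [(50, 1.0)]               # large-reinforcer trials
large_risky = [(100, 0.5), (0, 0.5)]    # 100 is the high-extreme outcome

# Both pairs are matched on expected value, so any preference for the
# risky option reflects risk attitude rather than payoff maximization.
assert expected_value(small_safe) == expected_value(small_risky)
assert expected_value(large_safe) == expected_value(large_risky)
```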


2020 ◽  
Vol 53 (3) ◽  
pp. 1514-1530
Author(s):  
Jacqueline P. Rogalski ◽  
Eileen M. Roscoe ◽  
Daniel W. Fredericks ◽  
Nabil Mezhoudi

2019 ◽  
Vol 45 (2) ◽  
pp. 519-546
Author(s):  
Raymond C. Pitts ◽  
Christine E. Hughes ◽  
Dean C. Williams

Pigeons key pecked under two-component multiple fixed-interval (FI) schedules. Each component provided a different reinforcer magnitude (small or large), signaled by the color of the key light. Attacks toward a live, protected target pigeon were measured. Large- (rich) and small- (lean) reinforcer components alternated irregularly such that four different interval types (transitions) between the size of the immediately preceding reinforcer and the size of the upcoming reinforcer occurred within each session: lean-to-lean, lean-to-rich, rich-to-lean, and rich-to-rich transitions. The FI for each component was the same within each phase, but was manipulated across phases. For all pigeons, more attack occurred following the presentation of the larger reinforcer (i.e., during rich-to-lean and rich-to-rich transitions). For 2 of the 3 pigeons, this effect was modulated by the size of the upcoming reinforcer; attack following larger reinforcers was elevated when the upcoming reinforcer was small (i.e., during rich-to-lean transitions). This rich-to-lean effect on attack diminished or disappeared as the length of the FI schedule was increased (i.e., control over attack by the upcoming reinforcer size diminished with increases in the inter-reinforcement interval). For all pigeons and at all FIs, however, postreinforcement pauses were longest during the rich-to-lean transitions. These data (1) are consistent with the notion that postreinforcement periods during intermittent schedules function aversively and, thus, can precipitate aggressive behavior, and (2) suggest that rich-to-lean conditions may be especially aversive. They also indicate, however, that aversive effects of rich-to-lean transitions may differ across fixed-ratio (FR) and FI schedules, and that variables controlling attacking and pausing may not be isomorphic between these different schedule types.
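The four transition types in this design are fully determined by the sequence of rich and lean components. A minimal sketch of how they could be labelled from a session's component sequence; the sequence below is invented for illustration:

```python
# Label each component transition by the preceding and upcoming reinforcer
# magnitude, producing the four types described in the abstract:
# lean-to-lean, lean-to-rich, rich-to-lean, and rich-to-rich.
def label_transitions(components):
    """Pair each component with its predecessor, e.g. ('rich', 'lean') -> 'rich-to-lean'."""
    return [f"{prev}-to-{curr}" for prev, curr in zip(components, components[1:])]

# Hypothetical within-session component order (not the study's actual sequence).
session = ["lean", "rich", "rich", "lean", "lean"]
print(label_transitions(session))
# ['lean-to-rich', 'rich-to-rich', 'rich-to-lean', 'lean-to-lean']
```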


2019 ◽  
Vol 45 (2) ◽  
pp. 500-518
Author(s):  
Dean C. Williams ◽  
Yusuke Hayashi ◽  
Adam Brewer ◽  
Kathryn J. Saunders ◽  
Stephen Fowler ◽  
...  

Two pigeons key pecked under a two-component multiple fixed-ratio (FR) schedule. Each component provided a different reinforcer magnitude (small or large) that was signaled by the color of the key light. Large- (rich) and small- (lean) reinforcer components randomly alternated to produce four different types of transitions between the size of the immediately preceding reinforcer and the size of the upcoming reinforcer: lean-to-lean, lean-to-rich, rich-to-lean, and rich-to-rich. During probe sessions, a mirror (which was covered during baseline sessions) was uncovered and attack responses toward the mirror were measured, along with the force of individual mirror attacks. The pigeons paused longest and attacked most frequently during the rich-to-lean transitions. The pigeons also exhibited some attacks during lean-to-lean transitions, and pauses were longer during these transitions than during the lean-to-rich and rich-to-rich transitions. Pauses were short and attack infrequent during these last two transition types. In addition, attacks were more forceful during the rich-to-lean transitions than during the other transition types. These data are consistent with the view that rich-to-lean transitions function aversively and, as such, generate behavior patterns, including aggression, commonly produced by other aversive stimuli.


2019 ◽  
Vol 43 (6) ◽  
pp. 774-789 ◽  
Author(s):  
Raechal H. Ferguson ◽  
Terry S. Falcomata ◽  
Andrea Ramirez-Cristoforo ◽  
Fabiola Vargas Londono

Interventions aimed at increasing communicative response variability hold particular importance for individuals with autism spectrum disorders (ASD). Several procedures have been demonstrated in the applied and translational literature to increase response variability. However, little is known about the relationship between reinforcer magnitude and response variability. In the basic literature, Doughty, Giorno, and Miller evaluated the effects of reinforcer magnitude on behavioral variability by manipulating reinforcer magnitude across alternating relative frequency threshold contingencies, with results suggesting that larger reinforcers induced repetitive responding. The purpose of this study was to translate Doughty et al.'s findings to evaluate the relative effects of different magnitudes of reinforcement on communicative response variability in children with ASD. A Lag 1 schedule of reinforcement was in place during each condition within an alternating treatments design. Magnitudes of reinforcement contingent on variable communicative responding were manipulated across the two conditions. Inconsistent with the basic findings, the results showed higher levels of variable communicative responding associated with the larger magnitude of reinforcement. These outcomes may have implications for interventions aimed at increasing response variability in individuals with ASD, as well as for future research in this area.
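A Lag 1 schedule reinforces a response only if it differs from the immediately preceding response. A minimal sketch of that contingency, with hypothetical response names standing in for the communicative responses:

```python
# A Lag 1 variability contingency: each response is eligible for
# reinforcement only if it differs from the response just before it.
# (The first response trivially meets the criterion.)
def lag1_reinforced(responses):
    """Return, for each response, whether it meets the Lag 1 criterion."""
    eligible = []
    prev = None
    for r in responses:
        eligible.append(prev is None or r != prev)
        prev = r
    return eligible

# Hypothetical response stream: only the repeated response fails the criterion.
print(lag1_reinforced(["want ball", "want ball", "ball please", "want ball"]))
# [True, False, True, True]
```

A Lag n schedule generalizes this by comparing each response to the previous n responses rather than only the most recent one.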

