reinforcement history
Recently Published Documents


TOTAL DOCUMENTS: 59 (FIVE YEARS: 5)

H-INDEX: 15 (FIVE YEARS: 1)

2021 ◽ Vol 158 ◽ pp. 108004
Author(s): Oren Griffiths, O. Scott Gwinn, Salvatore Russo, Irina Baetu, Michael E.R. Nicholls

Author(s): Michelle R. Doyle, Agnieszka Sulima, Kenner C. Rice, Gregory T. Collins

2020
Author(s): C. K. Jonas Chan, Justin Harris

Pavlovian conditioning is sensitive to the temporal relationship between conditioned stimulus (CS) and unconditioned stimulus (US). This has motivated models that describe learning as a process that continuously updates associative strength during the trial or specifically encodes the CS-US interval. These models predict that extinction of responding is also continuous, such that response loss is proportional to the cumulative duration of exposure to the CS without the US. We review evidence showing that this prediction is incorrect, and that extinction is trial-based rather than time-based. We also present two experiments that test the importance of trials versus time on the Partial Reinforcement Extinction Effect (PREE), in which responding extinguishes more slowly for a CS that was inconsistently reinforced with the US than for a consistently reinforced one. We show that increasing the number of extinction trials of the partially reinforced CS, relative to the consistently reinforced CS, overcomes the PREE. However, increasing the duration of extinction trials by the same amount does not overcome the PREE. We conclude that animals learn about the likelihood of the US per trial during conditioning, and learn trial-by-trial about the absence of the US during extinction. Moreover, what they learn about the likelihood of the US during conditioning affects how sensitive they are to the absence of the US during extinction.
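
To make the trial-based versus time-based contrast concrete, here is a minimal sketch using a Rescorla-Wagner-style error-correcting update. It is an illustrative toy, not the model tested in the paper; the function names, learning rates, trial counts, and durations are all assumptions chosen only to show how the two accounts diverge.

```python
# Toy contrast between two accounts of extinction of associative strength V.

def extinguish_trial_based(v, n_trials, alpha=0.2):
    """Each non-reinforced trial counts as one update, whatever its duration."""
    for _ in range(n_trials):
        v += alpha * (0.0 - v)  # error relative to the absent US
    return v

def extinguish_time_based(v, n_trials, trial_duration_s, alpha_per_s=0.02):
    """Response loss accrues with cumulative CS exposure (trials x duration)."""
    for _ in range(int(n_trials * trial_duration_s)):
        v += alpha_per_s * (0.0 - v)
    return v

v0 = 1.0
# Trial-based account: only the number of trials matters, so 8 trials predict
# the same loss whether each trial lasts 10 s or 40 s.
print(extinguish_trial_based(v0, n_trials=8))
# Time-based account: 8 x 40 s trials predict more loss than 8 x 10 s trials.
print(extinguish_time_based(v0, 8, 10), extinguish_time_based(v0, 8, 40))
```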


2020 ◽ pp. e12904
Author(s): Michelle R. Doyle, Agnieszka Sulima, Kenner C. Rice, Gregory T. Collins

2019
Author(s): Justin Harris

Many theories of conditioning describe learning as a process by which stored information about the relationship between a conditioned stimulus (CS) and unconditioned stimulus (US) is progressively updated upon each occasion (trial) that the CS occurs with, or without, the US. These simple trial-based descriptions can provide a powerful and efficient means of extracting information about the correlation between two events, but they fail to explain how animals learn about the timing of events. This failure has motivated models of conditioning in which animals learn continuously, either by explicitly representing temporal intervals between events, or by sequentially updating an array of associations between temporally distributed elements of the CS and US. Here, I review evidence that some aspects of conditioning are not the consequence of a continuous learning process but reflect a trial-based process. In particular, the way that animals learn about the absence of a predicted US during extinction suggests that they encode and remember trials as single complete episodes rather than as a continuous experience of unfulfilled expectation of the US. These memories allow the animal to recognise repeated instances of non-reinforcement and encode these as a sequence which, in the case of a partial reinforcement schedule, can become associated with the US. The animal is thus able to remember details about the pattern of a CS’s reinforcement history, information that affects how long the animal continues to respond to the CS when all reinforcement ceases.
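
The idea that animals encode runs of non-reinforced trials as discrete episodes, which under partial reinforcement can themselves come to predict the US, can be illustrated with a short sketch. This is an assumed illustration in the spirit of the abstract, not the author's formal model; the function and the example trial sequences are placeholders.

```python
# Count how often each run-length of non-reinforced trials ended in reinforcement.
from collections import defaultdict

def runs_followed_by_reward(outcomes):
    """`outcomes` is a trial-by-trial sequence: 1 = US delivered, 0 = US omitted."""
    counts = defaultdict(int)
    run = 0
    for us in outcomes:
        if us:
            if run > 0:
                counts[run] += 1  # this run of omissions was eventually rewarded
            run = 0
        else:
            run += 1
    return dict(counts)

# Partial reinforcement: runs of omissions are routinely followed by the US,
# so remembered non-reinforcement itself becomes a cue to keep responding.
print(runs_followed_by_reward([1, 0, 0, 1, 0, 1, 0, 0, 0, 1]))
# Continuous reinforcement: no run of omissions is ever followed by the US,
# so extinction is recognised as novel more quickly.
print(runs_followed_by_reward([1, 1, 1, 1, 1, 1]))
```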


2018 ◽ Vol 32 (S1)
Author(s): Michelle R. Doyle, Agnieszka Sulima, Kenner C. Rice, Gregory T. Collins

2016 ◽ Vol 4 (2) ◽ pp. 147-166
Author(s): Aaron P. Smith, Jennifer R. Peterson, Kimberly Kirkpatrick

Despite considerable interest in impulsive choice as a predictor of a variety of maladaptive behaviors, the mechanisms that drive choice behavior are still poorly understood. The present study sought to examine the influence of one understudied variable, reward magnitude contrast, on choice and timing behavior, as changes in magnitude commonly occur within choice procedures. In addition, indirect effects on choice behavior through magnitude-timing interactions were assessed by measuring timing within the choice task. Rats were exposed to choice procedures composed of different pairs of reward magnitudes for the smaller-sooner (SS) and larger-later (LL) options. In Phase 2, the magnitude of reward either increased or decreased by one pellet in different groups (LL increase = 1v1 → 1v2; SS decrease = 2v2 → 1v2; SS increase = 1v2 → 2v2), followed by a return to baseline in Phase 3. Choice behavior was affected by the initial magnitudes experienced in the task, an anchor effect. The nature of the change in magnitude affected choice behavior as well. Timing behavior was also affected by the reward contrast manipulation, albeit to a lesser degree, and the timing and choice effects were correlated. The results suggest that models of choice behavior should incorporate reinforcement history, reward contrast elements, and magnitude-timing interactions, but that direct effects of reward contrast on choice should be given more weight than the indirect reward-timing interactions. A better understanding of the factors that contribute to choice behavior could supply key insights into this important individual-differences variable.
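
As a rough illustration of how reward magnitude feeds into SS versus LL choice, the sketch below uses simple hyperbolic discounting, value = magnitude / (1 + k * delay). This is not the authors' analysis; the discounting parameter k, the delays, and the pellet counts are arbitrary assumptions used only to show the direction of the predicted shifts.

```python
# Hyperbolic-discounting toy for smaller-sooner (SS) vs larger-later (LL) choice.

def discounted_value(magnitude_pellets, delay_s, k=0.05):
    """Hyperbolically discounted value of a delayed reward."""
    return magnitude_pellets / (1.0 + k * delay_s)

def prefers_ll(ss_pellets, ll_pellets, ss_delay=10.0, ll_delay=30.0, k=0.05):
    """True if the larger-later option has the higher discounted value."""
    return discounted_value(ll_pellets, ll_delay, k) > discounted_value(ss_pellets, ss_delay, k)

# 1v1 -> 1v2: increasing the LL magnitude should shift preference toward LL.
print(prefers_ll(1, 1), prefers_ll(1, 2))
# 2v2 -> 1v2: decreasing the SS magnitude should likewise favor LL.
print(prefers_ll(2, 2), prefers_ll(1, 2))
```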

