Pavlovian conditioning under partial reinforcement: The effects of non-reinforced trials versus cumulative CS duration

2020
Author(s): Justin Harris, Mark Bouton

A core feature of associative models, such as those proposed by Allan Wagner (Rescorla & Wagner, 1972; Wagner, 1981), is that conditioning proceeds in a trial-by-trial fashion, with increments and decrements in associative strength occurring on each occasion that the conditioned stimulus (CS) is present either with or without the unconditioned stimulus (US). A very different approach has been taken by theories that assume animals continuously accumulate information about the total length of time spent waiting for the US both during the CS and in the absence of the CS (e.g., Gallistel & Gibbon, 2000). Here we describe three experiments using within-subject designs that tested between trial-based and time-accumulation accounts of the acquisition of conditioned responding using magazine approach conditioning in rats. We found that responding was affected by the total (cumulative) duration of exposure to the CS without the US rather than the number of trials on which the CS occurred without the US. We also found that exposure to the CS without the US had the same effect on conditioning whether that exposure occurred shortly (60 s) before each CS-US pairing or whether it occurred long (240 s) before each pairing. These findings are more consistent with time-accumulation models of conditioning than trial-based models like the Rescorla-Wagner model and Wagner’s (1981) Sometimes Opponent Process model. We discuss these findings in relation to other evidence that favours trial-based models rather than time-accumulation models.
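
To make the contrast concrete, here is a minimal sketch (ours, not the authors' code; all parameter values are illustrative) of the two classes of model: a trial-based Rescorla-Wagner update, in which associative strength changes once per trial regardless of the trial's duration, and a simple cumulative-time account in the spirit of Gallistel and Gibbon's (2000) rate-estimation idea, in which responding tracks reinforcers per unit of accumulated CS time.

```python
import numpy as np

def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Trial-based learning: associative strength V changes once per trial,
    in proportion to the prediction error. `trials` is a sequence of 1
    (CS-US pairing) or 0 (CS alone); trial duration plays no role."""
    V, history = 0.0, []
    for reinforced in trials:
        V += alpha * ((lam if reinforced else 0.0) - V)
        history.append(V)
    return history

def cumulative_time_strength(trials, durations_s, k=10.0):
    """Time-accumulation account: what matters is reinforcers earned per
    unit of accumulated CS exposure, not the number of trials."""
    cs_time = np.cumsum(durations_s)      # total CS seconds so far
    us_count = np.cumsum(trials)          # total reinforcers so far
    return k * us_count / cs_time         # response strength ~ US rate in the CS

# 50% partial reinforcement: same trial sequence, two different durations.
trials = [1, 0, 1, 0, 1, 0, 1, 0]
print(rescorla_wagner(trials)[-1])                       # unchanged by duration
print(cumulative_time_strength(trials, [10.0] * 8)[-1])  # 10-s trials
print(cumulative_time_strength(trials, [20.0] * 8)[-1])  # 20-s trials halve the rate
```

Doubling the duration of the non-reinforced exposure changes only the time-based prediction, which is exactly the lever the within-subject designs described above pull.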

2020
Author(s): C. K. Jonas Chan, Justin Harris

Pavlovian conditioning is sensitive to the temporal relationship between conditioned stimulus (CS) and unconditioned stimulus (US). This has motivated models that describe learning as a process that continuously updates associative strength during the trial or specifically encodes the CS-US interval. These models predict that extinction of responding is also continuous, such that response loss is proportional to the cumulative duration of exposure to the CS without the US. We review evidence showing that this prediction is incorrect, and that extinction is trial-based rather than time-based. We also present two experiments that test the importance of trials versus time on the Partial Reinforcement Extinction Effect (PREE), in which responding extinguishes more slowly for a CS that was inconsistently reinforced with the US than for a consistently reinforced one. We show that increasing the number of extinction trials of the partially reinforced CS, relative to the consistently reinforced CS, overcomes the PREE. However, increasing the duration of extinction trials by the same amount does not overcome the PREE. We conclude that animals learn about the likelihood of the US per trial during conditioning, and learn trial-by-trial about the absence of the US during extinction. Moreover, what they learn about the likelihood of the US during conditioning affects how sensitive they are to the absence of the US during extinction.
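
As a minimal illustration of the design logic (the numbers are hypothetical, not the experiments' actual parameters), the two accounts are pulled apart by giving one condition more extinction trials and another condition longer extinction trials, matched for total non-reinforced CS time:

```python
# Hypothetical design matrix contrasting trial number against trial duration
# in extinction; total CS exposure is matched across conditions.
designs = {
    "more trials":   {"n_trials": 24, "trial_s": 10.0},   # 240 s of CS exposure
    "longer trials": {"n_trials": 8,  "trial_s": 30.0},   # 240 s of CS exposure
}
for name, d in designs.items():
    total_s = d["n_trials"] * d["trial_s"]
    print(f"{name}: trial-based models register {d['n_trials']} non-reinforced "
          f"episodes; time-based models register {total_s:.0f} s without the US.")
```

On the abstract's account, extinction tracks the first quantity: adding extinction trials to the partially reinforced CS overcame the PREE, whereas adding the same amount of CS time as longer trials did not.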


2019
Author(s): Justin Harris

Many theories of conditioning describe learning as a process by which stored information about the relationship between a conditioned stimulus (CS) and unconditioned stimulus (US) is progressively updated upon each occasion (trial) that the CS occurs with, or without, the US. These simple trial-based descriptions can provide a powerful and efficient means of extracting information about the correlation between two events, but they fail to explain how animals learn about the timing of events. This failure has motivated models of conditioning in which animals learn continuously, either by explicitly representing temporal intervals between events, or by sequentially updating an array of associations between temporally distributed elements of the CS and US. Here, I review evidence that some aspects of conditioning are not the consequence of a continuous learning process but reflect a trial-based process. In particular, the way that animals learn about the absence of a predicted US during extinction suggests that they encode and remember trials as single complete episodes rather than as a continuous experience of unfulfilled expectation of the US. These memories allow the animal to recognise repeated instances of non-reinforcement and encode these as a sequence which, in the case of a partial reinforcement schedule, can become associated with the US. The animal is thus able to remember details about the pattern of a CS’s reinforcement history, information that affects how long the animal continues to respond to the CS when all reinforcement ceases.


SLEEP
2020
Vol 43 (Supplement_1), pp. A74-A74
Author(s): J Choynowski, M Pirner, C Mickelson, J Mantua, W J Sowden, ...

Abstract
Introduction: U.S. Army Reserve Officer Training Corps (ROTC) Cadets are college students training to be Army Officers. During a month-long capstone course (Advanced Camp), Cadets are rated on their leadership ability. Little work has been done to identify predictors of leadership ability at Advanced Camp. This study examined the effect of poor sleep and mood disorders, two factors prevalent among college students, on leadership ability.
Methods: Metrics on leadership, sleep quality, anxiety, and depression were assessed in 159 ROTC Cadets (22.06 ± 2.49 years; 23.90% female) at Days 1 (Baseline), 14 (Mid), and 29 (Post) of Advanced Camp. Leadership ratings were assigned by ROTC Instructors over the course of Advanced Camp (score of 1-5; a higher score indicates poorer leadership). Predictors were the Pittsburgh Sleep Quality Index, Generalized Anxiety Disorder-7, and Patient Health Questionnaire-9. The relationships between the predictors and leadership scores were tested using linear regression. The interaction between mood disorders and sleep quality on leadership was tested using SPSS PROCESS (Model 1).
Results: Poorer sleep quality at the Post time point (reflecting the prior two weeks of sleep) predicted poorer leadership (B = .05, p = .03), while sleep quality at Baseline (B = .03, p = .14) and Mid (B = .01, p = .67) did not. Higher anxiety and depression scores from all time points predicted poorer leadership (p-values < .03). There was an interaction: higher anxiety and higher depression predicted poorer leadership only in the context of poor sleep quality, not good or average sleep quality [anxiety: R2 = .04, F(1,159) = 6.04, p = .02; interaction: R2 = .03, F(1,155) = 5.30, p = .02].
Conclusion: The current study identified a relationship between sleep quality and leadership ratings in ROTC Cadets, a relationship moderated by anxiety and depression. ROTC Instructors should encourage Cadets to take advantage of sleep opportunities at Advanced Camp in order to maximize leadership potential.
Support: Support for this study came from the Military Operational Medicine Research Program (MOMRP) of the United States Army Medical Research and Development Command (USAMRDC). Disclaimer: The opinions and assertions contained herein are the private views of the authors and are not to be construed as official or as reflecting the views of the US Army or of the US Department of Defense.
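
For readers who want to reproduce this style of analysis, the moderation test reported above (SPSS PROCESS Model 1) is equivalent to an ordinary least-squares regression with an interaction term. A minimal sketch on simulated, hypothetical data (none of the study's actual variables or values are used):

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

rng = np.random.default_rng(0)
n = 159  # same sample size as the abstract; the data are entirely simulated

# Hypothetical stand-ins for the study's measures:
# psqi = Pittsburgh Sleep Quality Index (higher = worse sleep),
# gad7 = Generalized Anxiety Disorder-7,
# leadership = instructor rating (1-5, higher = poorer leadership).
df = pd.DataFrame({
    "psqi": rng.normal(7.0, 3.0, n),
    "gad7": rng.normal(5.0, 4.0, n),
})
df["leadership"] = (
    2.5
    + 0.02 * df["psqi"]
    + 0.01 * df["gad7"]
    + 0.01 * df["psqi"] * df["gad7"]  # moderation: anxiety matters more as sleep worsens
    + rng.normal(0.0, 0.5, n)
)

# PROCESS Model 1 is simple moderation: outcome ~ focal predictor * moderator,
# which statsmodels expands to both main effects plus the interaction term.
model = smf.ols("leadership ~ gad7 * psqi", data=df).fit()
print(model.summary().tables[1])  # the gad7:psqi row tests the interaction
```

A significant gad7:psqi coefficient corresponds to the reported pattern that anxiety predicts leadership only at poor levels of sleep quality; PROCESS then probes such an interaction by testing the simple slope at low, average, and high values of the moderator.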


2019
Vol 2019, pp. 1-6
Author(s): Erich Ritter, Raid Amin, Kevin Cahn, Jonathan Lee

The trends of the world's top ten countries in shark bite rates, defined as the ratio of a country's annual number of shark bites to its resident human population, were analyzed for the period 2000-2016. A nonparametric permutation-based methodology was used to determine whether the slope of a country's regression line remained constant over time or whether so-called joinpoints (a core feature of the statistical software Joinpoint) occurred: points at which the slope changes, such that a segmented model fits the data better than a single straight line. More than 90% of all shark bite incidents occurred along the coasts of the US, Australia, South Africa, and New Zealand. Since three of these four coasts showed a negative trend once counts were transformed into bite rates, the overall global trend is decreasing. Potential reasons for this decrease in shark bite rates, beyond growth of the world's human population (which puts more people on beaches) and the decline of shark numbers due to overfishing, are discussed.
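
A minimal sketch of the permutation logic behind joinpoint detection (our illustration, with made-up rate data; the Joinpoint software itself uses a more elaborate segmented-regression procedure): fit a single line, find the best two-segment fit, and ask how often permuted residuals would yield as large an improvement.

```python
import numpy as np

def sse_line(x, y):
    """Sum of squared errors of an ordinary least-squares line."""
    coef = np.polyfit(x, y, 1)
    return np.sum((y - np.polyval(coef, x)) ** 2)

def sse_two_segments(x, y):
    """Best SSE over all candidate single joinpoints (two-segment fit)."""
    best = np.inf
    for k in range(2, len(x) - 2):  # require at least 2 points per segment
        best = min(best, sse_line(x[:k], y[:k]) + sse_line(x[k:], y[k:]))
    return best

def joinpoint_permutation_test(x, y, n_perm=2000, seed=0):
    """Permute residuals of the single-line (null) fit and count how often a
    two-segment model improves the fit as much as it does for the real data."""
    rng = np.random.default_rng(seed)
    fitted = np.polyval(np.polyfit(x, y, 1), x)
    resid = y - fitted
    observed_gain = sse_line(x, y) - sse_two_segments(x, y)
    exceed = 0
    for _ in range(n_perm):
        y_perm = fitted + rng.permutation(resid)
        exceed += (sse_line(x, y_perm) - sse_two_segments(x, y_perm)) >= observed_gain
    return exceed / n_perm  # permutation p-value for a slope change

# Hypothetical annual bite rates (per million residents), 2000-2016:
years = np.arange(2000, 2017, dtype=float)
rates = np.r_[np.linspace(1.2, 1.3, 8), np.linspace(1.25, 0.9, 9)]
print(joinpoint_permutation_test(years, rates))
```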


2019
Vol 3 (Supplement_1), pp. S556-S557
Author(s): Verena R Cimarolli, Amy Horowitz, Rachel Pruchno

Abstract
Long-distance caregiving (LDC) is a growing phenomenon and a common experience for caregivers of frail older adults. In fact, 11% of family caregivers in the US live more than two hours away from the care recipient (CR). Unfortunately, there is a paucity of research on the unique experiences of long-distance caregivers (LDCs) and on the impact of LDC on their mental health. This symposium presents findings from the NIA-funded Fordham Long-Distance Caregiving Study (R21AG050018), which analyzed data from 304 LDCs. The overall study goal was to better understand how LDCs deal with the structural constraint of distance, and to examine the consequences of LDC and the resources available to LDCs. First, Horowitz presents the study background and sample characteristics, and describes the unique experiences of LDCs. Next, Cimarolli applies the Sociocultural Stress Process Model to LDC; her study tested the impact of LDC on mental health and investigated resources (e.g., coping skills) that could mediate the association between caregiving stressors and mental health outcomes. The third paper (Falzarano) presents data on satisfaction with formal service providers for four subgroups of LDCs, defined by CR residence and dementia status. Finally, Jimenez focuses on the characteristics of LDCs' networks of other informal caregivers (ICs) providing assistance to the CR, and on the factors associated with receiving more help from other ICs. Dr. Pruchno, an expert in caregiving research, will discuss the study findings. The symposium provides insights into the unique experiences of LDCs, the impact of LDC on mental health, and resource use among LDCs.


2020
Vol 5 (6), pp. 47
Author(s): Mohammad Ilbeigi, Bhushan Pawar

The US Department of Transportation and the Federal Highway Administration require routine inspections to monitor bridge deterioration. Typically, bridge inspections are conducted every 24 months, a timeframe determined solely by engineering judgment. The objective of this study is to develop probabilistic models to forecast bridge deterioration and to determine optimal inspection intervals statistically. A two-dimensional Markov process model, whose state comprises the current condition of a bridge and the number of years the bridge has been in that condition, is created to predict future bridge conditions from historical data. Using the forecasting model, a statistical process is developed to determine the optimal inspection intervals. The proposed methodology is implemented on a dataset describing the deterioration conditions of more than 17,500 bridges in the state of New York from 1992 to 2018. The outcomes of the statistical analysis indicate that the typical 24-month inspection interval is considerably pessimistic, and unnecessary, for bridges currently in condition 5 or higher, but too optimistic, and therefore risky, for bridges currently in condition 4 or lower. These outcomes can help bridge owners and transportation agencies assign maintenance resources efficiently and redirect the millions of dollars currently allocated to unnecessary inspections toward much-needed infrastructure development projects.
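
A minimal sketch of the two-dimensional Markov idea, with invented transition probabilities rather than the study's fitted values: the state is the pair (condition rating, years in that condition), and the yearly chance of dropping a rating grows with time in state.

```python
import numpy as np

def deterioration_prob(condition, years_in_state):
    """Hypothetical hazard: probability of dropping one condition rating in
    the next year, increasing with time spent in the current condition.
    (The real study estimates these quantities from 17,500+ NY bridges.)"""
    base = {7: 0.02, 6: 0.03, 5: 0.05, 4: 0.08, 3: 0.12}
    return min(0.9, base.get(condition, 0.15) * (1 + 0.25 * years_in_state))

def forecast(condition, years_in_state, horizon, n_sims=10_000, seed=0):
    """Monte Carlo forecast of the condition distribution `horizon` years out."""
    rng = np.random.default_rng(seed)
    results = np.zeros(n_sims, dtype=int)
    for i in range(n_sims):
        c, t = condition, years_in_state
        for _ in range(horizon):
            if rng.random() < deterioration_prob(c, t):
                c, t = max(c - 1, 0), 0  # drop a rating, reset time-in-state
            else:
                t += 1
        results[i] = c
    return np.bincount(results, minlength=8) / n_sims  # P(condition) over 0..7

# E.g., a bridge now in condition 5 that has been there for 3 years:
print(forecast(condition=5, years_in_state=3, horizon=2))
```

Given such a forecast, an inspection interval can be read off as the longest horizon for which the probability of the bridge reaching an unacceptable condition stays below a chosen risk threshold.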


2013
Vol 45 (6), pp. 667-679
Author(s): James B. Elsner, Richard J. Murnane, Thomas H. Jagger, Holly M. Widen

2018
Author(s): Jayne Morriss, Carien M. van Reekum

Abstract
Extinction-resistant threat is considered to be a central feature of pathological anxiety, and reduced threat extinction is observed in individuals with high intolerance of uncertainty (IU). Here we sought to determine whether contingency instructions could alter the course of threat extinction for individuals high in IU. We tested this hypothesis in two identical experiments (Exp 1: n = 60; Exp 2: n = 82) in which we recorded electrodermal activity during threat acquisition under partial reinforcement and during extinction. Participants were split into groups based on extinction instructions (instructed, uninstructed) and IU score (low, high). All groups displayed larger skin conductance responses to learned threat versus safety cues during threat acquisition, indicative of threat conditioning. In both experiments, only the uninstructed high-IU groups displayed larger skin conductance responses to the learned threat versus safety cue during threat extinction. These findings suggest that uncertain threat during extinction maintains conditioned responding in individuals high in IU.


2019
Author(s): John Morris, Francois Windels, Pankaj Sah

Abstract
The partial reinforcement extinction effect (PREE) is a paradoxical learning phenomenon in which omission of reinforcement during acquisition results in more persistent conditioned responding during extinction. Here, we report a significant PREE, with an inverted-U, entropy-like profile across reinforcement probabilities, following tone/foot-shock fear conditioning in rats; the effect was associated with increased neural activity in the hippocampus and amygdala, as indexed by p-ERK and c-fos immunolabelling. In vivo electrophysiological recordings of local field potentials (LFPs) showed that 50% reinforcement was associated with increases in the frequency and power of tone-evoked theta oscillations in both the subiculum region of the hippocampus and the basolateral amygdala (BLA) during both acquisition (Day 1) and extinction (Day 2) sessions. Tone-evoked LFPs in 50%-reinforced animals also showed increases in coherence and in bidirectional Granger causality between hippocampus and amygdala. The results support a Bayesian interpretation of the PREE, in which the phenomenon is driven by increases in the entropy, or uncertainty, of stimulus contingencies, and they indicate a crucial role for the hippocampus in mediating this uncertainty-dependent effect.
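
The "inverted-U, entropy-like" profile can be made concrete: the Shannon entropy of a Bernoulli reinforcement schedule, H(p) = -p log2 p - (1-p) log2(1-p), peaks at p = 0.5, matching the observation that 50% reinforcement produced the strongest persistence and the largest neural effects. A minimal illustration:

```python
import math

def bernoulli_entropy(p):
    """Shannon entropy (bits) of a reinforcement schedule with US probability p."""
    if p in (0.0, 1.0):
        return 0.0  # no uncertainty under continuous (or zero) reinforcement
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

for p in (0.25, 0.50, 0.75, 1.00):
    print(f"p = {p:.2f}: H = {bernoulli_entropy(p):.3f} bits")  # peaks at p = 0.5
```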

