Generalization gradients for fear and disgust in human associative learning

2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Jinxia Wang ◽  
Xiaoying Sun ◽  
Jiachen Lu ◽  
Haoran Dou ◽  
Yi Lei

Abstract Previous research indicates that excessive fear is a critical feature of anxiety disorders; however, recent studies suggest that disgust may also contribute to the etiology and maintenance of some anxiety disorders. It remains unclear whether these two threat-related emotions differ in conditioning and generalization. Evaluating the distinct patterns of fear and disgust learning would facilitate a deeper understanding of how anxiety disorders develop. In this study, 32 college students completed threat conditioning tasks in which conditioned stimuli were paired with frightening or disgusting images. Fear and disgust trials were divided into two randomly ordered blocks to examine differences by recording subjective US expectancy ratings and eye movements during conditioning and generalization. During conditioning, US expectancy ratings for fear and disgust differed only for the CS-, which may indicate that fear is associated with inferior discrimination learning. During the generalization test, participants gave higher US expectancy ratings to fear-related GS1 (generalized stimulus) and GS2 than to disgust-related GS1 and GS2. Fear led to longer reaction times than disgust in both phases, and pupil size and fixation duration were larger for fear stimuli than for disgust stimuli, suggesting that disgust generalization has a steeper gradient than fear generalization. These findings provide preliminary evidence for differences between fear- and disgust-related stimuli in conditioning and generalization, and suggest insights into treatment for anxiety and other fear- or disgust-related disorders.

2021 ◽  
Author(s):  
Jinxia Wang ◽  
Xiaoying Sun ◽  
Jiachen Lu ◽  
Haoran Dou ◽  
Yi Lei

Abstract A major limitation of fear generalization research is the confounding unconditioned stimulus: it can often induce not only fear but also disgust. Differences between these two threat-related emotions during conditioning and generalization are currently unknown. To address this issue, 32 college students completed threat conditioning tasks in which conditioned stimuli were paired with fear- or disgust-evoking images. A block design divided fear and disgust into two randomly ordered blocks, enabling examination of differences between fear and disgust by recording subjective expectancy ratings and eye movements during generalization. The results revealed that participants reported larger subjective expectancies for fear-related GS1 (generalized stimuli) and GS2 than for disgust-related GS1 and GS2, and fear led to longer reaction times than disgust in both the conditioning and generalization phases. Pupil size and fixation duration were larger for fear stimuli than for disgust stimuli, suggesting that fear generalization has a steeper gradient than disgust generalization. Participants paid more attention to fear stimuli and were more inclined to avoid disgust stimuli. These findings provide new, albeit preliminary, evidence of the differences between fear and disgust stimuli in generalization, and may offer insight into the treatment of clinical anxiety and other fear- or disgust-related disorders.


2021 ◽  
Vol 13 (13) ◽  
pp. 7463
Author(s):  
Amin Azimian ◽  
Carlos Alberto Catalina Ortega ◽  
Juan Maria Espinosa ◽  
Miguel Ángel Mariscal ◽  
Susana García-Herrero

Roundabouts are considered one of the most efficient forms of intersection, substantially reducing the types of crashes that result in injury or loss of life. Nevertheless, they do not eliminate collision risks, especially given the large role human error plays in traffic crashes. In this study, we used a driving simulator and an eye tracker to investigate drivers' eye movements under cell-phone-induced distraction. A total of 45 drivers participated in two experiments conducted under distracted and non-distracted conditions. The results indicated that, under distracting conditions, drivers' fixation duration on roundabouts decreased significantly and pupil size increased significantly.


2018 ◽  
Author(s):  
Han Zhang ◽  
Kevin Miller ◽  
Xin Sun ◽  
Kai Schnabel Cortina

Video lectures are increasingly prevalent, but they present challenges to learners. Students' minds often wander, yet we know little about how mind-wandering affects attention during video lectures. This paper presents two studies that examined eye movement patterns of mind-wandering during video lectures. In the studies, mind-wandering reports were collected by either self-caught reports or thought probes. Results were similar across the studies: mind-wandering was associated with an increased allocation of fixations to the instructor's image. For fixations on the slides, the average duration increased but the dispersion decreased. Moreover, preliminary evidence suggested that fixation duration and dispersion can diminish soon after self-caught reports of mind-wandering. Overall, these findings help advance our understanding of how learners' attention is affected during mind-wandering and may facilitate efforts in objectively identifying mind-wandering. Future research is needed to determine if these findings can extend to other instructional formats.


2007 ◽  
Vol 60 (7) ◽  
pp. 924-935 ◽  
Author(s):  
Thomas Geyer ◽  
Adrian Von Mühlenen ◽  
Hermann J. Müller

Horowitz and Wolfe (1998, 2003) have challenged the view that serial visual search involves memory processes that keep track of already inspected locations. The present study used a search paradigm similar to Horowitz and Wolfe's (1998), comparing a standard static search condition with a dynamic condition in which display elements changed locations randomly every 111 ms. In addition to measuring search reaction times, observers’ eye movements were recorded. For target-present trials, the search rates were near-identical in the two search conditions, replicating Horowitz and Wolfe's findings. However, the number of fixations and saccade amplitude were larger in the static than in the dynamic condition, whereas fixation duration and the latency of the first saccade were longer in the dynamic condition. These results indicate that an active, memory-guided search strategy was adopted in the static condition, and a passive “sit-and-wait” strategy in the dynamic condition.


2016 ◽  
Vol 30 (3) ◽  
pp. 124-129 ◽  
Author(s):  
Anne Schienle ◽  
Sonja Übel ◽  
Andreas Gremsl ◽  
Florian Schöngassner ◽  
Christof Körner

Abstract. Disgust has been conceptualized as an emotion which promotes disease-avoidance behavior. The present eye-tracking experiment investigated whether disgust-evoking stimuli provoke specific eye movements and pupillary responses. Forty-three women viewed images depicting disgusting, fear-eliciting, and neutral items, as well as fractals, while their eye movements (fixation duration and frequency, blink rate, saccade amplitude) and pupil size were recorded. Disgust and fear ratings for the pictures as well as trait disgust and trait anxiety were assessed. The disgust pictures evoked the target emotion specifically and prompted characteristic scanning patterns. The participants made more and shorter fixations when looking at the repulsive pictures compared to all other categories. Moreover, state and trait disgust of the participants correlated negatively with their pupil size during disgust elicitation. Our data point to a disgust-specific visual exploration behavior, which possibly supports the fast identification of health-threatening aspects of a stimulus.


2019 ◽  
Vol 9 (1) ◽  
Author(s):  
Y. Stegmann ◽  
M. A. Schiele ◽  
D. Schümann ◽  
T. B. Lonsdorf ◽  
P. Zwanzger ◽  
...  

Abstract Previous research indicates that anxiety disorders are characterized by an overgeneralization of conditioned fear compared with healthy participants. Fear generalization is therefore considered a key mechanism in the development of anxiety disorders. However, systematic investigations of the variance in fear generalization are lacking. Accordingly, the current study aims at identifying distinctive phenotypes of fear generalization among healthy participants. To this end, 1175 participants completed a differential fear conditioning phase followed by a generalization test. To identify patterns of fear generalization, we used a k-means clustering algorithm based on individual arousal generalization gradients. Subsequently, we examined the reliability and validity of the clusters and phenotypical differences between subgroups on the basis of psychometric data and markers of fear expression. Cluster analysis reliably revealed five clusters that systematically differed in mean responses, differentiation between conditioned threat and safety, and linearity of the generalization gradients, though mean response levels accounted for most of the variance. Remarkably, the patterns of mean responses were already evident during fear acquisition and corresponded most closely to psychometric measures of anxiety traits. The identified clusters reliably described subgroups of healthy individuals with distinct response characteristics in a fear generalization test. Following a dimensional view of psychopathology, these clusters likely delineate risk factors for anxiety disorders. As crucial group characteristics were already evident during fear acquisition, our results emphasize the importance of average fear responses and differentiation between conditioned threat and safety as risk factors for anxiety disorders.
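The clustering step described in this abstract can be sketched as follows. This is a minimal illustration of k-means applied to per-participant generalization gradients, not the study's actual pipeline: the array shapes, stimulus continuum (CS+, GS1-GS5, CS-), rating values, and the choice of k = 2 here are all illustrative assumptions (the study itself used the full sample and found five clusters).

```python
import numpy as np

def kmeans(gradients, k, n_iter=100, seed=0):
    """Minimal k-means: group participants by their generalization gradients.

    gradients: (n_participants, n_stimuli) array of arousal ratings along
    the CS+ -> CS- generalization continuum.
    Returns (labels, centroids).
    """
    rng = np.random.default_rng(seed)
    # Initialize centroids from randomly chosen participants.
    centroids = gradients[rng.choice(len(gradients), size=k, replace=False)]
    for _ in range(n_iter):
        # Assign each gradient to its nearest centroid (Euclidean distance).
        dists = np.linalg.norm(gradients[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean gradient of its cluster.
        new_centroids = np.array([
            gradients[labels == j].mean(axis=0) if np.any(labels == j) else centroids[j]
            for j in range(k)
        ])
        if np.allclose(new_centroids, centroids):
            break
        centroids = new_centroids
    return labels, centroids

# Hypothetical data: 6 participants rating 7 stimuli (CS+, GS1..GS5, CS-),
# with two obvious response patterns (steep vs. flat gradients).
data = np.array([
    [9, 8, 6, 4, 3, 2, 1],
    [9, 7, 6, 4, 2, 2, 1],
    [8, 8, 7, 4, 3, 1, 1],
    [5, 5, 5, 5, 4, 5, 5],
    [5, 4, 5, 5, 5, 4, 5],
    [4, 5, 5, 4, 5, 5, 4],
], dtype=float)
labels, _ = kmeans(data, k=2)
# Steep responders (rows 0-2) and flat responders (rows 3-5)
# end up in different clusters.
```

In practice one would use a library implementation (e.g., scikit-learn's KMeans with multiple restarts) and validate cluster stability, as the abstract describes.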


2012 ◽  
Vol 17 (4) ◽  
pp. 257-265 ◽  
Author(s):  
Carmen Munk ◽  
Günter Daniel Rey ◽  
Anna Katharina Diergarten ◽  
Gerhild Nieding ◽  
Wolfgang Schneider ◽  
...  

An eye-tracker experiment investigated 4-, 6-, and 8-year-old children's cognitive processing of film cuts. Nine short film sequences with or without editing errors were presented to 79 children. Eye movements up to 400 ms after the targeted film cuts were measured and analyzed using a new calculation formula based on Manhattan metrics. No age effects were found for jump cuts (i.e., small movement discontinuities in a film). However, disturbances resulting from reversed-angle shots (i.e., a switch of the left-right position of actors in successive shots) led to increased reaction times in 6- and 8-year-old children, whereas children of all age groups had difficulties coping with narrative discontinuity (i.e., disruption of the canonical chronological sequence of film actions). Furthermore, 4-year-old children showed a greater number of overall eye movements than 6- and 8-year-old children. This indicates that some viewing skills develop between 4 and 6 years of age. The results of the study provide evidence of a crucial time span for the acquisition of television-based media literacy between 4 and 8 years.
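The abstract does not give the study's actual formula, but the Manhattan (city-block) metric it is based on is standard. A minimal sketch, with purely hypothetical pixel coordinates for a fixation before and after a film cut:

```python
def manhattan(p, q):
    """City-block distance between two gaze points (x, y) in pixels:
    sum of the absolute differences along each axis."""
    return abs(p[0] - q[0]) + abs(p[1] - q[1])

# Hypothetical fixation coordinates around a targeted film cut:
pre_cut = (512, 300)
post_cut = (640, 420)
print(manhattan(pre_cut, post_cut))  # |512-640| + |300-420| = 248
```

Unlike the Euclidean metric, the city-block distance weights horizontal and vertical gaze displacement additively, which makes it simple to aggregate over many fixations.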


2019 ◽  
Vol 19 (10) ◽  
pp. 252c
Author(s):  
Sebastiaan Mathôt ◽  
Adina Wagner ◽  
Michael Hanke

2018 ◽  
Vol 119 (1) ◽  
pp. 221-234 ◽  
Author(s):  
Yuhui Li ◽  
Yong Wang ◽  
He Cui

As a vital skill in an evolving world, interception of moving objects relies on accurate prediction of target motion. In natural circumstances, active gaze shifts often accompany hand movements when exploring targets of interest, but how eye and hand movements are coordinated during manual interception and their dependence on visual prediction remain unclear. Here, we trained gaze-unrestrained monkeys to manually intercept targets appearing at random locations and circularly moving with random speeds. We found that well-trained animals were able to intercept the targets with adequate compensation for both sensory transmission and motor delays. Before interception, the animals' gaze followed the targets with adequate compensation for the sensory delay, but not for extra target displacement during the eye movements. Both hand and eye movements were modulated by target kinematics, and their reaction times were correlated. Moreover, retinal errors and reaching errors were correlated across different stages of reach execution. Our results reveal eye-hand coordination during manual interception, yet the eye and hand movements may show different levels of prediction based on the task context.

NEW & NOTEWORTHY Here we studied the eye-hand coordination of monkeys during flexible manual interception of a moving target. Eye movements were untrained and not explicitly associated with reward. We found that the initial saccades toward the moving target adequately compensated for sensory transmission delays, but not for extra target displacement, whereas the reaching arm movements fully compensated for sensorimotor delays, suggesting that the mode of eye-hand coordination strongly depends on behavioral context.


Perception ◽  
1994 ◽  
Vol 23 (1) ◽  
pp. 45-64 ◽  
Author(s):  
Monica Biscaldi ◽  
Burkhart Fischer ◽  
Franz Aiple

Twenty-four children made saccades in five noncognitive tasks. Two standard tasks required saccades to a single target presented randomly 4 deg to the right or left of a fixation point. Three other tasks required sequential saccades from left to right. Seventy-five parameters of the eye-movement data were collected for each child. On the basis of their reading, writing, and other cognitive performances, twelve children were considered dyslexic and were divided into two groups (D1 and D2). Group statistical comparisons revealed significant differences between control and dyslexic subjects. In general, in the standard tasks the dyslexic subjects had poorer fixation quality, failed more often to hit the target at once, had smaller primary saccades, and had shorter reaction times to the left as compared with the control group. The control group and group D1 dyslexics showed an asymmetrical distribution of reaction times, but in opposite directions. Group D2 dyslexics made more anticipatory and express saccades, undershot the target more often than the control group, and almost never overshot it. In the sequential tasks, group D1 subjects made fewer and larger saccades in a shorter time, and group D2 subjects had shorter fixation durations than the subjects of the control group.

