How do regularities bias attention to visual targets?

2019 ◽  
Vol 19 (10) ◽  
pp. 26c
Author(s):  
Ru Qi Yu ◽  
Jiaying Zhao


Author(s):
Sander Martens ◽  
Addie Johnson ◽  
Martje Bolle ◽  
Jelmer Borst

The human mind is severely limited in processing concurrent information at a conscious level of awareness. These temporal restrictions are clearly reflected in the attentional blink (AB), a deficit in reporting the second of two targets when it occurs 200–500 ms after the first. However, we recently reported that some individuals do not show a visual AB, and presented psychophysiological evidence that target processing differs between “blinkers” and “nonblinkers”. Here, we present evidence that visual nonblinkers do show an auditory AB, which suggests that a major source of attentional restriction as reflected in the AB is likely to be modality-specific. In Experiment 3, we show that when the difficulty in identifying visual targets is increased, nonblinkers continue to show little or no visual AB, suggesting that the presence of an AB in the auditory but not in the visual modality is not due to a difference in task difficulty.
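To make the paradigm concrete, here is a minimal sketch of a rapid serial visual presentation (RSVP) trial of the kind used to measure the attentional blink: a fast stream of distractors containing two targets (T1, T2) whose separation, the "lag", is varied. The 100 ms item duration, letter/digit stimulus sets, and stream length are illustrative assumptions, not parameters taken from the study.

```python
# Sketch of one RSVP attentional-blink trial (illustrative assumptions throughout).
import random

SOA_MS = 100  # assumed stimulus onset asynchrony: one item every 100 ms

def make_trial(lag, stream_len=20, t1_pos=5):
    """Build one RSVP stream: letters as distractors, two digits as targets."""
    stream = [random.choice("BCDFGHJKLMNPQRSTVWXZ") for _ in range(stream_len)]
    t1, t2 = random.sample("23456789", 2)
    stream[t1_pos] = t1
    stream[t1_pos + lag] = t2
    return [(i * SOA_MS, item) for i, item in enumerate(stream)]

# Lag 3 puts T2 300 ms after T1, inside the classic 200-500 ms blink window.
for onset, item in make_trial(lag=3):
    print(f"{onset:4d} ms  {item}")
```

On a "blinker" account, report accuracy for T2 drops at short lags like this one and recovers by roughly lag 5 and beyond; nonblinkers show a flat accuracy curve across lags.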


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Charlotte Canteloup ◽  
Mabia B. Cera ◽  
Brendan J. Barrett ◽  
Erica van de Waal

Social learning, or learning from others, is the basis for behavioural traditions. Different social learning strategies (SLS), in which individuals preferentially learn behaviours based on their content or on who demonstrates them, may increase an individual's fitness and generate behavioural traditions. While SLS have mostly been studied in isolation, their interaction, and the interplay between individual and social learning, is less well understood. We performed a field-based open diffusion experiment in a wild primate. We provided two groups of vervet monkeys with a novel food, unshelled peanuts, and documented how three different peanut-opening techniques spread within the groups. We analysed the data using hierarchical Bayesian dynamic learning models that explore the integration of multiple SLS with individual learning. We (1) report evidence of social learning as compared with strictly individual learning, (2) show that vervets preferentially socially learn the technique that yields the highest observed payoff, and (3) show that they also bias attention toward individuals of higher rank. Behavioural preferences can thus arise when individuals integrate social information about the efficiency of a behaviour alongside cues related to the rank of a demonstrator. When these preferences converge on the same behaviour within a group, they may result in stable behavioural traditions.
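As a rough illustration of the class of model described, the sketch below combines individual attractions (updated from personal payoffs) with a socially biased cue that weights observed techniques by their payoff and by demonstrator rank. The parameter names and functional forms are simplified assumptions for illustration, not the authors' hierarchical Bayesian model.

```python
# Simplified dynamic learning model mixing individual and social information.
# All parameters and functional forms are hypothetical illustrations.
import numpy as np

def choice_probs(attractions, freq_obs, payoff_obs, rank_obs,
                 lam=2.0, gamma=0.5, beta_pay=1.0, beta_rank=1.0):
    """Mix individual attractions with a payoff- and rank-biased social cue."""
    p_ind = np.exp(lam * attractions)
    p_ind /= p_ind.sum()
    # Social weight per technique: observation counts scaled by observed payoff
    # and demonstrator rank (assumed multiplicative form).
    social = freq_obs * (payoff_obs ** beta_pay) * (rank_obs ** beta_rank)
    p_soc = social / social.sum() if social.sum() > 0 else np.ones_like(social) / len(social)
    return (1 - gamma) * p_ind + gamma * p_soc  # gamma = reliance on social info

def update_attractions(attractions, chosen, payoff, phi=0.3):
    """Experience-weighted update: decay old attractions, add the new payoff."""
    attractions = (1 - phi) * attractions
    attractions[chosen] += phi * payoff
    return attractions

A = np.zeros(3)  # attractions for three peanut-opening techniques
probs = choice_probs(A, freq_obs=np.array([5.0, 2.0, 1.0]),
                     payoff_obs=np.array([1.0, 0.5, 0.8]),
                     rank_obs=np.array([0.9, 0.3, 0.5]))
print("choice probabilities:", probs)
A = update_attractions(A, chosen=int(np.argmax(probs)), payoff=1.0)
print("updated attractions:", A)
```

With payoff and rank biases both positive, the socially favoured technique is the one demonstrated often, profitably, and by high-ranking individuals, which is how such preferences can converge into a group-level tradition.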


2010 ◽  
Vol 6 (5) ◽  
pp. 639-645 ◽  
Author(s):  
Joshua M. Carlson ◽  
Karen S. Reinke ◽  
Pamela J. LaMontagne ◽  
Reza Habib

1991 ◽  
Vol 31 (4) ◽  
pp. 693-715 ◽  
Author(s):  
James W. Gnadt ◽  
R. Martyn Bracewell ◽  
Richard A. Andersen

2021 ◽  
Vol 49 (12) ◽  
pp. 1-11
Author(s):  
Cheng Kang ◽  
Nan Ye ◽  
Fangwen Zhang ◽  
Yanwen Wu ◽  
Guichun Jin ◽  
...  

Although studies have investigated how the emotionality of primes influences the cross-modal affective priming effect, it is unclear whether the effect is driven by the arousal or by the valence of the primes. We explored how prime valence and arousal influence the cross-modal affective priming effect. In Experiment 1 we manipulated the valence of primes (positive vs. negative) matched for arousal. In Experiments 2 and 3 we manipulated the arousal of primes under positive and negative valence, respectively. Affective words served as auditory primes and affective faces as visual targets in a priming task. The results suggest that prime valence, but not prime arousal, modulated the cross-modal affective priming effect: the effect occurred only with positive primes, whereas negative primes produced no priming. In addition, for positive but not negative primes, higher prime arousal facilitated the processing of subsequent targets. Our findings bear on how affective information interacts across modalities.
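A priming effect of this kind is conventionally quantified as the mean response-time difference between incongruent and congruent prime-target pairs, computed per prime condition. The sketch below shows that analysis on toy data; the trial structure and condition labels are illustrative assumptions, not taken from the study.

```python
# Quantifying a cross-modal affective priming effect from (toy) trial data:
# priming effect = mean RT(incongruent) - mean RT(congruent), per prime valence.
from statistics import mean

trials = [
    # (prime_valence, prime_arousal, congruent, rt_ms) -- hypothetical data
    ("positive", "high", True, 512), ("positive", "high", False, 548),
    ("positive", "low",  True, 530), ("positive", "low",  False, 551),
    ("negative", "high", True, 540), ("negative", "high", False, 542),
    ("negative", "low",  True, 538), ("negative", "low",  False, 541),
]

def priming_effect(trials, valence):
    cong = [rt for v, a, c, rt in trials if v == valence and c]
    incong = [rt for v, a, c, rt in trials if v == valence and not c]
    return mean(incong) - mean(cong)  # positive = facilitation by congruent primes

for valence in ("positive", "negative"):
    print(valence, priming_effect(trials, valence), "ms")
```

In the pattern the abstract reports, this difference would be reliably positive for positive primes and near zero for negative primes.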


2018 ◽  
Vol 27 (2) ◽  
pp. 349-360 ◽  
Author(s):  
Bahram Kheradmand ◽  
Julian Cassano ◽  
Selena Gray ◽  
James C. Nieh

1994 ◽  
Vol 71 (3) ◽  
pp. 1250-1253 ◽  
Author(s):  
G. S. Russo ◽  
C. J. Bruce

1. We studied neuronal activity in the monkey's frontal eye field (FEF) in conjunction with saccades directed to auditory targets. 2. All FEF neurons with movement activity preceding saccades to visual targets also were active preceding saccades to auditory targets, even when such saccades were made in the dark. Movement cells generally had comparable bursts for aurally and visually guided saccades; visuomovement cells often had weaker bursts in conjunction with aurally guided saccades. 3. When these cells were tested from different initial fixation directions, movement fields associated with aurally guided saccades, like fields mapped with visual targets, were a function of saccade dimensions, and not the speaker's spatial location. Thus, even though sound location cues are chiefly craniotopic, the crucial factor for an FEF discharge before aurally guided saccades was the location of the auditory target relative to the current direction of gaze. 4. Intracortical microstimulation at the sites of these cells evoked constant-vector saccades, and not goal-directed saccades. The direction and size of electrically elicited saccades generally matched the cell's movement field for aurally guided saccades. 5. Thus FEF activity appears to have a role in aurally guided as well as visually guided saccades. Moreover, visual and auditory target representations, although initially obtained in different coordinate systems, appear to converge to a common movement vector representation at the FEF stage of saccadic processing that is appropriate for transmittal to saccade-related burst neurons in the superior colliculus and pons.
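The coordinate transformation implied in point 3 can be made concrete: an auditory target is initially localized in head-centered (craniotopic) coordinates, but FEF movement activity encodes the saccade vector relative to current gaze. The sketch below illustrates this with simple 2-D angle tuples, an intentional simplification of the real 3-D geometry.

```python
# Head-centered target location minus current gaze direction gives the
# gaze-relative saccade vector that FEF movement fields reflect.
# Angles are (horizontal, vertical) degrees; a deliberate simplification.

def motor_vector(target_head, gaze_head):
    """Saccade vector = head-centered target minus current gaze direction."""
    return (target_head[0] - gaze_head[0], target_head[1] - gaze_head[1])

# A speaker fixed 10 deg right of the head yields different saccade vectors
# depending on where the eyes start, even though the craniotopic cue is constant.
speaker = (10.0, 0.0)
for gaze in [(-10.0, 0.0), (0.0, 0.0), (5.0, 0.0)]:
    print(f"gaze {gaze} -> saccade vector {motor_vector(speaker, gaze)}")
```

This is why a cell's auditory movement field tracks saccade dimensions rather than speaker location: the same speaker position maps onto different movement vectors as fixation changes.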

