When Sharing Attention Does Not Help: Reduced Attention Effect in Pairs

2020 ◽  
Author(s):  
Juan Camilo Avendano Diaz ◽  
Ziyi Wang ◽  
Xun He

Research has generally suggested that cognitive performance is enhanced when people perform jointly with others in social settings. However, interpersonal influences on visuospatial attention are not well understood. Using a newly developed dual attention paradigm, we investigated how paying attention to the same spatial location as another person affects one's attention performance. Participant pairs independently performed go/no-go tasks on visual targets while sustaining attention at the same or at different locations, and showed reduced attention effects when sharing spatial attention (Experiment 1, N = 40). This dual attention effect relied on the presence of another individual performing a similar task: it was reversed when participants performed the same task in isolation (Experiment 2, N = 38), and it persisted under increased perceptual load (Experiment 3, N = 45). These data show a diminishing effect of shared attention, likely driven by stronger response inhibition or increased mentalizing/monitoring when people attend together.
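
A minimal sketch of how the per-participant attention effect in such a paradigm could be computed (the RT cost for targets at the unattended versus attended location) and compared between the shared- and different-location conditions. The data file and all column names are hypothetical, not the authors' materials.

    import pandas as pd
    from scipy import stats

    # Hypothetical trial-level data: participant, condition ("shared"/"different"),
    # trial_type ("go"/"no-go"), target_loc ("attended"/"unattended"), correct, rt
    go = pd.read_csv("dual_attention_trials.csv").query("trial_type == 'go' and correct == 1")

    # Mean RT per participant x condition x target location
    rt = (go.groupby(["participant", "condition", "target_loc"])["rt"]
            .mean().unstack("target_loc"))
    # Attention effect = RT benefit for targets at the attended location (ms)
    effect = (rt["unattended"] - rt["attended"]).unstack("condition")

    t, p = stats.ttest_rel(effect["shared"], effect["different"])
    print(f"attention effect: shared = {effect['shared'].mean():.0f} ms, "
          f"different = {effect['different'].mean():.0f} ms; t = {t:.2f}, p = {p:.3f}")

A smaller effect in the shared condition would correspond to the reduced attention effect reported above.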

2016 ◽  
Vol 30 (3) ◽  
pp. 102-113 ◽  
Author(s):  
Chun-Hao Wang ◽  
Chun-Ming Shih ◽  
Chia-Liang Tsai

Abstract. This study aimed to assess whether brain potentials influence the relationship between aerobic fitness and cognition. Behavioral and electroencephalographic (EEG) data were collected from 48 young adults performing a Posner task. Higher aerobic fitness was related to faster reaction times (RTs), along with greater P3 amplitude and shorter P3 latency, in the valid trials after controlling for age and body mass index. Moreover, RTs were selectively related to P3 amplitude rather than P3 latency. Specifically, a bootstrap-based mediation model indicated that P3 amplitude mediates the relationship between fitness level and attention performance. Possible explanations for the relationships among aerobic fitness, cognitive performance, and brain potentials are discussed.
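
A sketch of a percentile-bootstrap test of the indirect effect (fitness → P3 amplitude → RT) with age and BMI as covariates, in the spirit of the bootstrap-based mediation model named above; the data file, variable names, and iteration count are assumptions, not the authors' pipeline.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    # Hypothetical columns: fitness, p3_amp, rt, age, bmi
    df = pd.read_csv("fitness_p3.csv")
    rng = np.random.default_rng(0)

    def indirect_effect(d):
        a = (sm.OLS(d["p3_amp"], sm.add_constant(d[["fitness", "age", "bmi"]]))
               .fit().params["fitness"])          # path a: fitness -> P3 amplitude
        b = (sm.OLS(d["rt"], sm.add_constant(d[["p3_amp", "fitness", "age", "bmi"]]))
               .fit().params["p3_amp"])           # path b: P3 -> RT, fitness held constant
        return a * b

    boot = np.array([indirect_effect(df.sample(len(df), replace=True, random_state=rng))
                     for _ in range(5000)])
    lo, hi = np.percentile(boot, [2.5, 97.5])
    print(f"indirect effect = {indirect_effect(df):.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
    # Mediation is supported when the bootstrap CI excludes zero.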


SLEEP ◽  
2021 ◽  
Vol 44 (Supplement_2) ◽  
pp. A117-A117
Author(s):  
Janna Mantua ◽  
Carolyn Mickelson ◽  
Jacob Naylor ◽  
Bradley Ritland ◽  
Alexxa Bessey ◽  
...  

Abstract Introduction Sleep loss that is inherent to military operations can lead to cognitive errors and potential mission failure. Single nucleotide polymorphism (SNP) allele variations of several genes (COMT, ADORA2A, TNFa, CLOCK, DAT1) have been linked with inter-individual cognitive resilience to sleep loss through various mechanisms. U.S. Army Soldiers with resilience-related alleles may be better suited to perform cognitively arduous duties under conditions of sleep loss than those without these alleles. However, military-wide genetic screening is costly, arduous, and infeasible. This study tested whether a brief survey of subjective resilience to sleep loss (1) can demarcate Soldiers with and without resilience-related alleles and, if so, (2) can predict cognitive performance under conditions of sleep loss. Methods Six SNPs from the aforementioned genes were sequenced from 75 male U.S. Army special operations Soldiers (age 25.7 ± 4.1). Psychomotor vigilance, response inhibition, and decision-making were tested after a night of mission-driven total sleep deprivation. The Iowa Resilience to Sleeplessness Test (iREST) Cognitive Subscale, which measures subjective cognitive resilience to sleep loss, was administered after a week of recovery sleep. A receiver operating characteristic (ROC) curve was used to determine whether the iREST Cognitive Subscale can discriminate between gene carriers, and a cutoff score was determined. Cognitive performance after sleep deprivation was compared between those below and above the cutoff score using t-tests or Mann-Whitney U tests. Results The iREST discriminated between allele variations for COMT (ROC = .65, SE = .07, p = .03), with an optimal cutoff score of 3.03 out of 5 (90% sensitivity, 51.4% specificity). Soldiers below the cutoff score had significantly slower psychomotor vigilance reaction times (t = -2.39, p = .02), more response inhibition errors of commission (U = 155.00, W = 246.00, p = .04), and slower decision-making reaction times (t = 2.13, p = .04) than Soldiers above the cutoff score. Conclusion The iREST Cognitive Subscale can discriminate between those with and without specific vulnerability/resilience-related genotypes. If these findings are replicated, the iREST Cognitive Subscale could be used to help military leaders make decisions about proper personnel placement when sleep loss is unavoidable. This would likely result in increased safety and improved performance during military missions. Support (if any) Support for this study came from the Military Operational Medicine Research Program of the United States Army Medical Research and Development Command.
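
A sketch of the ROC step described above: discriminating allele carriers from a 1-5 survey score and choosing a cutoff. Youden's J is used here as the cutoff criterion since the abstract does not state which criterion was applied; the data file and column names are hypothetical.

    import numpy as np
    import pandas as pd
    from sklearn.metrics import roc_auc_score, roc_curve

    # Hypothetical columns: irest_score (1-5), comt_resilient (1 = resilience allele)
    df = pd.read_csv("irest_genotypes.csv")
    y, score = df["comt_resilient"], df["irest_score"]

    print(f"AUC = {roc_auc_score(y, score):.2f}")
    fpr, tpr, thresholds = roc_curve(y, score)
    best = np.argmax(tpr - fpr)          # Youden's J = sensitivity + specificity - 1
    print(f"cutoff = {thresholds[best]:.2f}: sensitivity = {tpr[best]:.0%}, "
          f"specificity = {1 - fpr[best]:.0%}")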


2021 ◽  
Vol 49 (9) ◽  
pp. 1-6
Author(s):  
Jiyou Gu ◽  
Huiqin Dong

Using a spatial-cueing paradigm in which trait words served as visual cues and gender words served as auditory targets, we examined whether cross-modal spatial attention is influenced by gender stereotypes. Results of an experiment with 24 participants indicate that they oriented to targets more readily in the valid-cue condition (i.e., when the cues appeared at the same position as the targets), regardless of the modality of cues and targets, consistent with the cross-modal attention effect found in previous studies. Participants oriented more readily to targets that were stereotype-consistent with the cues only when the cues were valid, which shows that stereotype-consistent information facilitated visual–auditory cross-modal spatial attention. These results suggest that cognitive schemas, such as gender stereotypes, affect cross-modal spatial attention.
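
The key result is an interaction between cue validity and stereotype consistency. A sketch of how that interaction could be tested on per-participant cell means with a mixed-effects model; the data file and column names are assumptions.

    import pandas as pd
    import statsmodels.formula.api as smf

    # Hypothetical columns: participant, validity ("valid"/"invalid"),
    # consistency ("consistent"/"inconsistent"), rt
    cells = (pd.read_csv("cueing_trials.csv")
               .groupby(["participant", "validity", "consistency"], as_index=False)["rt"]
               .mean())

    # Random intercept per participant; the validity:consistency interaction term
    # tests whether the consistency benefit is confined to valid-cue trials
    model = smf.mixedlm("rt ~ validity * consistency", cells,
                        groups=cells["participant"]).fit()
    print(model.summary())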


eLife ◽  
2016 ◽  
Vol 5 ◽  
Author(s):  
Tao Yao ◽  
Madhura Ketkar ◽  
Stefan Treue ◽  
B Suresh Krishna

Maintaining attention at a task-relevant spatial location while making eye movements necessitates a rapid, saccade-synchronized shift of attentional modulation from the neuronal population representing the task-relevant location before the saccade to the one representing it after the saccade. Precisely when spatial attention becomes fully allocated to the task-relevant location after the saccade remains unclear. Using a fine-grained temporal analysis of human peri-saccadic detection performance in an attention task, we show that spatial attention is fully available at the task-relevant location within 30 milliseconds after the saccade. Subjects tracked the attentional target veridically throughout our task: they almost never responded to non-target stimuli. Spatial attention and saccadic processing therefore coordinate well to ensure that relevant locations are attentionally enhanced soon after the beginning of each eye fixation.
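
A sketch of the kind of fine-grained temporal analysis described: binning detection performance by probe time relative to saccade offset and finding when it reaches its late-fixation level. The data file, column names, 10-ms bin width, and 95% criterion are all assumptions for illustration.

    import numpy as np
    import pandas as pd

    # Hypothetical columns: probe_time_ms (probe onset relative to saccade
    # offset, post-saccadic trials only), hit (1 = target detected)
    trials = pd.read_csv("perisaccadic_trials.csv")
    trials["bin"] = pd.cut(trials["probe_time_ms"], np.arange(0, 201, 10))
    hit_rate = trials.groupby("bin", observed=True)["hit"].mean()

    baseline = trials.loc[trials["probe_time_ms"] > 150, "hit"].mean()  # late-fixation level
    recovered = hit_rate[hit_rate >= 0.95 * baseline]
    print(f"hit rate first reaches 95% of baseline in bin {recovered.index[0]} ms")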


2010 ◽  
Vol 22 (2) ◽  
pp. 347-361 ◽  
Author(s):  
David V. Smith ◽  
Ben Davis ◽  
Kathy Niu ◽  
Eric W. Healy ◽  
Leonardo Bonilha ◽  
...  

Neuroimaging studies suggest that a fronto-parietal network is activated when we expect visual information to appear at a specific spatial location. Here we examined whether a similar network is involved for auditory stimuli. We used sparse fMRI to infer brain activation while participants performed analogous visual and auditory tasks. On some trials, participants were asked to discriminate the elevation of a peripheral target. On other trials, participants made a nonspatial judgment. We contrasted trials where the participants expected a peripheral spatial target to those where they were cued to expect a central target. Crucially, our statistical analyses were based on trials where stimuli were anticipated but not presented, allowing us to directly infer perceptual orienting independent of perceptual processing. This is the first neuroimaging study to use an orthogonal-cuing paradigm (with cues predicting azimuth and responses involving elevation discrimination). This aspect of our paradigm is important, as behavioral cueing effects in audition are classically only observed when participants are asked to make spatial judgments. We observed similar fronto-parietal activation for both vision and audition. In a second experiment that controlled for stimulus properties and task difficulty, participants made spatial and temporal discriminations about musical instruments. We found that the pattern of brain activation for spatial selection of auditory stimuli was remarkably similar to what we found in our first experiment. Collectively, these results suggest that the neural mechanisms supporting spatial attention are largely similar across both visual and auditory modalities.


2016 ◽  
Vol 127 (3) ◽  
pp. e59
Author(s):  
E. Iacovelli ◽  
S. Pro ◽  
S. Tarantino ◽  
C. Casciani ◽  
F. Vigevano ◽  
...  

2017 ◽  
Vol 118 (3) ◽  
pp. 1903-1913 ◽  
Author(s):  
Amy M. Ni ◽  
John H. R. Maunsell

Spatial attention improves perception of attended parts of a scene, a behavioral enhancement accompanied by modulations of neuronal firing rates. These modulations vary in size across neurons in the same brain area. Models of normalization explain much of this variance in attention modulation with differences in tuned normalization across neurons (Lee J, Maunsell JHR. PLoS One 4: e4651, 2009; Ni AM, Ray S, Maunsell JHR. Neuron 73: 803–813, 2012). However, recent studies suggest that normalization tuning varies with spatial location both across and within neurons (Ruff DA, Alberts JJ, Cohen MR. J Neurophysiol 116: 1375–1386, 2016; Verhoef BE, Maunsell JHR. eLife 5: e17256, 2016). Here we show directly that attention modulation and normalization tuning do in fact covary within individual neurons, in addition to across neurons as previously demonstrated. We recorded the activity of isolated neurons in the middle temporal area of two rhesus monkeys as they performed a change-detection task that controlled the focus of spatial attention. Using the same two drifting Gabor stimuli and the same two receptive field locations for each neuron, we found that switching which stimulus was presented at which location affected both attention modulation and normalization in a correlated way within neurons. We present an equal-maximum-suppression spatially tuned normalization model that explains this covariance both across and within neurons: each stimulus generates equally strong suppression of its own excitatory drive, but its suppression of distant stimuli is typically less. This new model specifies how the tuned normalization associated with each stimulus location varies across space both within and across neurons, changing our understanding of the normalization mechanism and how attention modulations depend on this mechanism. NEW & NOTEWORTHY Tuned normalization studies have demonstrated that the variance in attention modulation size seen across neurons from the same cortical area can be largely explained by between-neuron differences in normalization strength. Here we demonstrate that attention modulation size varies within neurons as well and that this variance is largely explained by within-neuron differences in normalization strength. We provide a new spatially tuned normalization model that explains this broad range of observed normalization and attention effects.
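
A schematic sketch of why attention modulation covaries with normalization tuning under divisive normalization. This is a generic tuned-normalization toy in the spirit of the framework cited above, not the authors' fitted equal-maximum-suppression model; all parameter values are illustrative assumptions.

    def rate(beta_pref=1.0, beta_null=1.0, e_pref=1.0, e_null=0.1,
             a_pref=1.0, a_null=1.0, sigma=0.1):
        """Divisive-normalization response to a preferred + null stimulus pair.

        beta_*: attentional gain on each stimulus (1 = unattended),
        e_*: excitatory drive, a_*: suppressive (normalization) strength.
        """
        excitation = beta_pref * e_pref + beta_null * e_null
        suppression = beta_pref * a_pref + beta_null * a_null + sigma
        return excitation / suppression

    # The more strongly the null stimulus drives normalization (a_null),
    # the larger the modulation when attention shifts between the two stimuli.
    for a_null in (0.1, 0.5, 1.0):
        r_pref = rate(beta_pref=2.0, a_null=a_null)   # attend preferred stimulus
        r_null = rate(beta_null=2.0, a_null=a_null)   # attend null stimulus
        mi = (r_pref - r_null) / (r_pref + r_null)
        print(f"a_null = {a_null:.1f} -> attention modulation index = {mi:+.2f}")

In this toy, weakly suppressing stimuli (small a_null) produce small attention modulation and strongly suppressing stimuli produce large modulation, which is the covariation between normalization strength and attention modulation that the study measures within and across neurons.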


2019 ◽  
Author(s):  
Daria Kvasova ◽  
Salvador Soto-Faraco

Abstract. Recent studies show that cross-modal semantic congruence plays a role in spatial attention orienting and visual search. However, the extent to which these cross-modal semantic relationships attract attention automatically is still unclear, and the outcomes of different studies have been inconsistent. Variations in the task relevance of the cross-modal stimuli (from explicitly needed to completely irrelevant) and in the amount of perceptual load may account for the mixed results of previous experiments. In the present study, we addressed the effects of audio-visual semantic congruence on visuo-spatial attention across variations in task relevance and perceptual load. We used visual search amongst images of common objects paired with characteristic object sounds (e.g., a guitar image and a chord sound). We found that audio-visual semantic congruence speeded visual search times when the cross-modal objects were task relevant, or when they were irrelevant but presented under low perceptual load. In contrast, when perceptual load was high, sounds failed to attract attention towards the congruent visual images. These results lead us to conclude that object-based cross-modal congruence does not attract attention automatically and requires some top-down processing.
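
A sketch of the load-by-congruence comparison implied above: the search-time benefit of a semantically congruent versus incongruent sound, computed separately under low and high perceptual load. The data file and column names are hypothetical.

    import pandas as pd
    from scipy import stats

    # Hypothetical columns: participant, load ("low"/"high"),
    # congruence ("congruent"/"incongruent"), rt
    means = (pd.read_csv("av_search_trials.csv")
               .groupby(["participant", "load", "congruence"])["rt"]
               .mean().unstack("congruence"))

    for load in ("low", "high"):
        d = means.xs(load, level="load")
        t, p = stats.ttest_rel(d["congruent"], d["incongruent"])
        print(f"{load} load: congruence benefit = "
              f"{(d['incongruent'] - d['congruent']).mean():.0f} ms, t = {t:.2f}, p = {p:.3f}")

A reliable benefit under low but not high load would match the pattern reported above.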

