Unravelling how low dominance in faces biases non-spatial attention

2019
Vol 9 (1)
Author(s):
Ashton Roberts
Romina Palermo
Troy A. W. Visser

According to the Dual Model of Social Hierarchy, one pathway for attaining social status is through dominance (coercion and intimidation). High dominance stimuli are known to more readily attract eye gaze and social attention. However, when there is competition for non-spatial attentional resources, low dominance stimuli show an advantage. This low dominance bias was hypothesised to occur due to either counter-stereotypicality or attention competition. Here, these two hypotheses were examined across two experiments using modified versions of the attentional blink paradigm, which measures non-spatial attention, and manipulations of facial dominance in both males and females. The results support the attention competition theory, suggesting that low dominance stimuli have a consistently strong ability to compete for attentional resources. Unexpectedly, high dominance stimuli fluctuate between having a strong and weak ability to compete for the same resources. The results challenge the current understanding of how humans interact with status.

2009
Author(s):
Khara Croswaite
Mei-Ching Lien
Eric Ruthruff
Min-Ju Liao

2014
Vol 112 (6)
pp. 1307-1316
Author(s):
Isabel Dombrowe
Claus C. Hilgetag

The voluntary, top-down allocation of visual spatial attention has been linked to changes in the alpha-band of the electroencephalogram (EEG) signal measured over occipital and parietal lobes. In the present study, we investigated how occipitoparietal alpha-band activity changes when people allocate their attentional resources in a graded fashion across the visual field. We asked participants to either completely shift their attention into one hemifield, to balance their attention equally across the entire visual field, or to attribute more attention to one-half of the visual field than to the other. As expected, we found that alpha-band amplitudes decreased more strongly contralaterally than ipsilaterally to the attended side when attention was shifted completely. Alpha-band amplitudes decreased bilaterally when attention was balanced equally across the visual field. However, when participants allocated more attentional resources to one-half of the visual field, this was not reflected in the alpha-band amplitudes, which simply decreased bilaterally. Instead, we found that the performance of the participants was more strongly reflected in the coherence between frontal and occipitoparietal brain regions. We conclude that low alpha-band amplitudes seem to be necessary for stimulus detection. Furthermore, complete shifts of attention are directly reflected in the lateralization of alpha-band amplitudes. In the present study, a gradual allocation of visual attention across the visual field was only indirectly reflected in the alpha-band activity over occipital and parietal cortices.


2012
Vol 25 (0)
pp. 168
Author(s):
Ruth Adam
Uta Noppeney

Capacity limitations of attentional resources allow only a fraction of sensory inputs to enter our awareness. Most prominently, in the attentional blink, the observer fails to detect the second of two rapidly successive targets presented in a sequence of distractor items. This study investigated whether phonological (in)congruency between visual target letters and spoken letters is modulated by subjects’ awareness. In a visual attentional blink paradigm, subjects were presented with two visual targets (buildings and capital Latin letters, respectively) in a sequence of rapidly presented distractor items. A beep was always presented with T1. We manipulated the presence/absence and phonological congruency of the spoken letter that was presented concurrently with T2. Subjects reported the identity of T1 and T2 and the visibility of T2. Behaviorally, subjects correctly identified T2 when it was reported as visible or unsure, whereas performance was below chance level when T2 was reported as invisible. At the neural level, the anterior cingulate was activated for invisible > unsure > visible T2. In contrast, visible relative to invisible trials increased activation in the bilateral cerebellum, pre-/post-central gyri extending into the parietal sulci, and bilateral inferior occipital gyri. Incongruency effects were observed in the left inferior frontal gyrus, caudate nucleus, and insula only for visible stimuli. In conclusion, phonological incongruency is processed differently when subjects are aware of the visual stimulus. This indicates that multisensory integration is not automatic but depends on subjects’ cognitive state.


2014
Vol 13 (3)
pp. 437-443
Author(s):
Benjamin Balas
Jennifer L. Momsen

Plants, to many, are simply not as interesting as animals. Students typically prefer to study animals rather than plants and recall plants more poorly, and plants are underrepresented in the classroom. The observed paucity of interest in plants has been described as plant blindness, a term that is meant to encapsulate both the tendency to neglect plants in the environment and the lack of appreciation for plants’ functional roles. While the term plant blindness suggests a perceptual or attentional component to plant neglect, few studies have examined whether there are real differences in how plants and animals are perceived. Here, we use an established paradigm in visual cognition, the “attentional blink,” to compare the extent to which images of plants and animals capture attentional resources. We find that participants are better able to detect animals than plants in rapid image sequences and that visual attention has a different refractory period when a plant has been detected. These results suggest there are fundamental differences in how the visual system processes plants that may contribute to plant blindness. We discuss how perceptual and physiological constraints on visual processing may suggest useful strategies for characterizing and overcoming zoocentrism.


eLife
2018
Vol 7
Author(s):
Martin Szinte
Donatas Jonikaitis
Dragan Rangelov
Heiner Deubel

Each saccade shifts the projections of the visual scene on the retina. It has been proposed that the receptive fields of neurons in oculomotor areas are predictively remapped to account for these shifts. While remapping of the whole visual scene seems prohibitively complex, selection by attention may limit these processes to a subset of attended locations. Because attentional selection consumes time, remapping of attended locations should evolve in time, too. In our study, we cued a spatial location by presenting an attention-capturing cue at different times before a saccade and constructed maps of attentional allocation across the visual field. We observed no remapping of attention when the cue appeared shortly before the saccade. In contrast, when the cue appeared sufficiently early before the saccade, attentional resources were reallocated precisely to the remapped location. Our results show that pre-saccadic remapping takes time to develop, suggesting that it relies on the spatial and temporal dynamics of spatial attention.


2020
Vol 33 (4-5)
pp. 383-416
Author(s):
Arianna Zuanazzi
Uta Noppeney

Attention (i.e., task relevance) and expectation (i.e., signal probability) are two critical top-down mechanisms guiding perceptual inference. Attention prioritizes processing of information that is relevant for observers’ current goals. Prior expectations encode the statistical structure of the environment. Research to date has mostly conflated spatial attention and expectation. Most notably, the Posner cueing paradigm manipulates spatial attention using probabilistic cues that indicate where the subsequent stimulus is likely to be presented. Only recently have studies attempted to dissociate the mechanisms of attention and expectation and characterized their interactive (i.e., synergistic) or additive influences on perception. In this review, we will first discuss methodological challenges that are involved in dissociating the mechanisms of attention and expectation. Second, we will review research that was designed to dissociate attention and expectation in the unisensory domain. Third, we will review the broad field of crossmodal endogenous and exogenous spatial attention that investigates the impact of attention across the senses. This raises the critical question of whether attention relies on amodal or modality-specific mechanisms. Fourth, we will discuss recent studies investigating the role of both spatial attention and expectation in multisensory perception, where the brain constructs a representation of the environment based on multiple sensory inputs. We conclude that spatial attention and expectation are closely intertwined in almost all circumstances of everyday life. Yet, despite their intimate relationship, attention and expectation rely on partly distinct neural mechanisms: while attentional resources are mainly shared across the senses, expectations can be formed in a modality-specific fashion.


Author(s):  
Amit Almor

Conversation with a remote person can interfere with performing vision-based tasks. Two experiments tested the role of general executive resources and spatial attentional resources in this interference. Both experiments assessed performance in vision-based tasks as participants engaged in a language task involving a virtual remote speaker. In both experiments, the language task interfered with the vision task more when participants were speaking or planning what to say next than when they were listening. In Experiment 1, speaking or planning what to say next was also associated with higher interference from a visual distractor than listening, indicating that preparing to speak and speaking pose higher executive requirements than listening. In both experiments, localizing the voice of the remote speaker in front of participants slightly reduced interference in comparison to other directions. This suggests that remote conversation requires spatial attention resources for representing the position of the remote person.


Author(s):  
Stéphane Grade
Nathalie Lefèvre
Mauro Pesenti

Recent findings suggest that number processing is intimately linked to space and attention orienting processes. For example, processing numbers induces shifts of spatial attention, with small numbers causing leftward shifts and large numbers causing rightward shifts, suggesting that number magnitude might be represented on a left-to-right mental number line. However, whether inducing spatial attention shifts would in turn influence number production, and whether such influence, if observed, would be restricted to the left-to-right orientation or would extend to an up-to-down orientation in space, remains a matter of debate. The present study assessed whether observing gaze movements, known to modulate spatial attention, was able to influence a random number generation task, and how different gaze directions modulated this influence. Participants were asked to randomly produce a number between 1 and 10 after they observed either a horizontal or a vertical eye gaze, or after they observed color changes as a control condition. The results revealed that number production was influenced by the prior presentation of specific gaze changes. Observing leftward or downward gaze led participants to produce more small than large numbers, whereas observing gaze oriented rightward or upward, or observing color changes, did not influence the magnitude of the numbers produced. These results show that the characteristics of the observed gaze changes primed number magnitude, but that this only held true for some movements, and these were not restricted to the left-to-right axis.

