covert attention
Recently Published Documents


TOTAL DOCUMENTS: 295 (five years: 87)
H-INDEX: 41 (five years: 4)

Vision, 2022, Vol. 6 (1), pp. 3
Author(s): Rébaï Soret, Pom Charras, Christophe Hurter, Vsevolod Peysakhovich

Recent studies on covert attention have suggested that the visual processing of information presented in front of us differs depending on whether that information is physically located in front of us or is a reflection of information behind us (mirror information). This difference suggests that distinct processes direct our attention to objects in front of us (front space) and behind us (rear space). In this study, we investigated the effects of attentional orienting to front and rear space following visual or auditory endogenous cues. Twenty-one participants performed a modified version of the Posner paradigm in virtual reality during a spaceship discrimination task. An eye tracker integrated into the virtual reality headset was used to verify that participants did not move their eyes and therefore relied on covert attention. The results show that informative cues produced faster response times than non-informative cues, but no effect on target identification was observed. In addition, response times were faster when the target occurred in front space rather than in rear space. These results are consistent with a differentiation of the orienting process between front and rear space. Several explanations are discussed. No effect was found on participants' eye movements, suggesting that participants did not use overt attention to improve task performance.
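As an illustration of how such a cueing design is typically summarised, the sketch below computes mean response times per cue type (informative vs. non-informative) and target space (front vs. rear) from a trial table. It is not the authors' analysis code; the column names and the example numbers are hypothetical.

```python
# Minimal sketch (not the authors' code) of summarising cueing and
# front/rear-space effects from a trial table. Column names ("rt", "cue",
# "space", "correct") and the example values are hypothetical.
import pandas as pd

def cueing_effects(trials: pd.DataFrame) -> pd.DataFrame:
    """Mean response time per cue type (informative vs. non-informative)
    and target space (front vs. rear), correct trials only."""
    correct = trials[trials["correct"]]
    return correct.groupby(["cue", "space"])["rt"].agg(["mean", "sem", "count"])

# Illustration with made-up trials:
trials = pd.DataFrame({
    "rt":      [0.52, 0.61, 0.55, 0.67, 0.50, 0.70],
    "cue":     ["informative", "non-informative"] * 3,
    "space":   ["front", "front", "rear", "rear", "front", "rear"],
    "correct": [True, True, True, True, True, False],
})
print(cueing_effects(trials))
```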


2022
Author(s): Qi Zhang, Zhibang Huang, Liang Li, Sheng Li

Visual search in a complex environment requires efficient discrimination between target and distractors. Training is an effective approach to improving visual search performance when the target does not automatically pop out from the distractors. In the present study, we trained subjects on a conjunction visual search task and examined the training effects on behavior and eye movements in Experiments 1 to 4. The results showed that training improved behavioral performance and reduced the number of saccades and the overall scanning time. Training also increased the search initiation time before the first saccade and the proportion of trials in which subjects correctly identified the target without any saccade, although these effects were modulated by stimulus parameters. In Experiment 5, we replicated these training effects while recording eye movements and EEG signals simultaneously. The results revealed significant N2pc components after stimulus onset (i.e., stimulus-locked) and before the first saccade (i.e., saccade-locked) when the search target was the trained one. These N2pc components can be considered neural signatures of the training-induced boost of covert attention to the trained target. The enhanced covert attention led to a beneficial tradeoff between search initiation time and the number of saccades, since a small increase in search initiation time could produce a larger reduction in scanning time. These findings suggest that enhanced covert attention to the target and optimized overt eye movements are coordinated to facilitate visual search training.
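For readers unfamiliar with the N2pc, it is conventionally measured as the difference between posterior electrodes contralateral and ipsilateral to the target hemifield (e.g., PO7/PO8), averaged over roughly 200-300 ms. A minimal numpy sketch of that computation follows; the array shapes, electrode pair, and time window are illustrative assumptions, not the authors' pipeline.

```python
# Minimal numpy sketch of the standard N2pc computation: contralateral minus
# ipsilateral activity at a posterior electrode pair (here assumed PO7/PO8),
# averaged over an assumed 200-300 ms post-stimulus window.
import numpy as np

def n2pc_amplitude(po7, po8, target_side, times, window=(0.2, 0.3)):
    """po7, po8: (n_trials, n_samples) stimulus-locked EEG, in volts.
    target_side: length n_trials array of "left"/"right" target hemifields.
    times: (n_samples,) time axis in seconds relative to stimulus onset."""
    target_side = np.asarray(target_side)
    target_left = (target_side == "left")[:, None]   # (n_trials, 1)
    contra = np.where(target_left, po8, po7)         # hemisphere opposite the target
    ipsi = np.where(target_left, po7, po8)           # same hemisphere as the target
    diff_wave = (contra - ipsi).mean(axis=0)         # average difference wave
    in_window = (times >= window[0]) & (times <= window[1])
    return diff_wave[in_window].mean()               # mean amplitude in the window
```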


2021, Vol. 15
Author(s): Yajun Zhou, Li Hu, Tianyou Yu, Yuanqing Li

Covert attention aids us in monitoring the environment and optimizing performance in visual tasks. Past behavioral studies have shown that covert attention can enhance spatial resolution. However, the electroencephalography (EEG) activity underlying neural processing between central and peripheral vision has not been systematically investigated. Here, we conducted an EEG study with 25 subjects who performed covert attention tasks at retinal eccentricities ranging from 0.75° to 13.90°, as well as tasks involving overt attention and no attention. EEG signals were recorded with a single stimulus frequency to evoke steady-state visual evoked potentials (SSVEPs) for attention evaluation. We found that the SSVEP response when fixating at the attended location was generally negatively correlated with stimulus eccentricity, whether eccentricity was characterized by Euclidean distance or by horizontal and vertical distance. Moreover, SSVEP characteristics were more pronounced under overt attention than under covert attention. Furthermore, offline classification of overt attention, covert attention, and no attention yielded an average accuracy of 91.42%. This work contributes to our understanding of the SSVEP representation of attention in humans and may also lead to brain-computer interfaces (BCIs) that allow people to communicate choices simply by shifting their attention to them.
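A frequency-tagged SSVEP response of this kind is usually quantified as the spectral amplitude at the flicker frequency, which can then be related to eccentricity. The sketch below illustrates this under stated assumptions (single-channel epochs, and the sampling rate and flicker frequency in the usage comment are hypothetical); it is not the authors' analysis code.

```python
# Minimal sketch of quantifying an SSVEP at the tagging frequency and
# relating it to eccentricity. Parameters are assumptions for illustration.
import numpy as np
from scipy.stats import pearsonr

def ssvep_amplitude(epoch, fs, f_stim):
    """Spectral amplitude of `epoch` (1-D occipital EEG segment, sampled at
    `fs` Hz) at the frequency bin closest to the flicker frequency `f_stim`."""
    spectrum = np.abs(np.fft.rfft(epoch)) / len(epoch)
    freqs = np.fft.rfftfreq(len(epoch), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - f_stim))]

def eccentricity_correlation(eccentricities, amplitudes):
    """Pearson correlation between stimulus eccentricity (degrees of visual
    angle) and SSVEP amplitude, mirroring the negative relationship above."""
    return pearsonr(eccentricities, amplitudes)

# Example usage (hypothetical 250 Hz recording tagged at 15 Hz):
# amps = [ssvep_amplitude(e, fs=250, f_stim=15.0) for e in epochs]
# r, p = eccentricity_correlation(eccentricities, amps)
```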


2021, Vol. 12
Author(s): Kendra Gimhani Kandana Arachchige, Wivine Blekic, Isabelle Simoes Loureiro, Laurent Lefebvre

Numerous studies have explored the benefit of iconic gestures for speech comprehension. However, only a few studies have investigated how visual attention is allocated to these gestures in the context of clear versus degraded speech, and how information is extracted from them to enhance comprehension. This study aimed to explore the effect of iconic gestures on comprehension and whether fixating a gesture is required to extract information from it. Four types of gestures (i.e., semantically and syntactically incongruent iconic gestures, meaningless configurations, and congruent iconic gestures) were presented in a sentence context under three listening conditions (i.e., clear, partly degraded, or fully degraded speech). Using eye-tracking technology, participants' gaze was recorded while they watched the video clips, after which they answered simple comprehension questions. Results first showed that the different types of gestures attract attention differently and that the more the speech was degraded, the less attention participants paid to the gestures. Furthermore, semantically incongruent gestures particularly impaired comprehension even though they were not fixated, while congruent gestures improved comprehension despite likewise not being fixated. These results suggest that covert attention is sufficient to convey information that is then processed by the listener.
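Whether a gesture counts as "fixated" in such a design typically comes down to an area-of-interest (AOI) check on the gaze samples. The sketch below shows one crude version of that check; the AOI bounding box and the sample-count threshold are hypothetical, not parameters from the study.

```python
# Minimal sketch of an area-of-interest (AOI) check used to decide whether
# a gesture was fixated. The AOI box and threshold are hypothetical.
import numpy as np

def fixated_gesture(gaze_x, gaze_y, aoi, min_samples=30):
    """gaze_x, gaze_y: per-sample gaze coordinates in screen pixels.
    aoi: (x_min, y_min, x_max, y_max) box around the gesturing hands.
    Returns True if at least `min_samples` gaze samples fall inside the AOI
    (a crude proxy for a fixation on the gesture)."""
    gaze_x, gaze_y = np.asarray(gaze_x), np.asarray(gaze_y)
    x_min, y_min, x_max, y_max = aoi
    inside = ((gaze_x >= x_min) & (gaze_x <= x_max) &
              (gaze_y >= y_min) & (gaze_y <= y_max))
    return int(inside.sum()) >= min_samples
```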


2021
Author(s): Matthew David Weaver

People are constantly confronted by a barrage of visual information. Visual attention is the crucial mechanism that selects, for further processing, the subsets of information that are most behaviourally relevant, allowing us to function effectively within our everyday environment. This thesis explored how semantic information (i.e., information that has meaning) encountered within the environment influences the selective orienting of visual attention. Past research has shown that semantic information does affect the orienting of attention, but the processes by which it does so remain unclear. The extent of semantic influence on the visual attention system was determined by parsing visual orienting into the tractable components of covert and overt orienting, and the capture and hold process stages therein. The thesis consisted of a series of experiments, designed utilising well-established paradigms and semantic manipulations in concert with eye-tracking techniques, to test whether the capture and hold of overt or covert visual attention were influenced by semantic information. Taking the main findings across all experiments together, the following conclusions were drawn. 1) Semantic information differentially influences covert and overt attentional orienting processes. 2) The capture and hold of covert attention is generally uninfluenced by semantic information. 3) Semantic information briefly encountered in the environment can facilitate or prime action independently of covert attentional orienting. 4) Overt attention can be both preferentially captured and held by semantically salient information encountered in visual environments. The visual attention system thus appears to have a complex relationship with semantic information encountered in the visual environment. Semantic information has a differential influence on selective orienting processes that depends on the form of orienting employed and the range of circumstances under which attentional selection takes place.


2021, Vol. 21 (11), pp. 5
Author(s): Amy Chow, Yiwei Quan, Celine Chui, Roxane J. Itier, Benjamin Thompson

2021
Author(s): Nina M. Hanning, Marc M. Himmelberg, Marisa Carrasco

Human visual performance is not only better at the fovea and decreases with eccentricity, but also shows striking radial asymmetries around the visual field: at a fixed eccentricity, it is better (1) along the horizontal than the vertical meridian and (2) along the lower than the upper vertical meridian. These asymmetries, known as performance fields, are pervasive (they emerge for many visual dimensions, regardless of head rotation, stimulus orientation, or display luminance) and resilient (they are not alleviated by covert exogenous or endogenous attention deployed in the absence of eye movements). Performance fields have been studied exclusively during eye fixation. However, a major driver of everyday attentional orienting is saccade preparation, during which visual attention automatically shifts to the future eye fixation. This presaccadic shift of attention is considered strong and compulsory, and it relies on neural computations and substrates fundamentally different from those of covert attention. Given these differences, we investigated whether presaccadic attention can compensate for the ubiquitous performance asymmetries observed during eye fixation. Our data replicate polar performance asymmetries during fixation and document the same asymmetries during saccade preparation. Crucially, however, presaccadic attention enhanced contrast sensitivity at the horizontal and lower vertical meridian, but not at the upper vertical meridian. Thus, instead of attenuating polar performance asymmetries, presaccadic attention exacerbates them.
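Since contrast sensitivity is conventionally the reciprocal of the contrast threshold, the presaccadic benefit described above can be expressed in a few lines of code. The sketch below is illustrative only; it does not reproduce the authors' data or analysis, and all threshold values are supplied by the caller.

```python
# Minimal sketch: contrast sensitivity per meridian location and the change
# ("benefit") from fixation to saccade preparation. Nothing here is taken
# from the study's data.
import numpy as np

def contrast_sensitivity(threshold):
    """Contrast sensitivity as the reciprocal of the contrast threshold."""
    return 1.0 / np.asarray(threshold, dtype=float)

def presaccadic_benefit(threshold_fixation, threshold_presaccadic):
    """Sensitivity change at a meridian location when the target is probed
    during saccade preparation rather than during fixation."""
    return (contrast_sensitivity(threshold_presaccadic)
            - contrast_sensitivity(threshold_fixation))
```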

