Semantic Influences on Overt and Covert Visual Attention

2021
Author(s):
Matthew David Weaver

<p>People are constantly confronted by a barrage of visual information. Visual attention is the crucial mechanism that selects, for further processing, the subsets of information that are most behaviourally relevant, allowing us to function effectively within our everyday environment. This thesis explored how semantic information (i.e., information which has meaning) encountered within the environment influences the selective orienting of visual attention. Past research has shown that semantic information does affect the orienting of attention, but the processes by which it does so remain unclear. The extent of semantic influence on the visual attention system was determined by parsing visual orienting into the tractable components of covert and overt orienting, and the capture and hold process stages therein. The thesis comprised a series of experiments, designed using well-established paradigms and semantic manipulations in concert with eye-tracking techniques, to test whether the capture and hold of either overt or covert visual attention were influenced by semantic information. Taking the main findings across all experiments together, the following conclusions were drawn. 1) Semantic information differentially influences covert and overt attentional orienting processes. 2) The capture and hold of covert attention is generally uninfluenced by semantic information. 3) Semantic information briefly encountered in the environment can facilitate or prime action independent of covert attentional orienting. 4) Overt attention can be both preferentially captured and held by semantically salient information encountered in visual environments. The visual attention system thus appears to have a complex relationship with semantic information encountered in the visual environment. Semantic information has a differential influence on selective orienting processes that depends on the form of orienting employed and the circumstances under which attentional selection takes place.</p>


1998
Vol 79 (3)
pp. 1574-1578
Author(s):
Ewa Wojciulik
Nancy Kanwisher
Jon Driver

Wojciulik, Ewa, Nancy Kanwisher, and Jon Driver. Covert visual attention modulates face-specific activity in the human fusiform gyrus: an fMRI study. J. Neurophysiol. 79: 1574–1578, 1998. Several lines of evidence demonstrate that faces undergo specialized processing within the primate visual system. It has been claimed that dedicated modules for such biologically significant stimuli operate in a mandatory fashion whenever their triggering input is presented. However, the possible role of covert attention to the activating stimulus has never been examined for such cases. We used functional magnetic resonance imaging to test whether face-specific activity in the human fusiform face area (FFA) is modulated by covert attention. The FFA was first identified individually in each subject as the ventral occipitotemporal region that responded more strongly to visually presented faces than to other visual objects under passive central viewing. This then served as the region of interest within which attentional modulation was tested independently, using active tasks and a very different stimulus set. Subjects viewed brief displays, each comprising two peripheral faces and two peripheral houses presented simultaneously. They performed a matching task on either the two faces or the two houses, while maintaining central fixation to equate retinal stimulation across tasks. Signal intensity was reliably stronger during face matching than house matching in both right- and left-hemisphere predefined FFAs. These results show that face-specific fusiform activity is reduced when stimuli appear outside (vs. inside) the focus of attention. Despite the modular nature of the FFA (i.e., its functional specificity and anatomic localization), face processing in this region nonetheless depends on voluntary attention.


1995
Vol 7 (2)
pp. 351-367
Author(s):
Deborah A. Pearson
Laura S. Yaffee
Katherine A. Loveland
Amy M. Norton

Shifts in covert visual attention were compared in children with and without Attention Deficit Hyperactivity Disorder (ADHD) to determine whether children with ADHD have developmental immaturities in covert attention relative to their non-ADHD peers. Children were told to orient attention to a central fixation point and were then cued, by both central and peripheral cues, to direct their attention to either the left or right peripheral field. Following variable time intervals, the target appeared, and reaction times and errors were recorded. Although the performance of all subjects showed facilitation when attention was directed by valid cues and inhibition when attention was directed by invalid cues, the performance of children with ADHD was far more disrupted when their attention was misled by invalid cues, especially at longer intervals. This disruption was reflected in significantly higher error rates in the ADHD group. They also showed a pattern of attentional “waxing and waning” in performance over longer time intervals that has previously been found in auditory attention switching within trials in children with ADHD. Overall, results are inconsistent with developmentally immature covert attention skills in ADHD. Findings are discussed in terms of the concept of global “developmental immaturity” in the attention skills of children with ADHD.
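The cue-validity logic of this paradigm can be sketched in a few lines: the validity effect is the mean reaction time on invalid-cue trials minus the mean on valid-cue trials. The trial data below are fabricated for illustration only, not taken from the study.

```python
# Hypothetical sketch of a cue-validity analysis for a Posner-style cueing
# task: valid cues facilitate responding, invalid cues slow it, and the
# validity effect is the invalid-minus-valid RT difference.
from statistics import mean

def validity_effect(trials):
    """trials: iterable of (cue_valid, rt_ms) pairs; returns the RT cost in ms."""
    valid = [rt for ok, rt in trials if ok]
    invalid = [rt for ok, rt in trials if not ok]
    return mean(invalid) - mean(valid)

# Illustrative (fabricated) trials: three valid-cue and three invalid-cue RTs.
trials = [(True, 320), (True, 335), (True, 310),
          (False, 410), (False, 395), (False, 430)]
effect = validity_effect(trials)  # positive => attention followed the cue
```

A larger invalid-cue disruption, as reported here for the ADHD group, would show up as a larger value of this effect.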


2010
Vol 104 (6)
pp. 3074-3083
Author(s):
Sucharit Katyal
Samir Zughni
Clint Greene
David Ress

Experiments were performed to examine the topography of covert visual attention signals in human superior colliculus (SC), both across its surface and in its depth. We measured the retinotopic organization of SC to direct visual stimulation using a 90° wedge of moving dots that slowly rotated around fixation. Subjects (n = 5) were cued to perform a difficult speed-discrimination task in the rotating region. To measure the retinotopy of covert attention, we used a full-field array of similarly moving dots; subjects were cued to perform the same speed-discrimination task within a 90° wedge-shaped region, and only the cue rotated around fixation. High-resolution functional magnetic resonance imaging (fMRI, 1.2 mm voxels) data were acquired throughout SC and aligned to a high-resolution T1-weighted reference volume. The SC was segmented in this volume so that its surface could be computationally modeled and a depth map calculated for laminar analysis. Retinotopic maps were obtained for both direct visual stimulation and covert attention. These maps showed a spatial distribution similar to visual stimulation maps observed in rhesus macaque and were in registration with each other. Within the depth of SC, both visual attention and stimulation produced activity primarily in the superficial and intermediate layers, but stimulation activity extended significantly more deeply than attention activity.
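The rotating-wedge measurement follows the standard phase-encoded retinotopy logic: a stimulus (or cue) rotating at a known frequency evokes a periodic response, and the Fourier phase at that frequency estimates a voxel's preferred polar angle. The sketch below is a generic illustration with synthetic data, not the study's analysis pipeline.

```python
# Generic phase-encoded retinotopy sketch: recover a voxel's preferred polar
# angle from the phase of its response at the stimulus rotation frequency.
import numpy as np

def preferred_angle(timeseries, n_cycles):
    """Preferred polar angle (radians in [0, 2*pi)) from the response phase."""
    spectrum = np.fft.fft(timeseries)
    # The FFT of cos(w*t - phi) has phase -phi at the stimulus frequency,
    # so negate to recover phi.
    return (-np.angle(spectrum[n_cycles])) % (2 * np.pi)

# Synthetic voxel: 240 volumes, 8 wedge rotations, maximal response when the
# wedge passes 90 degrees (pi/2).
n_vol, n_cycles = 240, 8
t = np.arange(n_vol)
signal = np.cos(2 * np.pi * n_cycles * t / n_vol - np.pi / 2)
angle = preferred_angle(signal, n_cycles)
```

Mapping this angle across voxels, separately for the stimulation and attention runs, is what yields the two registered retinotopic maps described above.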


2012
Vol 25 (0)
pp. 61
Author(s):
Thomas D. Wright
Jamie Ward
Sarah Simonon
Aaron Margolis

Sensory substitution is the representation of information from one sensory modality (e.g., vision) within another modality (e.g., audition). We used a visual-to-auditory sensory substitution device (SSD) to explore the effect of incongruous (true-)visual and substituted-visual signals on visual attention. In our multisensory sensory substitution paradigm, both visual and sonified-visual information were presented. By making small alterations to the sonified image, but not the seen image, we introduced audio–visual mismatch. The alterations consisted of the addition of a small image (for instance, the Wally character from the ‘Where’s Wally?’ books) within the original image. Participants were asked to listen to the sonified image and identify which quadrant contained the alteration. Monitoring eye movements revealed the effect of the audio–visual mismatch on covert visual attention. We found that participants consistently fixated more, and dwelled for longer, in the quadrant corresponding to the location (in the sonified image) of the target. This effect was not contingent on the participant reporting the location of the target correctly, which indicates a low-level interaction between an auditory stream and visual attention. We propose that this suggests a shared visual workspace that is accessible by visual sources other than the eyes. If this is indeed the case, it would support the development of other, more esoteric, forms of sensory substitution. These could include an expanded field of view (e.g., rear-view cameras), overlaid visual information (e.g., thermal imaging) or restoration of partial visual field loss (e.g., hemianopsia).
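A visual-to-auditory SSD of this kind can be illustrated with a toy sonification: the image is scanned column by column, each column becomes a time slice, and each bright pixel contributes a tone whose frequency rises with pixel height. The mapping and parameters below are illustrative assumptions, not the device used in the study.

```python
# Toy image sonification: columns map to time, pixel height maps to frequency.
import math

def sonify_column(column, base_hz=200.0, step_hz=100.0, n_samples=32, rate=8000):
    """Mix one sinusoid per active pixel; lower pixels give lower frequencies."""
    freqs = [base_hz + i * step_hz for i, px in enumerate(column) if px > 0]
    return [sum(math.sin(2 * math.pi * f * n / rate) for f in freqs)
            for n in range(n_samples)]

def sonify_image(columns):
    """columns: list of pixel columns (bottom-up); returns concatenated audio."""
    audio = []
    for col in columns:
        audio.extend(sonify_column(col))
    return audio

# A small "alteration" (one bright pixel) in the last column adds energy only
# to the final time slice, which is the kind of audio-visual mismatch the
# participants listened for.
image = [[0, 0, 0], [0, 0, 0], [0, 0, 1]]
audio = sonify_image(image)
```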


Author(s):
Yoko Higuchi
Satoshi Inoue
Hiroto Hamada
Takatsune Kumada

Objective The objective of this study was to investigate whether an artificial optic flow created by dot motion guides attention in a driving scene. Background To achieve safe driving, it is essential to understand the characteristics of human visual information processing as well as to provide appropriate support for drivers. Past research has demonstrated that expanding optic flow guides visual attention to the focus of expansion. Optic flow is therefore an attractive candidate for a cue that directs drivers’ attention toward significant information. The question addressed is whether an artificial optic flow can successfully guide attention even in a traffic situation that already contains the optic flow naturally produced by driving. Method We developed a visual search paradigm embedded in a video of a driving scene. Participants first observed an optic flow motion pattern superimposed on the video for a brief period; next, when the optic flow and video ceased, they searched a static display for a target among multiple distractors. Results Target detection was faster when the target’s locus coincided with the implied focus of expansion of the preceding optic flow than at other loci. Conclusion An artificial optic flow guides attention and facilitates searching for objects at the focus of expansion, even when superimposed on a driving scene. Application Optic flow can be an effective cue for guiding drivers’ attention in traffic situations. This finding contributes to the understanding of visual attention in moving space and helps develop technology for traffic safety.
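The expanding-dot motion described here can be sketched minimally: each dot moves radially away from a chosen focus of expansion (FOE), the point toward which such flow is said to draw attention. Coordinates and expansion rate below are illustrative, not the study's stimulus parameters.

```python
# Minimal expanding optic-flow sketch: every dot recedes from the FOE, so its
# distance from the FOE grows by a fixed fraction per frame.
import math

def step_dots(dots, foe, speed=0.05):
    """Advance each (x, y) dot away from the FOE by a fixed expansion rate."""
    fx, fy = foe
    return [(x + speed * (x - fx), y + speed * (y - fy)) for x, y in dots]

foe = (0.0, 0.0)
dots = [(1.0, 0.0), (0.0, 2.0), (-1.0, -1.0)]
moved = step_dots(dots, foe)  # each dot's distance from the FOE grows by 5%
```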


2018
Author(s):
Amarender R. Bogadhi
Anil Bollimunta
David A. Leopold
Richard J. Krauzlis

Neurophysiology studies of covert visual attention in monkeys have emphasized the modulation of sensory neural responses in the visual cortex. At the same time, electrophysiological correlates of attention have been reported in other cortical and subcortical structures, and recent fMRI studies have identified regions across the brain that are modulated by attention. Here we used fMRI in two monkeys performing covert attention tasks to reproduce and extend these findings, in order to help establish a more complete list of brain structures involved in the control of attention. As expected from previous studies, we found attention-related modulation in frontal, parietal and visual cortical areas as well as in the superior colliculus and pulvinar. We also found significant attention-related modulation in regions not traditionally linked to attention: the mid-STS areas (anterior FST and parts of IPa, PGa, TPO) and the caudate nucleus. A control experiment using a second-order orientation stimulus showed that the observed modulation in a subset of these mid-STS areas did not depend on visual motion. These results identify the mid-STS areas (anterior FST and parts of IPa, PGa, TPO) and the caudate nucleus as potentially important brain regions in the control of covert visual attention in monkeys.


Author(s):
Haiyang Wei
Zhixin Li
Feicheng Huang
Canlong Zhang
Huifang Ma
...  

Most existing image captioning methods use only the visual information of the image to guide caption generation, lack the guidance of effective scene-semantic information, and rely on visual attention mechanisms that cannot adjust their focus intensity on the image. In this article, we first propose an improved visual attention model. At each timestep, we calculate the focus intensity coefficient of the attention mechanism from the model’s context information, then automatically adjust the focus intensity of the attention mechanism through this coefficient to extract more accurate visual information. In addition, we represent the scene-semantic knowledge of the image through topic words related to the image scene and add them to the language model. We use the attention mechanism to determine the visual information and scene-semantic information that the model attends to at each timestep and combine them to enable the model to generate more accurate and scene-specific captions. Finally, we evaluated our model on the Microsoft COCO (MSCOCO) and Flickr30k standard datasets. The experimental results show that our approach generates more accurate captions and outperforms many recent advanced models on various evaluation metrics.
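One common way to realize a "focus intensity" coefficient is to scale the attention scores by a context-derived factor beta before the softmax: beta > 1 sharpens the distribution over image regions, beta < 1 flattens it. This is a hedged sketch of that idea; the article's exact formulation may differ, and the scores and beta values here are illustrative.

```python
# Softmax attention with a focus-intensity (temperature-like) coefficient beta.
import math

def attention_weights(scores, beta=1.0):
    """Softmax over region scores, sharpened or flattened by beta."""
    exps = [math.exp(beta * s) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

scores = [1.0, 2.0, 3.0]                             # per-region relevance scores
diffuse = attention_weights(scores, beta=0.5)        # broad focus
concentrated = attention_weights(scores, beta=3.0)   # narrow focus on region 3
```

Letting the model choose beta per timestep is what allows attention to widen for global scene context and narrow for specific objects.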


2021
Vol 3 (1)
pp. 1-46
Author(s):
Alexander Krüger
Jan Tünnermann
Lukas Stratmann
Lucas Briese
Falko Dressler
...  

As a formal theory, Bundesen’s theory of visual attention (TVA) enables the estimation of several theoretically meaningful parameters involved in attentional selection and visual encoding. As of yet, TVA has almost exclusively been used in restricted empirical scenarios such as whole and partial report, and with strictly controlled stimulus material. We present a series of experiments in which we test whether the advantages of TVA can be exploited in more realistic scenarios with varying degrees of stimulus control. These include brief experimental sessions conducted on different mobile devices, computer games, and a driving simulator. Overall, six experiments demonstrate that the TVA parameters for processing capacity and attentional weight can be measured with sufficient precision in less controlled scenarios and that the results do not deviate strongly from typical laboratory results, although some systematic differences were found.
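The two parameters highlighted here sit in TVA's rate equation: element x is processed at rate v_x = C · w_x / Σw, where C is overall processing capacity and w_x the element's attentional weight, and encoding is an exponential race at these rates. The sketch below uses illustrative parameter values, not estimates from these experiments.

```python
# Minimal TVA-style computation: distribute capacity C over display elements in
# proportion to their attentional weights, then get per-element encoding
# probabilities from an exponential race within the exposure duration.
import math

def processing_rates(weights, capacity):
    """v_x = C * w_x / sum(w) for each element x."""
    total = sum(weights)
    return [capacity * w / total for w in weights]

def p_encoded(rate, exposure_s):
    """Probability an element finishes processing within the exposure."""
    return 1 - math.exp(-rate * exposure_s)

weights = [2.0, 1.0, 1.0]                # first element carries more weight
rates = processing_rates(weights, 40.0)  # illustrative capacity C = 40 items/s
probs = [p_encoded(v, 0.05) for v in rates]  # 50 ms exposure
```

Fitting C and the w_x to report accuracies across exposure durations is what the less controlled scenarios above must still support with sufficient precision.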


2021
pp. 216770262110302
Author(s):  
M. Justin Kim
Maxwell L. Elliott
Annchen R. Knodt
Ahmad R. Hariri

Past research on the brain correlates of trait anger has been limited by small sample sizes, a focus on relatively few regions of interest, and poor test–retest reliability of functional brain measures. To address these limitations, we conducted a data-driven analysis of variability in connectome-wide functional connectivity in a sample of 1,048 young adult volunteers. Multidimensional matrix regression analysis showed that self-reported trait anger maps onto variability in the whole-brain functional connectivity patterns of three brain regions that serve action-related functions: bilateral supplementary motor areas and the right lateral frontal pole. We then demonstrate that trait anger modulates the functional connectivity of these regions with canonical brain networks supporting somatomotor, affective, self-referential, and visual information processes. Our findings offer novel neuroimaging evidence for interpreting trait anger as a greater propensity to provoked action, which supports ongoing efforts to understand its utility as a potential transdiagnostic marker for disordered states characterized by aggressive behavior.

