overt attention
Recently Published Documents


TOTAL DOCUMENTS: 102 (last five years: 36)
H-INDEX: 17 (last five years: 2)

2021 · Vol. 15
Author(s): Yajun Zhou, Li Hu, Tianyou Yu, Yuanqing Li

Covert attention aids us in monitoring the environment and optimizing performance in visual tasks. Past behavioral studies have shown that covert attention can enhance spatial resolution. However, electroencephalography (EEG) activity related to neural processing between central and peripheral vision has not been systematically investigated. Here, we conducted an EEG study with 25 subjects who performed covert attentional tasks at different retinal eccentricities ranging from 0.75° to 13.90°, as well as tasks involving overt attention and no attention. EEG signals were recorded while a single stimulus frequency was used to evoke steady-state visual evoked potentials (SSVEPs) for attention evaluation. We found that the SSVEP response when fixating at the attended location was generally negatively correlated with stimulus eccentricity, whether eccentricity was characterized by Euclidean distance or by horizontal and vertical distance. Moreover, SSVEP characteristics were more pronounced under overt attention than under covert attention. Furthermore, offline classification of overt attention, covert attention, and no attention yielded an average accuracy of 91.42%. This work contributes to our understanding of the SSVEP representation of attention in humans and may also lead to brain-computer interfaces (BCIs) that allow people to communicate choices simply by shifting their attention to them.
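The abstract does not report how the SSVEP-eccentricity relation was computed; as a minimal illustration (not the authors' pipeline), one can correlate per-condition SSVEP amplitudes with stimulus eccentricity. All values below are hypothetical.

```python
# Sketch: relate SSVEP amplitude to stimulus eccentricity (hypothetical values only).
import numpy as np
from scipy import stats

# Eccentricities spanning roughly the range used in the study (degrees of visual angle).
eccentricity_deg = np.array([0.75, 2.0, 4.0, 7.0, 10.0, 13.9])

# Hypothetical mean SSVEP amplitudes (arbitrary units) for each eccentricity.
ssvep_amplitude = np.array([2.4, 2.1, 1.8, 1.5, 1.2, 1.0])

# A negative Pearson correlation mirrors the reported eccentricity effect.
r, p = stats.pearsonr(eccentricity_deg, ssvep_amplitude)
print(f"r = {r:.2f}, p = {p:.3g}")
```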


2021
Author(s): Jennifer Sudkamp, David Souto

To navigate safely, pedestrians need to accurately perceive and predict other road users' motion trajectories. Previous research has shown that the way visual information is sampled affects motion perception. Here we asked how overt attention affects time-to-arrival prediction of oncoming vehicles viewed from a pedestrian's point of view in a virtual road-crossing scenario. In three online experiments, we tested time-to-arrival prediction accuracy when observers pursued an approaching vehicle, fixated towards the road-crossing area, fixated towards the road close to the vehicle's trajectory, or were free to view the scene. When the observer-vehicle distance was large, participants displayed a central tendency in their predicted arrival times, indicating that vehicle speed was insufficiently taken into account when estimating its time-to-arrival. This was especially the case when participants fixated towards the road-crossing area, resulting in time-to-arrival overestimation of slow-moving vehicles and underestimation of fast-moving vehicles. The central tendency bias decreased when participants pursued the vehicle or when the eccentricity between the fixation location and the vehicle trajectory was reduced. Our results identify an unfavorable visual sampling strategy as a potential risk factor for pedestrians and suggest that overt attention is best directed towards the approaching traffic to derive accurate time-to-arrival estimates. To support pedestrian safety, we conclude that the promotion of adequate visual sampling strategies should be considered in both traffic planning and safety training measures.
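The central tendency described here is often modeled as a weighted average of the sensory estimate and the mean of previously experienced durations; the sketch below is a generic illustration of that compression, with an arbitrary weight, not a fitted model from this study.

```python
# Sketch: central tendency (regression to the mean) in time-to-arrival estimates.
import numpy as np

true_tta = np.array([1.5, 2.5, 3.5, 4.5, 5.5])  # seconds, hypothetical trials
prior_mean = true_tta.mean()                    # mean of experienced arrival times
w = 0.6                                         # weight on the sensory estimate (illustrative)

# Estimates are pulled towards the prior mean, so extreme arrival times are
# misjudged in the direction of the mean; the pull grows as w decreases.
estimated_tta = w * true_tta + (1 - w) * prior_mean
for t, e in zip(true_tta, estimated_tta):
    print(f"true {t:.1f} s -> estimated {e:.1f} s")
```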


2021
Author(s): Karin van Nispen, Kazuki Sekine, Ineke van der Meulen, Basil Christoph Preisig

Co-speech hand gestures are a ubiquitous form of nonverbal communication that can express additional information not present in speech. Hand gestures may become more relevant when speech production is impaired, as in patients with post-stroke aphasia. In fact, patients with aphasia produce more gestures than control speakers. Further, their gestures seem to be more relevant for the understanding of their communication. In the present study, we asked whether the gestures produced by speakers with aphasia catch the attention of their addressees. Healthy volunteers (observers) watched short video clips while their eye movements were recorded. These video clips featured speakers with aphasia and control speakers describing two different scenarios (buying a sweater or having witnessed an accident). Our results show that hand gestures produced by speakers with aphasia are, on average, attended for longer than gestures produced by control speakers. This effect is significant even when we control for the longer duration of the gestural movements in speakers with aphasia. Further, the amount of information in speech was correlated with gesture attention: gestures produced by speakers with less informative speech were attended more frequently. In conclusion, our results highlight two main points. First, overt attention to co-speech hand gestures increases with their communicative relevance. Second, these findings have clinical implications because they show that the extra effort that speakers with aphasia put into gesture is worthwhile, as interlocutors seem to notice their gestures.


2021
Author(s): Leila Azizi, Ignacio Polti, Virginie van Wassenhove

Abstract: We seldom time life events intently, yet recalling the duration of events is lifelike. Is episodic time the outcome of a rational after-thought or of physiological clocks keeping track of time without our conscious awareness of it? To answer this, we recorded human brain activity with magnetoencephalography (MEG) during quiet wakefulness. Unbeknownst to participants, we asked them after the MEG recording to guess its duration. In the absence of overt attention to time, the relative amount of time participants' alpha brain rhythms (α, ~10 Hz) were in bursting mode predicted participants' retrospective duration estimates. This relation was absent when participants prospectively measured elapsed time during the MEG recording. We conclude that α bursts embody discrete states of awareness for episodic timing.
One-Sentence Summary: In the human brain, the relative number of alpha oscillatory bursts at ~10 Hz can tell time when the observer does not attend to it.
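Burst detection for alpha rhythms is commonly done by band-pass filtering around 10 Hz, extracting the amplitude envelope, and thresholding it; the following sketch applies that generic recipe to simulated data and is not the authors' exact procedure.

```python
# Sketch: estimate the fraction of time alpha activity spends in "bursting mode".
# Simulated signal and arbitrary threshold; illustrative only.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 500.0
t = np.arange(0, 60.0, 1.0 / fs)                 # one minute of simulated "MEG"
rng = np.random.default_rng(1)
gate = rng.random(t.size // 5000).repeat(5000)   # alpha switched on/off in 10 s chunks
alpha = np.sin(2 * np.pi * 10 * t) * (gate > 0.5)
signal = alpha + rng.normal(0, 0.5, t.size)

# Band-pass 8-12 Hz, then take the analytic amplitude envelope.
b, a = butter(4, [8 / (fs / 2), 12 / (fs / 2)], btype="band")
envelope = np.abs(hilbert(filtfilt(b, a, signal)))

# Call samples above an amplitude threshold "bursts" and report their proportion.
threshold = 2 * np.median(envelope)
burst_fraction = np.mean(envelope > threshold)
print(f"fraction of time in alpha bursts: {burst_fraction:.2f}")
```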


2021
Author(s): Stephan Koenig, David Torrents-Rodas, Metin Üngör, Harald Lachnit

We used an implicit learning paradigm to examine the acquisition of color-reward associations when colors were task-irrelevant and attention to color was detrimental to performance. Our task required a manual classification response to a shape target, and a correct response was rewarded with either 1 or 10 cents. The amount of reward was contingent on the color of a simultaneous color distractor, and different colors were associated with low reward (always 1 cent), partial reward (randomly either 1 or 10 cents), and high reward (always 10 cents). Attention to color was nonstrategic for maximizing reward because it interfered with the response to the target. We examined the potential of reward-associated colors to capture and hold overt attention automatically. Reward expectancy increased with the average amount of associated reward (low < partial < high). Reward uncertainty was highest for the partially rewarded distractor color (low < partial > high). Results revealed that capture frequency was linked to reward expectancy, while capture duration additionally seemed to be influenced by uncertainty, complementing previous findings of such a dissociation in appetitive and aversive learning (Koenig, Kadel, Uengoer, Schubö, & Lachnit, 2017; Koenig, Uengoer, & Lachnit, 2017).
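The expectancy and uncertainty orderings quoted above follow directly from the reward schedules if expectancy is taken as the mean payoff of a color and uncertainty as its variability; the sketch below assumes the partial color pays 1 or 10 cents with equal probability, which the abstract implies but does not state.

```python
# Sketch: reward expectancy (mean) and uncertainty (standard deviation) per color.
import numpy as np

schedules = {
    "low":     [1, 1],    # always 1 cent
    "partial": [1, 10],   # 1 or 10 cents, assumed equiprobable
    "high":    [10, 10],  # always 10 cents
}

for color, payoffs in schedules.items():
    payoffs = np.array(payoffs, dtype=float)
    print(f"{color:8s} expectancy = {payoffs.mean():5.2f}  "
          f"uncertainty = {payoffs.std():5.2f}")
# Ordering: expectancy low < partial < high; uncertainty low < partial > high.
```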


PLoS ONE · 2021 · Vol. 16 (6) · pp. e0250763
Author(s): Nichole E. Scheerer, Elina Birmingham, Troy Q. Boucher, Grace Iarocci

This study examined involuntary capture of attention, overt attention, and stimulus valence and arousal ratings, all factors that can contribute to potential attentional biases to face and train objects in children with and without autism spectrum disorder (ASD). In the visual domain, faces are particularly captivating and are thought to have a ‘special status’ in the attentional system. Research suggests that similar attentional biases may exist for other objects of expertise (e.g. birds for bird experts), providing support for the role of exposure in attention prioritization. Autistic individuals often have circumscribed interests around certain classes of objects, such as trains, that are related to vehicles and mechanical systems. This research aimed to determine whether this propensity in autistic individuals leads to stronger attention capture by trains, and perhaps weaker attention capture by faces, than what would be expected in non-autistic children. In Experiment 1, autistic children (6–14 years old) and age- and IQ-matched non-autistic children performed a visual search task in which they manually indicated whether a target butterfly appeared amongst an array of face, train, and neutral distractors while their eye movements were tracked. Autistic children were no less susceptible to attention capture by faces than non-autistic children. Overall, for both groups, trains captured attention more strongly than face stimuli, and train distractors had a larger effect on overt attention to the target stimuli than face distractors did. In Experiment 2, a new group of children (autistic and non-autistic) rated the train stimuli as more interesting and exciting than the face stimuli, with no differences between groups. These results suggest that (1) other objects (trains) can capture attention in a similar manner to faces in both autistic and non-autistic children, and (2) attention capture is driven partly by voluntary attentional processes related to personal interest or affective responses to the stimuli.


2021 · Vol. 10 (1) · pp. 17-39
Author(s): Ruth S. Ogden, Frederieke Turner, Ralph Pawling

Abstract: Cognitive models of time perception propose that perceived duration is influenced by how quickly attention is orientated to the to-be-timed event and how consistently attention is sustained on it throughout its presentation. Insufficient attention to time is therefore associated with shorter, more variable representations of duration. However, these models do not specify whether covert or overt attentional systems are primarily responsible for paying attention during timing. The current study sought to establish the role of overt attention allocation during timing by examining the relationship between eye movements and perceived duration. Participants completed a modified spatial cueing task in which they estimated the duration of short (1400 ms) and long (2100 ms) validly and invalidly cued targets. Time to first fixation and dwell time were recorded throughout. The results showed no significant relationship between overt sustained attention and mean duration estimates. Reductions in overt sustained attention were, however, associated with increases in estimate variability for the long target duration. Overt attention orientation latency was predictive of the difference in the perceived duration of validly and invalidly cued short targets, but not long ones. The results suggest that overt attention allocation may have limited impact on perceived duration.


2021
Author(s): Yoko Urano, Aaron Kurosu, Gregory Henselman-Petrusek, Alexander Todorov

Here we examine an untested assumption among graphic designers that a concept called “visual hierarchy” is tied to the perception of good design. Visual hierarchy refers to the sequence in which graphic elements in a design are seen. From a design perspective, a stronger visual hierarchy means that a graphic design, such as a poster, will lead to more similar eye movements among its audience. From a psychology perspective, stronger visual hierarchy may mean that information for guiding overt attention is being more effectively communicated. The consequent cognitive ease may facilitate an aesthetic experience, thereby explaining how visual hierarchy could be linked to perceptions of good design. In an empirical test, we see that when people agree that a design is good, their eye movements are more likely to be synchronous.
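The abstract does not specify how eye-movement synchrony was measured; one common proxy (assumed here for illustration) is the mean pairwise correlation of observers' fixation-density maps.

```python
# Sketch: quantify how synchronous observers' eye movements are on one design.
# Fixation-density correlation is one common proxy; fixation data are hypothetical.
import numpy as np
from itertools import combinations

rng = np.random.default_rng(2)
grid = (24, 32)  # coarse spatial grid over the poster (rows, cols)

def density_map(fix_rows, fix_cols, grid):
    """Histogram of fixation locations, normalised to sum to 1."""
    hist, _, _ = np.histogram2d(fix_rows, fix_cols,
                                bins=grid, range=[[0, grid[0]], [0, grid[1]]])
    return hist.flatten() / hist.sum()

# Hypothetical fixations for three observers viewing the same poster.
observers = [density_map(rng.uniform(0, grid[0], 50),
                         rng.uniform(0, grid[1], 50), grid) for _ in range(3)]

# Mean pairwise correlation of density maps = eye-movement synchrony score.
pairwise = [np.corrcoef(a, b)[0, 1] for a, b in combinations(observers, 2)]
print(f"mean inter-observer similarity: {np.mean(pairwise):.2f}")
```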


2021 · Vol. 2
Author(s): Katharina Lingelbach, Alexander M. Dreyer, Isabel Schöllhorn, Michael Bui, Michael Weng, ...

Objective and Background: Decades of research in the field of steady-state visual evoked potentials (SSVEPs) have revealed great potential of rhythmic light stimulation for brain–computer interfaces. Additionally, rhythmic light stimulation provides a non-invasive method for entrainment of oscillatory activity in the brain. Especially favorable are effective protocols that enable non-perceptible rhythmic stimulation and thereby reduce eye fatigue and user discomfort. Here, we investigate effects of (1) perceptible and (2) non-perceptible rhythmic light stimulation, as well as attention-based effects of the stimulation, by asking participants to focus (a) on the stimulation source directly in an overt attention condition or (b) on a cross-hair below the stimulation source in a covert attention condition.
Method: SSVEPs at 10 Hz were evoked with a light-emitting diode (LED) driven by frequency-modulated signals and amplitudes of the current intensity either below or above a previously estimated individual threshold. Furthermore, we explored the effect of attention by asking participants to fixate on the LED directly in the overt attention condition and to attend to it indirectly in the covert attention condition. Using electroencephalography, we analyzed differences between conditions regarding the detection of reliable SSVEPs via the signal-to-noise ratio (SNR) and functional connectivity in occipito-frontal(-central) regions.
Results: We observed SSVEPs at 10 Hz for both the perceptible and the non-perceptible rhythmic light stimulation, not only in the overt but also in the covert attention condition. The SNR and SSVEP amplitudes did not differ between the conditions, and SNR values were above the significance thresholds suggested by previous literature in all but one participant, indicating reliable SSVEP responses. No difference between the conditions was observed in the functional connectivity in occipito-frontal(-central) regions.
Conclusion: The finding of robust SSVEPs even for non-intrusive rhythmic stimulation protocols below an individual perceptibility threshold and without direct fixation on the stimulation source reveals strong potential as a safe stimulation method for oscillatory entrainment in naturalistic applications.
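A widely used definition of the SNR for SSVEP detection is the power at the stimulation frequency divided by the mean power of neighboring frequency bins; the sketch below applies that definition to simulated data and is not the authors' exact pipeline.

```python
# Sketch: SNR of a 10 Hz SSVEP as power at 10 Hz over mean power of nearby bins.
# Simulated EEG segment; parameters chosen for illustration only.
import numpy as np

fs, f_stim, dur = 500.0, 10.0, 8.0
t = np.arange(0, dur, 1.0 / fs)
rng = np.random.default_rng(3)
eeg = 0.3 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0, 1.0, t.size)

# Power spectrum of the segment.
power = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(eeg.size, d=1.0 / fs)

# Target bin vs. surrounding noise bins (excluding the bins adjacent to the target).
stim_bin = int(np.argmin(np.abs(freqs - f_stim)))
noise_bins = np.r_[stim_bin - 6:stim_bin - 1, stim_bin + 2:stim_bin + 7]
snr = power[stim_bin] / power[noise_bins].mean()
print(f"SNR at {f_stim:g} Hz: {snr:.2f}")
```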


2021
Author(s): Candace Elise Peacock, Deborah A Cronin, Taylor R. Hayes, John M. Henderson

How do spatial constraints and meaningful scene regions interact to control overt attention during visual search for objects in real-world scenes? To answer this question, we combined novel surface maps of the likely locations of target objects with maps of the spatial distribution of scene semantic content. The surface maps captured likely target surfaces as continuous probabilities. Meaning was represented by meaning maps highlighting the distribution of semantic content in local scene regions and objects. Attention was indexed by eye movements during search for target objects that varied in the likelihood they would appear on specific surfaces. The interaction between surface maps and meaning maps was analyzed to test whether fixations were directed to meaningful scene regions on target-related surfaces. Overall, meaningful scene regions were more likely to be fixated if they appeared on target-related surfaces than if they appeared on target-unrelated surfaces. These findings suggest that the visual system prioritizes meaningful scene regions on target-related surfaces during visual search in scenes.
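As a toy illustration of cross-referencing fixations against two spatial maps (not the meaning-map methodology itself), one can look up meaning and surface-probability values at each fixated location and compare meaning across target-related and target-unrelated surfaces; all data below are random placeholders.

```python
# Sketch: compare meaning at fixations on target-related vs. target-unrelated surfaces.
# Random maps and fixations; a bookkeeping illustration, not the meaning-map method.
import numpy as np

rng = np.random.default_rng(4)
h, w = 48, 64
meaning_map = rng.random((h, w))   # semantic density per scene patch (placeholder)
surface_map = rng.random((h, w))   # probability the target appears on that surface (placeholder)

# Hypothetical fixation coordinates from a search trial.
fix_rows = rng.integers(0, h, size=200)
fix_cols = rng.integers(0, w, size=200)
meaning_at_fix = meaning_map[fix_rows, fix_cols]
surface_at_fix = surface_map[fix_rows, fix_cols]

# Crude split into fixations on target-related vs. target-unrelated surfaces.
related = surface_at_fix > 0.5
print("mean meaning, target-related surfaces:  ", round(meaning_at_fix[related].mean(), 3))
print("mean meaning, target-unrelated surfaces:", round(meaning_at_fix[~related].mean(), 3))
```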

