Attentional Orienting in Front and Rear Spaces in a Virtual Reality Discrimination Task

Vision ◽  
2022 ◽  
Vol 6 (1) ◽  
pp. 3
Author(s):  
Rébaï Soret ◽  
Pom Charras ◽  
Christophe Hurter ◽  
Vsevolod Peysakhovich

Recent studies on covert attention suggest that visual information presented in front of us is processed differently depending on whether it originates in front space or is a mirror reflection of information located behind us. This difference suggests distinct processes for orienting attention to objects in front of us (front space) or behind us (rear space). In this study, we investigated attentional orienting in front and rear space following visual or auditory endogenous cues. Twenty-one participants performed a modified version of the Posner paradigm in virtual reality during a spaceship discrimination task. An eye tracker integrated into the virtual reality headset was used to ensure that participants did not move their eyes and relied on covert attention. The results show that informative cues produced faster response times than non-informative cues, but no effect on target identification was observed. In addition, response times were faster when the target appeared in front space rather than in rear space. These results are consistent with distinct orienting processes for the front and rear spaces; several explanations are discussed. No effect was found on participants' eye movements, suggesting that they did not use overt attention to improve task performance.
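As an illustration of how the cue-validity effect in such a paradigm is typically quantified (not the authors' actual pipeline; all column names, condition labels, and numbers below are hypothetical), a minimal Python sketch comparing response times across cue types and target locations might look like this:

# Minimal sketch of a cue-validity analysis for a Posner-style cueing task.
# Column names, condition labels, and trial counts are illustrative assumptions.
import numpy as np
import pandas as pd
from scipy import stats

rng = np.random.default_rng(0)
n = 21 * 40  # 21 participants x 40 trials (illustrative)
trials = pd.DataFrame({
    "participant": np.repeat(np.arange(21), 40),
    "cue_type": rng.choice(["informative", "non_informative"], size=n),
    "target_space": rng.choice(["front", "rear"], size=n),
    "rt_ms": rng.normal(600, 80, size=n),
})

# Mean response time per cue type and target location
print(trials.groupby(["cue_type", "target_space"])["rt_ms"].mean())

# Paired comparison across participants: informative vs. non-informative cues
per_subj = trials.pivot_table(index="participant", columns="cue_type", values="rt_ms")
t, p = stats.ttest_rel(per_subj["informative"], per_subj["non_informative"])
print(f"t = {t:.2f}, p = {p:.3f}")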

2004 ◽  
Vol 63 (3) ◽  
pp. 143-149 ◽  
Author(s):  
Fred W. Mast ◽  
Charles M. Oman

The role of top-down processing in the horizontal-vertical line length illusion was examined by means of an ambiguous room with dual visual verticals. In one test condition, subjects were cued to one of the two verticals and instructed to cognitively reassign the apparent vertical to the cued orientation. Once they had mentally adjusted their perception, two lines in a plus-sign configuration appeared and the subjects judged which line was longer. The results showed that a line appeared longer when it was aligned with the vertical currently perceived by the subject. This study demonstrates that top-down processing influences lower-level visual processing mechanisms. In another test condition, in which all perceptual cues were available to the subjects, the influence was even stronger.


2020 ◽  
Author(s):  
B R Geib ◽  
R Cabeza ◽  
M G Woldorff

While it is broadly accepted that attention modulates memory, the contribution of specific rapid attentional processes to successful encoding is largely unknown. To investigate this issue, we leveraged the high temporal resolution of electroencephalographic recordings to directly link a cascade of visuo-attentional neural processes to successful encoding: namely, (1) the N2pc (peaking ~200 ms), which reflects stimulus-specific attentional orienting and allocation, (2) the sustained posterior-contralateral negativity (post-N2pc), which has been associated with sustained visual processing, and (3) the contralateral reduction in oscillatory alpha power (>200 ms), which has also been independently related to attentionally sustained visual processing. Each of these visuo-attentional processes was robustly predictive of successful encoding and, moreover, each enhanced memory independently of the classic, longer-latency, conceptually related difference-due-to-memory (Dm) effect. Early-latency midfrontal theta power also promoted successful encoding, with at least part of this influence being mediated by the later-latency Dm effect. These findings markedly expand current knowledge by helping to elucidate the intimate relationship between attentional modulations of perceptual processing and effective encoding for later memory retrieval.
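For readers unfamiliar with the lateralized alpha measure mentioned above, the following is a minimal sketch, using synthetic data and assumed array shapes, of how a contralateral-minus-ipsilateral alpha-power (8–12 Hz) index can be computed per trial; it is not the authors' analysis pipeline:

# Sketch of one way to quantify a contralateral reduction in alpha power
# (8-12 Hz); array shapes and channel bookkeeping are illustrative assumptions.
import numpy as np
from scipy.signal import welch

def band_power(epochs, sfreq, fmin=8.0, fmax=12.0):
    """epochs: (n_trials, n_samples) single-channel data; returns power per trial."""
    freqs, psd = welch(epochs, fs=sfreq, nperseg=min(256, epochs.shape[-1]), axis=-1)
    band = (freqs >= fmin) & (freqs <= fmax)
    return psd[..., band].mean(axis=-1)

sfreq = 500.0
rng = np.random.default_rng(0)
contra = rng.standard_normal((100, 400))  # channel contralateral to the attended item
ipsi = rng.standard_normal((100, 400))    # ipsilateral channel

# Negative values indicate relatively lower alpha power contralaterally,
# the signature associated with sustained visual processing in the abstract.
lateralization = band_power(contra, sfreq) - band_power(ipsi, sfreq)
print(lateralization.mean())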


Author(s):  
José Manuel Rodríguez-Ferrer

We studied the effects of normal aging on visual attention. A group of 38 healthy elderly people with an average age of 67.8 years and a group of 39 healthy young people with an average age of 19.2 years participated. In a first experiment on visual detection, response times were recorded, with and without covert attention, to the presentation of stimuli (grey circles 0.5° in diameter) appearing at three eccentricities (2.15, 3.83 and 5.53° of visual field) and at three contrast levels (6, 16 and 78%). In a second experiment on visual form discrimination, circles and squares with the same features as in the previous experiment were presented, but in this case subjects were to respond only to the appearance of the circles. In both age groups, covert attention reduced response times. Compared to the young group, the older group achieved better results in some aspects of the attention tests, and their response times were reduced more for stimuli at greater eccentricity. The data suggest an adaptive mechanism in aging whereby visual attention especially favors the perception of stimuli that are more difficult to detect.


Electronics ◽  
2021 ◽  
Vol 10 (9) ◽  
pp. 1051
Author(s):  
Si Jung Kim ◽  
Teemu H. Laine ◽  
Hae Jung Suk

Presence refers to the emotional state of users in which their motivation for thinking and acting arises from the perception of the entities in a virtual world. The immersion level of users can vary when they interact with different media content, which may result in different levels of presence, especially in a virtual reality (VR) environment. This study investigates how user characteristics, such as gender, immersion level, and emotional valence in VR, are related to three elements of presence effects: attention, enjoyment, and memory. A VR story was created and used as an immersive stimulus in an experiment, presented through a head-mounted display (HMD) equipped with an eye tracker that collected the participants' eye gaze data during the experiment. A total of 53 university students (26 females, 27 males), aged 20 to 29 years (mean 23.8), participated in the experiment. A set of pre- and post-questionnaires was used as a subjective measure to support the evidence of relationships among the presence effects and user characteristics. The results showed that user characteristics such as gender, immersion level, and emotional valence affected the level of presence; however, there was no evidence that attention is associated with enjoyment or memory.


1983 ◽  
Vol 27 (5) ◽  
pp. 354-354
Author(s):  
Bruce W. Hamill ◽  
Robert A. Virzi

This investigation addresses the problem of attention in the processing of symbolic information from visual displays. Its scope includes the nature of attentive processes, the structural properties of stimuli that influence visual information processing mechanisms, and the manner in which these factors interact in perception. Our purpose is to determine the effects of configural feature structure on visual information processing. It is known that for stimuli comprising separable features, one can distinguish between conditions in which only one relevant feature differs among stimuli in the array being searched and conditions in which conjunctions of two (or more) features differ. Because the visual process of conjoining separable features is additive, this distinction is reflected in search time as a function of array size: feature conditions yield flat curves associated with parallel search (no increase in search time across array sizes), whereas conjunction conditions yield linearly increasing curves associated with serial search. We studied configural-feature stimuli within this framework to determine the nature of visual processing for such stimuli as a function of their feature structure. Response times of subjects searching for particular targets among structured arrays of distractors were measured in a speeded visual search task. Two different sets of stimulus materials were studied in array sizes of up to 32 stimuli, using both tachistoscope and microcomputer-based CRT presentation for each. Our results with configural stimuli indicate serial search in all conditions, with the slope of the response-time-by-array-size function being steeper for conjunction conditions than for feature conditions. However, for each of the two stimulus sets we studied, one configuration stood apart from the others in its set: it yielded significantly faster response times, and conjunction conditions involving these particular stimuli tended to cluster with the feature conditions rather than with the other conjunction conditions. In addition to these effects of particular targets, context effects also appeared in our results as effects of the various distractor sets used; certain of these context effects appear to be reversible. The effects of distractor sets on target search were studied in considerable detail. We found interesting differences in visual processing between stimuli comprising separable features and those comprising configural features. We were also able to characterize the effects found with configural-feature stimuli as being related to the specific feature structure of the target stimulus in the context of the specific feature structure of the distractor stimuli. These findings have strong implications for the design of symbology that can enhance visual performance in the use of automated displays.
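The parallel-versus-serial distinction described above is conventionally indexed by the slope of the response-time-by-array-size function; the short sketch below, using made-up response times, shows one way to estimate such slopes:

# Sketch: estimate search slopes (ms per item) from response times as a
# function of array size, to separate flat (parallel) from linearly
# increasing (serial) search functions. The numbers below are made up.
import numpy as np

array_sizes = np.array([4, 8, 16, 32])
rt_feature = np.array([520, 525, 523, 528])       # hypothetical feature condition (ms)
rt_conjunction = np.array([560, 660, 880, 1300])  # hypothetical conjunction condition (ms)

for label, rts in [("feature", rt_feature), ("conjunction", rt_conjunction)]:
    slope, intercept = np.polyfit(array_sizes, rts, 1)
    print(f"{label}: {slope:.1f} ms/item, intercept {intercept:.0f} ms")
# A near-zero slope suggests parallel search; a steep slope suggests serial search.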


2021 ◽  
pp. 1-18
Author(s):  
Sicong Liu ◽  
Jillian M. Clements ◽  
Elayna P. Kirsch ◽  
Hrishikesh M. Rao ◽  
David J. Zielinski ◽  
...  

The fusion of immersive virtual reality, kinematic movement tracking, and EEG offers a powerful test bed for naturalistic neuroscience research. Here, we combined these elements to investigate the neuro-behavioral mechanisms underlying precision visual–motor control as 20 participants completed a three-visit, visual–motor, coincidence-anticipation task, modeled after Olympic Trap Shooting and performed in immersive and interactive virtual reality. Analyses of the kinematic metrics demonstrated learning of more efficient movements with significantly faster hand RTs, earlier trigger response times, and higher spatial precision, leading to an average of 13% improvement in shot scores across the visits. As revealed through spectral and time-locked analyses of the EEG beta band (13–30 Hz), power measured prior to target launch and visual-evoked potential amplitudes measured immediately after the target launch correlate with subsequent reactive kinematic performance in the shooting task. Moreover, both launch-locked and shot/feedback-locked visual-evoked potentials became earlier and more negative with practice, pointing to neural mechanisms that may contribute to the development of visual–motor proficiency. Collectively, these findings illustrate EEG and kinematic biomarkers of precision motor control and changes in the neurophysiological substrates that may underlie motor learning.
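As a simplified illustration of relating pre-launch beta-band (13–30 Hz) power to trial-wise kinematic performance (synthetic data and assumed window lengths; not the study's actual pipeline):

# Sketch: relate pre-launch beta-band (13-30 Hz) power to trial-wise hand
# reaction time. Arrays are synthetic placeholders, not the study's data.
import numpy as np
from scipy.signal import welch
from scipy.stats import pearsonr

sfreq = 1000.0
rng = np.random.default_rng(1)
pre_launch_eeg = rng.standard_normal((200, 500))  # 200 trials x 0.5 s pre-launch window
hand_rt = rng.uniform(0.2, 0.5, size=200)         # hypothetical hand reaction times (s)

freqs, psd = welch(pre_launch_eeg, fs=sfreq, nperseg=256, axis=-1)
beta = (freqs >= 13) & (freqs <= 30)
beta_power = psd[:, beta].mean(axis=-1)

r, p = pearsonr(beta_power, hand_rt)
print(f"beta power vs hand RT: r = {r:.2f}, p = {p:.3f}")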


2020 ◽  
Author(s):  
Arkady Zgonnikov ◽  
David Abbink ◽  
Gustav Markkula

Laboratory studies of abstract, highly controlled tasks point towards noisy evidence accumulation as a key mechanism governing decision making. Yet it is unclear whether the cognitive processes implicated in simple, isolated decisions in the lab are as paramount to decisions that are ingrained in more complex behaviors, such as driving. Here we aim to address the gap between modern cognitive models of decision making and studies of naturalistic decision making in drivers, which so far have provided only limited insight into the underlying cognitive processes. We investigate drivers' decision making during unprotected left turns and model the cognitive process driving these decisions. Our model builds on the classical drift-diffusion model and incorporates, first, a drift rate linked to the relevant perceptual quantities dynamically sampled from the environment and, second, collapsing decision boundaries reflecting the dynamic constraints the environment imposes on the decision maker's response. We show that the model explains the observed decision outcomes and response times, as well as substantial individual differences in both. Through cross-validation, we demonstrate that the model not only explains the data but also generalizes to out-of-sample conditions, effectively providing a way to predict human drivers' behavior in real time. Our results reveal the cognitive mechanisms of gap acceptance decisions in human drivers and exemplify how simple cognitive process models can help us understand human behavior in complex real-world tasks.
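A minimal simulation sketch of a drift-diffusion process with linearly collapsing boundaries, of the general kind the abstract describes, is shown below; the parameter values, the linear collapse rule, and the "go"/"wait" labels are illustrative assumptions, not the authors' fitted model:

# Drift-diffusion process with collapsing decision boundaries (illustrative).
import numpy as np

def simulate_trial(drift, boundary0=1.0, collapse_rate=0.3, dt=0.01,
                   noise_sd=1.0, max_t=5.0, rng=None):
    if rng is None:
        rng = np.random.default_rng()
    evidence, t = 0.0, 0.0
    while t < max_t:
        bound = max(boundary0 - collapse_rate * t, 0.05)  # boundary shrinks over time
        evidence += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        t += dt
        if evidence >= bound:
            return "go", t    # e.g., accept the gap and start the turn
        if evidence <= -bound:
            return "wait", t  # e.g., reject the gap
    return "wait", max_t

rng = np.random.default_rng(2)
decisions = [simulate_trial(drift=0.5, rng=rng) for _ in range(1000)]
go_rate = np.mean([d == "go" for d, _ in decisions])
mean_rt = np.mean([t for _, t in decisions])
print(f"P(go) = {go_rate:.2f}, mean RT = {mean_rt:.2f} s")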


2018 ◽  
Vol 8 (2) ◽  
pp. 80-89
Author(s):  
Selene Cansino

The aim of this study was to determine the effects of endogenous and exogenous orienting of attention on episodic memory. Thirty healthy participants performed a cueing attention paradigm during encoding, in which images of common objects were presented either to the left or to the right of the center of the screen. Before the presentation of each image, three types of symbolic cues were displayed to indicate the location in which the stimulus would appear: valid cues to elicit endogenous orientation, invalid cues to prompt exogenous orientation, and neutral or uncued trials. The participants' task was to discriminate whether the images were symmetrical or not while fixating on the center of the screen, to ensure that only covert attention mechanisms were engaged. Covert attention refers to the ability to orient attention by means of central control mechanisms alone, without head and eye movements. Trials with eye movements were excluded after inspection of eye-tracker recordings conducted throughout the task. During retrieval, participants performed a source memory task in which they indicated the location where each image had been presented during encoding. Memory for spatial context was better during endogenous orientation than during exogenous orientation, whereas exogenous orientation was associated with a greater number of missed responses compared to the neutral trials. The formation of episodic memory representations with contextual details benefits from endogenous attention.


2020 ◽  
Author(s):  
Luiza Kirasirova ◽  
Vladimir Bulanov ◽  
Alexei Ossadtchi ◽  
Alexander Kolsanov ◽  
Vasily Pyatin ◽  
...  

A P300 brain-computer interface (BCI) is a paradigm in which text characters are decoded from visual evoked potentials (VEPs). In a popular implementation, called the P300 speller, a subject looks at a display where characters are flashing and selects one character by attending to it. The selection is recognized by the strongest VEP. The speller performs well when cortical responses to target and non-target stimuli are sufficiently different. Although many strategies have been proposed for improving the spelling, a relatively simple one has received insufficient attention in the literature: reduction of the visual field to diminish the contribution from non-target stimuli. Previously, this idea was implemented in a single-stimulus switch that issued an urgent command. To explore this approach further, we ran a pilot experiment in which ten subjects first operated a traditional P300 speller and then wore a binocular aperture that confined their sight to the central visual field. Visual field restriction resulted in a reduction of non-target responses in all subjects. Moreover, in four subjects, target-related VEPs became more distinct. We suggest that this approach could speed up BCI operations and reduce user fatigue. Additionally, instead of wearing an aperture, non-targets could be removed algorithmically or with a hybrid interface that utilizes an eye tracker. We further discuss how a P300 speller could be improved by taking advantage of the different physiological properties of central and peripheral vision. Finally, we suggest that the proposed experimental approach could be used in basic research on the mechanisms of visual processing.
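As a rough illustration of the core selection step in a P300 speller (average the epochs time-locked to each flashed item and choose the item with the strongest response in the P300 window), here is a small sketch with synthetic epochs; the window, sampling rate, and item set are assumptions:

# Selection step of a P300 speller, sketched on synthetic single-channel epochs.
import numpy as np

def select_character(epochs_by_item, sfreq, window=(0.25, 0.45)):
    """epochs_by_item: dict mapping item -> (n_flashes, n_samples) array."""
    start, stop = (int(t * sfreq) for t in window)
    scores = {item: ep.mean(axis=0)[start:stop].mean()
              for item, ep in epochs_by_item.items()}
    return max(scores, key=scores.get)  # item with the strongest average response

sfreq = 250.0
rng = np.random.default_rng(3)
items = {c: rng.standard_normal((15, 200)) for c in "ABCDEF"}
items["C"][:, 62:112] += 2.0  # inject a P300-like deflection for the attended item
print(select_character(items, sfreq))  # expected to print "C"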


2016 ◽  
Vol 4 (2) ◽  
pp. 187-206 ◽  
Author(s):  
Trevor B. Penney ◽  
Xiaoqin Cheng ◽  
Yan Ling Leow ◽  
Audrey Wei Ying Bay ◽  
Esther Wu ◽  
...  

A transient suppression of visual perception during saccades ensures perceptual stability. In two experiments, we examined whether saccades affect time perception of visual and auditory stimuli in the seconds range. Specifically, participants completed a duration reproduction task in which they memorized the duration of a 6 s timing signal during the training phase and later reproduced that duration during the test phase. Four experimental conditions differed in saccade requirements and the presence or absence of a secondary discrimination task during the test phase. For both visual and auditory timing signals, participants reproduced longer durations when the secondary discrimination task required saccades to be made (i.e., overt attention shift) during reproduction as compared to when the discrimination task merely required fixation at screen center. Moreover, greater total saccade duration in a trial resulted in greater time distortion. However, in the visual modality, requiring participants to covertly shift attention (i.e., no saccade) to complete the discrimination task increased reproduced duration as much as making a saccade, whereas in the auditory modality making a saccade increased reproduced duration more than making a covert attention shift. In addition, we examined microsaccades in the conditions that did not require full saccades for both the visual and auditory experiments. Greater total microsaccade duration in a trial resulted in greater time distortion in both modalities. Taken together, the experiments suggest that saccades and microsaccades affect seconds range visual and auditory interval timing via attention and saccadic suppression mechanisms.
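One simple way to quantify the reported relationship between total (micro)saccade duration and time distortion is a per-trial linear regression; the sketch below uses synthetic placeholder data rather than the experiments' measurements:

# Sketch: regress reproduced duration on total saccade (or microsaccade)
# duration per trial. Data here are synthetic placeholders.
import numpy as np
from scipy import stats

rng = np.random.default_rng(4)
total_saccade_dur = rng.uniform(0.0, 0.6, size=120)                   # s per trial
reproduced = 6.0 + 0.8 * total_saccade_dur + rng.normal(0, 0.3, 120)  # reproduced 6 s interval

res = stats.linregress(total_saccade_dur, reproduced)
print(f"slope = {res.slope:.2f} s/s, r = {res.rvalue:.2f}, p = {res.pvalue:.3f}")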

