A BCI-Based Study on the Relationship Between the SSVEP and Retinal Eccentricity in Overt and Covert Attention

2021 ◽  
Vol 15 ◽  
Author(s):  
Yajun Zhou ◽  
Li Hu ◽  
Tianyou Yu ◽  
Yuanqing Li

Covert attention aids us in monitoring the environment and optimizing performance in visual tasks. Past behavioral studies have shown that covert attention can enhance spatial resolution. However, electroencephalography (EEG) activity related to neural processing between central and peripheral vision has not been systematically investigated. Here, we conducted an EEG study with 25 subjects who performed covert attentional tasks at different retinal eccentricities ranging from 0.75° to 13.90°, as well as tasks involving overt attention and no attention. EEG signals were recorded with a single stimulus frequency to evoke steady-state visual evoked potentials (SSVEPs) for attention evaluation. We found that the SSVEP response to the attended location was generally negatively correlated with stimulus eccentricity, whether characterized by Euclidean distance or by horizontal and vertical distance. Moreover, SSVEP characteristics were more pronounced under overt attention than under covert attention. Furthermore, offline classification of overt attention, covert attention, and no attention yielded an average accuracy of 91.42%. This work contributes to our understanding of the SSVEP representation of attention in humans and may also lead to brain-computer interfaces (BCIs) that allow people to communicate with choices simply by shifting their attention to them.
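The core measurement in frequency-tagged SSVEP studies like this one is the spectral amplitude at the stimulus flicker frequency. The sketch below illustrates the idea on synthetic data; the 15 Hz tag, 250 Hz sampling rate, and signal amplitudes are assumptions for the demo, not values from the paper:

```python
import numpy as np

def ssvep_amplitude(eeg, fs, f_stim):
    """Amplitude of the spectral component at the stimulus frequency."""
    spectrum = np.abs(np.fft.rfft(eeg)) / len(eeg)
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    return spectrum[np.argmin(np.abs(freqs - f_stim))]

fs, f_stim, dur = 250, 15.0, 4.0            # assumed sampling rate, tag, duration
t = np.arange(0, dur, 1.0 / fs)
rng = np.random.default_rng(0)

# Simulated single-channel trials: stronger 15 Hz entrainment when attended.
attended = 2.0 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0, 1, t.size)
unattended = 0.5 * np.sin(2 * np.pi * f_stim * t) + rng.normal(0, 1, t.size)

amp_att = ssvep_amplitude(attended, fs, f_stim)
amp_una = ssvep_amplitude(unattended, fs, f_stim)
```

Comparing `amp_att` against `amp_una` across trials is the kind of contrast that underlies the eccentricity and overt-versus-covert analyses described above.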

2019 ◽  
Vol 12 (3) ◽  
Author(s):  
Samuel Tuhkanen ◽  
Jami Pekkanen ◽  
Esko Lehtonen ◽  
Otto Lappi

In complex dynamic tasks such as driving, it is essential to be aware of potentially important targets in peripheral vision. While eye tracking methods in various driving tasks have provided much information about drivers’ gaze strategies, these methods only inform about overt attention and provide limited grounds to assess hypotheses concerning covert attention. We adapted the Posner cue paradigm to a dynamic steering task in a driving simulator. The participants were instructed to report the presence of peripheral targets while their gaze was fixed to the road. We aimed to see whether and how the active steering task and complex visual stimulus might affect directing covert attention to the visual periphery. In a control condition, the detection task was performed without a visual scene and active steering. Detection performance in bends was better in the control task than in the steering task, indicating that active steering and the complex visual scene affected the ability to distribute covert attention. Lower targets were discriminated more slowly than targets at the level of the fixation circle in both conditions. We did not observe higher discriminability for on-road targets. The results may be accounted for by either bottom-up optic flow biasing of attention, or top-down saccade planning.


2015 ◽  
Vol 114 (5) ◽  
pp. 2637-2648 ◽  
Author(s):  
Fabrice Arcizet ◽  
Koorosh Mirpour ◽  
Daniel J. Foster ◽  
Caroline J. Charpentier ◽  
James W. Bisley

When looking around at the world, we can only attend to a limited number of locations. The lateral intraparietal area (LIP) is thought to play a role in guiding both covert attention and eye movements. In this study, we tested the involvement of LIP in both mechanisms with a change detection task. In the task, animals had to indicate whether an element changed during a blank in the trial by making a saccade to it. If no element changed, they had to maintain fixation. We examined how the animal's behavior was biased based on LIP activity prior to the presentation of the stimulus the animal had to respond to. When the activity was high, the animal was more likely to make an eye movement toward the stimulus, even if there was no change; when the activity was low, the animal either had a slower reaction time or maintained fixation, even if a change occurred. We conclude that LIP activity is involved in both covert and overt attention, but when decisions about eye movements are to be made, this role takes precedence over guiding covert attention.


2017 ◽  
Vol 27 (08) ◽  
pp. 1750033 ◽  
Author(s):  
Alborz Rezazadeh Sereshkeh ◽  
Robert Trott ◽  
Aurélien Bricout ◽  
Tom Chau

Brain–computer interfaces (BCIs) for communication can be nonintuitive, often requiring the performance of hand motor imagery or some other conversation-irrelevant task. In this paper, electroencephalography (EEG) was used to develop two intuitive online BCIs based solely on covert speech. The goal of the first BCI was to differentiate between 10[Formula: see text]s of mental repetitions of the word “no” and an equivalent duration of unconstrained rest. The second BCI was designed to discern between 10[Formula: see text]s each of covert repetition of the words “yes” and “no”. Twelve participants used these two BCIs to answer yes or no questions. Each participant completed four sessions, comprising two offline training sessions and two online sessions, one for testing each of the BCIs. With a support vector machine and a combination of spectral and time-frequency features, an average accuracy of [Formula: see text] was reached across participants in the online classification of no versus rest, with 10 out of 12 participants surpassing the chance level (60.0% for [Formula: see text]). The online classification of yes versus no yielded an average accuracy of [Formula: see text], with eight participants exceeding the chance level. Task-specific changes in EEG beta and gamma power in language-related brain areas tended to provide discriminatory information. To our knowledge, this is the first report of online EEG classification of covert speech. Our findings support further study of covert speech as a BCI activation task, potentially leading to the development of more intuitive BCIs for communication.
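The classification pipeline above rests on band-power features (beta and gamma) extracted from EEG windows. A minimal numpy sketch of that feature extraction follows, on synthetic trials; the 40 Hz gamma component, the nearest-centroid decision rule (a stand-in for the paper's support vector machine), and all signal parameters are assumptions for illustration:

```python
import numpy as np

def band_power(x, fs, lo, hi):
    """Mean periodogram power within the [lo, hi] Hz band."""
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    return psd[(freqs >= lo) & (freqs <= hi)].mean()

fs = 250
t = np.arange(0, 2, 1.0 / fs)
rng = np.random.default_rng(1)

def trial(gamma_gain):
    # Synthetic EEG: background noise plus a 40 Hz gamma component whose
    # strength is assumed to differ between "covert speech" and "rest".
    return rng.normal(0, 1, t.size) + gamma_gain * np.sin(2 * np.pi * 40 * t)

speech = [band_power(trial(1.5), fs, 30, 50) for _ in range(20)]
rest = [band_power(trial(0.2), fs, 30, 50) for _ in range(20)]

# Nearest-centroid threshold as a simple stand-in for the SVM classifier.
threshold = (np.mean(speech) + np.mean(rest)) / 2
acc = (np.mean([p > threshold for p in speech]) +
       np.mean([p < threshold for p in rest])) / 2
```

In the actual study, such spectral features were combined with time-frequency features and fed to an SVM; the sketch only shows why gamma-band power can separate the two mental states.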


2018 ◽  
Vol 2018 ◽  
pp. 1-10 ◽  
Author(s):  
Eric Fujiwara ◽  
Carlos Kenichi Suzuki

A low-cost optical fiber force myography sensor for noninvasive hand posture identification is proposed. The transducers comprise 10 mm periodicity silica multimode fiber microbending devices mounted in PVC plates, providing 0.05 N⁻¹ sensitivity over a ~20 N range. Next, the transducers were attached to the user's forearm by means of straps in order to monitor the posterior proximal radial, the anterior medial ulnar, and the posterior distal radial muscles, and the acquired FMG optical signals were correlated to the performed gestures using an artificial neural network classifier with 5 hidden layers of 20 neurons each, trained by backpropagation and followed by a competitive layer. The overall results for 9 postures and 6 subjects indicated a 98.4% sensitivity and 99.7% average accuracy, comparable to electromyographic approaches. Moreover, in contrast to current setups, the proposed methodology allows the identification of poses characterized by different configurations of finger and wrist joint displacements with only 3 transducers and a simple interrogation scheme, making it suitable for further applications in human-computer interfaces.
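The gesture classifier described above maps a few transducer readings to a posture label with a small feedforward network. A minimal numpy sketch follows, using one hidden layer of 20 neurons (the paper uses five such layers) on toy two-class data; the channel means, class labels, and training settings are all assumptions for the demo:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy FMG data: 3 transducer readings per sample, two hypothetical gestures
# ("open hand" vs. "fist") differing in mean channel activation.
n = 100
X0 = rng.normal([0.2, 0.8, 0.3], 0.1, (n, 3))
X1 = rng.normal([0.9, 0.2, 0.7], 0.1, (n, 3))
X = np.vstack([X0, X1])
y = np.hstack([np.zeros(n), np.ones(n)])

# One hidden layer of 20 tanh neurons, logistic output, full-batch
# gradient descent on the cross-entropy loss (plain backpropagation).
W1 = rng.normal(0, 0.5, (3, 20)); b1 = np.zeros(20)
W2 = rng.normal(0, 0.5, 20);      b2 = 0.0
sigmoid = lambda z: 1 / (1 + np.exp(-z))

for _ in range(2000):
    H = np.tanh(X @ W1 + b1)            # hidden activations
    p = sigmoid(H @ W2 + b2)            # class-1 probability
    g = (p - y) / len(y)                # dLoss/dLogit for cross-entropy
    gH = np.outer(g, W2) * (1 - H**2)   # backprop through tanh
    W2 -= H.T @ g;  b2 -= g.sum()
    W1 -= X.T @ gH; b1 -= gH.sum(axis=0)

acc = np.mean((p > 0.5) == y)
```

The competitive layer in the paper plays the role of the argmax over class scores; with 9 postures the output would be a softmax over 9 units rather than a single logistic neuron.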


2019 ◽  
Author(s):  
Peter de Lissa ◽  
Roberto Caldara ◽  
Victoria Nicholls ◽  
Sebastien Miellet

Previous research has shown that visual attention does not always exactly follow gaze direction, leading to the concepts of overt and covert attention. However, it is not yet clear how such covert shifts of visual attention to peripheral regions impact the processing of the targets we directly foveate as they move in our visual field. The current study utilised the co-registration of eye-position and EEG recordings while participants tracked moving targets that were embedded with a 30 Hz frequency tag in a steady-state visually evoked potentials (SSVEP) paradigm. When the task required attention to be divided between the moving target (overt attention) and a peripheral region where a second target might appear (covert attention), the SSVEPs elicited by the tracked target at the 30 Hz frequency band were significantly lower than when participants did not have to covertly monitor for a second target. Our findings suggest that neural responses of overt attention are reduced when attention is divided between covert and overt areas. This neural evidence is in line with theoretical accounts describing attention as a pool of finite resources, such as the perceptual load theory. Altogether, these results have practical implications for many real-world situations where covert shifts of attention may reduce visual processing of objects even when they are directly being tracked with the eyes.


2020 ◽  
Author(s):  
Joseph MacInnes ◽  
Ómar I. Jóhannesson ◽  
Andrey Chetverikov ◽  
Arni Kristjansson

We move our eyes roughly three times every second while searching complex scenes, but covert attention helps to guide where we allocate those overt fixations. Covert attention may be allocated reflexively or voluntarily, and speeds the rate of information processing at the attended location. Reducing access to covert attention hinders performance, but it is not known to what degree the locus of covert attention is tied to the current gaze position. We compared visual search performance in a traditional gaze contingent display with a second task where a similarly sized contingent window is controlled with a mouse allowing a covert aperture to be controlled independently from overt gaze. Larger apertures improved performance for both mouse and gaze contingent trials suggesting that covert attention was beneficial regardless of control type. We also found evidence that participants used the mouse controlled aperture independently of gaze position, suggesting that participants attempted to untether their covert and overt attention when possible. This untethering manipulation, however, resulted in an overall cost to search performance, a result at odds with previous results in a change blindness paradigm. Untethering covert and overt attention may therefore have costs or benefits depending on the task demands in each case.


Author(s):  
Benjamin Wolfe ◽  
Ben D. Sawyer ◽  
Ruth Rosenholtz

Objective: The aim of this study is to describe information acquisition theory, explaining how drivers acquire and represent the information they need.
Background: While questions of what drivers are aware of underlie many questions in driver behavior, existing theories do not directly address how drivers in particular and observers in general acquire visual information. Understanding the mechanisms of information acquisition is necessary to build predictive models of drivers’ representation of the world and can be applied beyond driving to a wide variety of visual tasks.
Method: We describe our theory of information acquisition, looking to questions in driver behavior and results from vision science research that speak to its constituent elements. We focus on the intersection of peripheral vision, visual attention, and eye movement planning and identify how an understanding of these visual mechanisms and processes in the context of information acquisition can inform more complete models of driver knowledge and state.
Results: We set forth our theory of information acquisition, describing the gap in understanding that it fills and how existing questions in this space can be better understood using it.
Conclusion: Information acquisition theory provides a new and powerful way to study, model, and predict what drivers know about the world, reflecting our current understanding of visual mechanisms and enabling new theories, models, and applications.
Application: Using information acquisition theory to understand how drivers acquire, lose, and update their representation of the environment will aid development of driver assistance systems, semiautonomous vehicles, and road safety overall.


2004 ◽  
Vol 91 (6) ◽  
pp. 2590-2597 ◽  
Author(s):  
Dennis T. T. Plachta ◽  
Jiakun Song ◽  
Michele B. Halvorsen ◽  
Arthur N. Popper

Many species of odontocete cetaceans (toothed whales) use high-frequency clicks (60–170 kHz) to identify objects in their environment, including potential prey. Behavioral studies have shown that American shad, Alosa sapidissima, can detect ultrasonic signals similar to those of odontocetes that are potentially their predators. American shad also show strong escape behavior in response to ultrasonic pulses between 70 and 110 kHz and can determine the location of the sound source at least in the horizontal plane. The present study examines physiological aspects of ultrasound detection by American shad and provides the first insights into the neural encoding of ultrasound signals in any nonmammalian vertebrate. The recordings were obtained by penetration through the cerebellar surface. All but two units responded exclusively to ultrasound. Ultrasound-sensitive units did not phase-couple to any stimulus frequency. Some units resembled the response of constant latency neurons found in the ventral nucleus of the lateral lemniscus of bats. We suggest that ultrasonic and sonic signals are processed along different pathways in Alosa. The ultrasonic pathway in Alosa appears to be a feature detector that is likely to be adapted (e.g., frequency, intensity) to odontocete echolocation signals.


Author(s):  
Paula Soriano-Segura ◽  
Eduardo Iáñez ◽  
Mario Ortiz ◽  
Vicente Quiles ◽  
José M. Azorín

Brain–Computer Interfaces (BCIs) are becoming an important technological tool in the rehabilitation of patients with locomotor problems, due to their ability to recover the connection between brain and limbs by promoting neural plasticity. They can be used as assistive devices to improve the mobility of handicapped people. For this reason, current BCIs have to be improved to allow accurate and natural control of external devices. This work proposes a novel methodology for detecting the intention to change direction during gait, based on event-related desynchronization (ERD). Frequency and temporal features of the electroencephalographic (EEG) signals are characterized. Then, a selection of the features and electrodes most influential in differentiating the intention to change direction from normal walking is carried out. The best results are obtained when combining frequency and temporal features, with an average accuracy of [Formula: see text]%, which is promising for application in future BCIs.


2011 ◽  
Vol 23 (5) ◽  
pp. 1148-1159 ◽  
Author(s):  
Zhong-Lin Lu ◽  
Xiangrui Li ◽  
Bosco S. Tjan ◽  
Barbara A. Dosher ◽  
Wilson Chu

On the basis of behavioral findings that spatial attention improves the exclusion of external noise in the target region, we predicted that attending to a spatial region would reduce the impact of external noise on the BOLD response in corresponding cortical areas, seen as reduced BOLD responses in conditions with large amounts of external noise but relatively low signal, and an increased dynamic range of the BOLD response to variations in signal contrast. We found that, in the presence of external noise, covert attention reduced the trial-by-trial BOLD response by 15.5–18.9% in low signal contrast conditions in V1. It also increased the BOLD dynamic range in V1, V2, V3, V3A/B, and V4 by a factor of at least three. Overall, covert attention reduced the impact of external noise by about 73–85% in these early visual areas. It also increased the contrast gain by a factor of 2.6–3.8.

