Visually Evoked Potentials
Recently Published Documents


TOTAL DOCUMENTS: 213 (FIVE YEARS: 33)

H-INDEX: 28 (FIVE YEARS: 2)

2022 ◽  
Author(s):  
Kumari Liza ◽  
Supratim Ray

Steady-state visually evoked potentials (SSVEPs) are widely used to index top-down cognitive processing in human electroencephalogram (EEG) studies. Typically, two stimuli flickering at different temporal frequencies (TFs) are presented, each producing a distinct response in the EEG at its flicker frequency. However, how SSVEP responses in the EEG are modulated purely by sensory interactions with a competing flickering stimulus is not well understood. We have previously shown in local field potentials (LFPs) recorded from awake monkeys that when two overlapping full-screen gratings are counter-phased at different TFs, there is an asymmetric SSVEP response suppression, with greater suppression from lower TFs, which further depends on the relative orientations of the gratings (stronger suppression and asymmetry for parallel than for orthogonal gratings). Here, we first confirmed these effects in both male and female human EEG recordings. Then, we mapped the response suppression of one stimulus (target) by a competing stimulus (mask) over a much wider range of TFs than in the previous study. Surprisingly, we found that the suppression was not stronger at low frequencies in general, but varied systematically with the target TF, indicating local interactions between the two competing stimuli. These results were confirmed in both human EEG and monkey LFP and electrocorticogram (ECoG) data. Our results show that sensory interactions between multiple SSVEPs are more complex than previously shown and are influenced by both local and global factors, underscoring the need to interpret the results of SSVEP-based studies with caution.
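As a rough illustration of the analysis such SSVEP paradigms rely on, the sketch below estimates the EEG response amplitude at a target flicker frequency via the Fourier transform and computes a simple suppression index between target-alone and target-plus-mask conditions. This is not the authors' pipeline; the sampling rate, the Hann window, and the suppression index definition are illustrative assumptions.

```python
import numpy as np

def ssvep_amplitude(eeg_epoch, flicker_hz, fs=250.0):
    """Amplitude of the EEG response at a given flicker frequency.

    eeg_epoch : 1-D array, single-channel EEG segment
    flicker_hz: stimulus flicker (counter-phase) frequency in Hz
    fs        : sampling rate in Hz (assumed value)
    """
    n = len(eeg_epoch)
    spectrum = np.abs(np.fft.rfft(eeg_epoch * np.hanning(n))) / n
    freqs = np.fft.rfftfreq(n, d=1.0 / fs)
    # Take the spectral bin closest to the flicker frequency.
    return spectrum[np.argmin(np.abs(freqs - flicker_hz))]

def suppression_index(target_alone, target_with_mask, target_hz, fs=250.0):
    """Fractional reduction of the target SSVEP when a mask is added.

    0 means no suppression, 1 means the target response is abolished.
    (Illustrative definition; the study may quantify suppression differently.)
    """
    a_alone = ssvep_amplitude(target_alone, target_hz, fs)
    a_mask = ssvep_amplitude(target_with_mask, target_hz, fs)
    return 1.0 - a_mask / a_alone

# Example with synthetic data: a 16 Hz "target" SSVEP whose amplitude
# drops when a competing stimulus is present.
fs = 250.0
t = np.arange(0, 2.0, 1 / fs)
alone = 2.0 * np.sin(2 * np.pi * 16 * t) + np.random.randn(t.size)
masked = 1.2 * np.sin(2 * np.pi * 16 * t) + np.random.randn(t.size)
print(suppression_index(alone, masked, target_hz=16, fs=fs))
```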


2021 ◽  
Vol 11 (24) ◽  
pp. 11882
Author(s):  
Carl Böck ◽  
Lea Meier ◽  
Stephan Kalb ◽  
Milan R. Vosko ◽  
Thomas Tschoellitsch ◽  
...  

Visually evoked potentials (VEPs) are widely used in the diagnosis of different neurological diseases. Interestingly, there is limited research on the impact of the stimulus color on the evoked response. Therefore, in our study we investigated the possibility of automatically classifying the stimulus color. The visual stimuli were red/black and green/black checkerboard patterns with equal light density. Both stimuli were presented in random order to nine subjects while the electroencephalogram was recorded over the occipital lobe. After pre-processing and aligning the evoked potentials, an artificial neural network with one hidden layer was used to investigate the general feasibility of automatically classifying the stimulus color in three different settings: color classification with individually trained models, color classification with a common model, and color classification for each individual volunteer with a model trained on the data of the remaining subjects. With an average accuracy (ACC) of 0.83, the best results were achieved with the individually trained models. The second (mean ACC = 0.76) and third settings (mean ACC = 0.71) also indicated reasonable predictive accuracy across all subjects. Consequently, machine learning tools are able to classify stimulus colors based on VEPs. Although further studies are needed to improve the classification performance of our approach, this opens new fields of application for VEPs.
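The third, cross-subject setting described above (train on all but one volunteer, test on the held-out one) can be approximated with standard tools; a minimal sketch is shown below using a single-hidden-layer network. The feature layout, hidden-layer size, and preprocessing are assumptions for illustration, not the authors' exact configuration.

```python
import numpy as np
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cross_subject_accuracy(X, y, subject_ids):
    """Leave-one-subject-out evaluation of stimulus-colour classification.

    X           : (n_trials, n_features) aligned VEP features per trial
    y           : (n_trials,) colour labels (e.g. 0 = red/black, 1 = green/black)
    subject_ids : (n_trials,) integer ID of the volunteer for each trial
    """
    accuracies = []
    for subject in np.unique(subject_ids):
        train = subject_ids != subject
        test = subject_ids == subject
        # One hidden layer, as in the study; its size (50) is an assumption.
        model = make_pipeline(
            StandardScaler(),
            MLPClassifier(hidden_layer_sizes=(50,), max_iter=2000, random_state=0),
        )
        model.fit(X[train], y[train])
        accuracies.append(model.score(X[test], y[test]))
    return float(np.mean(accuracies))
```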


2021 ◽  
Vol 10 (1) ◽  
pp. 33
Author(s):  
Alessandro Cultrera ◽  
Pasquale Arpaia ◽  
Luca Callegaro ◽  
Antonio Esposito ◽  
Massimo Ortolano

Off-the-shelf consumer-grade smart glasses are being increasingly used in extended reality and brain–computer interface applications that are based on the detection of visually evoked potentials from the user’s brain. The displays of these devices can be based on different technologies, which may affect the nature of the visual stimulus received by the user. This aspect has a substantial impact on applications based on wearable sensors and devices. We measured the optical output of three models of smart glasses with different display technologies using a photo-transducer, in order to gain insight into their suitability for brain–computer interface applications. The results suggest that the choice of a particular model of smart glasses may strongly depend on the specific application requirements.


2021 ◽  
Author(s):  
Henrique L. V. Giuliani ◽  
Patrick O. de Paula ◽  
Diogo C. Soriano ◽  
Ricardo Suyama ◽  
Denis G. Fantinato

Different approaches have been investigated to implement effective brain–computer interfaces (BCIs), translating brain activation patterns into commands for external devices. BCIs exploiting steady-state visually evoked potentials (SSVEPs) usually achieve relatively high accuracy with 2–3 second sample windows, but performance degrades for smaller windows. We therefore investigate the use of an ensemble method, the AdaBoost algorithm, combining two different structures, a logistic regression classifier and a multilayer perceptron, whose diversity is expected to bring relevant information for more accurate classification. Simulation results indicate that the proposed method can improve performance for smaller windows, making it a promising alternative for reducing model variance.
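A minimal sketch of boosting over two heterogeneous base learners is given below. It is not the authors' implementation: scikit-learn's AdaBoostClassifier expects base estimators that accept sample weights, which MLPClassifier does not, so this SAMME-style loop emulates weighting by resampling and simply alternates between the two learner types each round. Hyperparameters and function names are assumptions.

```python
import numpy as np
from sklearn.base import clone
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

def boost_heterogeneous(X, y, n_rounds=10, random_state=0):
    """SAMME-style boosting that alternates between two base learners.

    Returns a list of (alpha, fitted_model) pairs; class labels are assumed
    to be consecutive integers 0..K-1.
    """
    rng = np.random.default_rng(random_state)
    templates = [
        LogisticRegression(max_iter=1000),
        MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=random_state),
    ]
    n, k = len(y), len(np.unique(y))
    w = np.full(n, 1.0 / n)
    ensemble = []
    for m in range(n_rounds):
        # MLPClassifier has no sample_weight support, so emulate weighting
        # by resampling the training set according to the current weights.
        idx = rng.choice(n, size=n, replace=True, p=w)
        model = clone(templates[m % 2]).fit(X[idx], y[idx])
        miss = model.predict(X) != y
        err = np.clip(np.sum(w * miss), 1e-10, 1 - 1e-10)
        if err >= 1.0 - 1.0 / k:   # no better than chance: skip this round
            continue
        alpha = np.log((1.0 - err) / err) + np.log(k - 1.0)
        w *= np.exp(alpha * miss)
        w /= w.sum()
        ensemble.append((alpha, model))
    return ensemble

def predict_ensemble(ensemble, X, n_classes):
    """Weighted vote over the boosted base learners."""
    votes = np.zeros((len(X), n_classes))
    for alpha, model in ensemble:
        votes[np.arange(len(X)), model.predict(X)] += alpha
    return votes.argmax(axis=1)
```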


2021 ◽  
Vol 22 (13) ◽  
pp. 6732
Author(s):  
Kitako Tabata ◽  
Eriko Sugano ◽  
Akito Hatakeyama ◽  
Yoshito Watanabe ◽  
Tomoya Suzuki ◽  
...  

Continuous light exposure induces the death of photoreceptor cells. However, it is unclear whether light damage is also induced in retinal ganglion cells rendered photosensitive by transduction of optogenetic genes. In this study, we evaluated the phototoxicity of continuous light exposure on retinal ganglion cells after transduction of the optogenetic gene mVChR1 using an adeno-associated virus vector. Rats were exposed to continuous light for a week, and visually evoked potentials (VEPs) were recorded. The intensity of the continuous light (500, 1000, 3000, and 5000 lx) was increased stepwise after each VEP recording. After the final VEP recording, retinal ganglion cells (RGCs) were retrogradely labeled with the fluorescent tracer FluoroGold, and the number of RGCs was counted under a fluorescence microscope. There was no significant reduction in VEP amplitudes or in the number of RGCs after exposure at any light intensity. These results indicate that RGCs rendered photosensitive by transduction of optogenetic genes did not suffer phototoxicity from continuous light exposure.


Electronics ◽  
2021 ◽  
Vol 10 (10) ◽  
pp. 1158
Author(s):  
Seung-Min Park ◽  
Hong-Gi Yeom ◽  
Kwee-Bo Sim

The brain–computer interface (BCI) is a promising technology with which a user controls a robot or computer by thought alone, without movement. There are several underlying principles for implementing BCIs, such as sensorimotor rhythms, P300, steady-state visually evoked potentials, and directional tuning. Generally, different principles are applied depending on the application, because the strengths and weaknesses vary across BCI methods. Therefore, a BCI should be able to predict the user's state in order to apply suitable principles to the system. This study measured electroencephalography signals in four states (resting, speech imagery, leg-motor imagery, and hand-motor imagery) from 10 healthy subjects. Mutual information between the 64 channels was calculated as a measure of brain connectivity. We used a convolutional neural network to predict the user's state, with brain connectivity as the network input. We applied five-fold cross-validation to evaluate the proposed method. Mean accuracy for user-state classification was 88.25 ± 2.34%. This implies that the system can switch the BCI principle using brain connectivity, so a BCI user can control various applications according to their intentions.
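The sketch below illustrates the overall pipeline described above: a pairwise mutual-information matrix over 64 channels is treated as a connectivity "image" and fed to a small convolutional network that outputs logits for the four user states. The histogram-based MI estimator, bin count, and network architecture are assumptions for illustration and do not reproduce the authors' model.

```python
import numpy as np
import torch
import torch.nn as nn

def mutual_information(x, y, bins=16):
    """Histogram-based mutual information (in nats) between two channels."""
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nz = p_xy > 0
    return float(np.sum(p_xy[nz] * np.log(p_xy[nz] / (p_x @ p_y)[nz])))

def connectivity_matrix(eeg, bins=16):
    """Symmetric MI matrix for an (n_channels, n_samples) EEG trial."""
    n_ch = eeg.shape[0]
    mi = np.zeros((n_ch, n_ch))
    for i in range(n_ch):
        for j in range(i + 1, n_ch):
            mi[i, j] = mi[j, i] = mutual_information(eeg[i], eeg[j], bins)
    return mi

class ConnectivityCNN(nn.Module):
    """Small CNN mapping a 64x64 connectivity 'image' to 4 user states."""
    def __init__(self, n_states=4):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(8, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(16 * 16 * 16, n_states)

    def forward(self, x):              # x: (batch, 1, 64, 64)
        return self.classifier(self.features(x).flatten(1))

# Example: one synthetic 64-channel trial -> connectivity -> state logits.
trial = np.random.randn(64, 1000)
conn = torch.tensor(connectivity_matrix(trial), dtype=torch.float32)
logits = ConnectivityCNN()(conn[None, None])   # shape (1, 4)
```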


Author(s):  
Peter S. B. Finnie ◽  
Robert W. Komorowski ◽  
Mark F. Bear

The hippocampus and neocortex are theorized to be crucial partners in the formation of long-term memories. Here, we assess hippocampal involvement in two related forms of experience-dependent plasticity in the primary visual cortex (V1) of mice. Like control animals, those with hippocampal lesions exhibit potentiation of visually evoked potentials following passive daily exposure to a phase-reversing oriented grating stimulus, which is accompanied by long-term habituation of a reflexive behavioral response. Thus, low-level recognition memory is formed independently of the hippocampus. However, response potentiation resulting from daily exposure to a fixed sequence of four oriented gratings is severely impaired in mice with hippocampal damage. A feature of sequence plasticity in V1 of controls, but absent in lesioned mice, is the generation of predictive responses to an anticipated stimulus element when it is withheld or delayed. Thus, the hippocampus is involved in encoding temporally structured experience, even in primary sensory cortex.


2021 ◽  
pp. 1-30
Author(s):  
Kirsten C. S. Adam ◽  
Lillian Chang ◽  
Nicole Rangan ◽  
John T. Serences

Feature-based attention is the ability to selectively attend to a particular feature (e.g., attending to red but not green items while looking for the ketchup bottle in your refrigerator), and steady-state visually evoked potentials (SSVEPs) measured from the human EEG signal have been used to track the neural deployment of feature-based attention. Although many published studies suggest that we can use trial-by-trial cues to enhance relevant feature information (i.e., a greater SSVEP response to the cued color), there is ongoing debate about whether participants may likewise use trial-by-trial cues to voluntarily ignore a particular feature. Here, we report the results of a preregistered study in which participants were cued either to attend or to ignore a color. Counter to prior work, we found no attention-related modulation of the SSVEP response in either cue condition. However, positive control analyses revealed that participants paid some degree of attention to the cued color (i.e., we observed a greater P300 component to targets in the attended vs. the unattended color). In light of these unexpected null results, we conducted a focused review of methodological considerations for studies of feature-based attention using SSVEPs. In the review, we quantify potentially important stimulus parameters that have been used in the past (e.g., stimulation frequency, trial counts), and we discuss the potential importance of these and other task factors (e.g., feature-based priming) for SSVEP studies.
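A positive control of the kind mentioned above (a larger P300 to targets in the attended than the unattended color) can be checked with a simple ERP comparison; a minimal sketch is shown below. The sampling rate, epoch limits, baseline, P300 window, and channel choice are assumptions, not the study's preregistered parameters.

```python
import numpy as np

def p300_amplitude(eeg, onsets, fs=500.0, window=(0.3, 0.6), baseline=(-0.2, 0.0)):
    """Mean ERP amplitude in a P300 window for one EEG channel.

    eeg     : 1-D array, continuous single-channel EEG
    onsets  : sample indices of target onsets
    fs      : sampling rate in Hz (assumed)
    window  : P300 measurement window in seconds relative to onset
    baseline: baseline-correction window in seconds relative to onset
    """
    pre, post = int(0.2 * fs), int(0.8 * fs)
    epochs = np.stack([eeg[o - pre:o + post] for o in onsets
                       if o - pre >= 0 and o + post <= len(eeg)])
    t = np.arange(-pre, post) / fs
    base = epochs[:, (t >= baseline[0]) & (t < baseline[1])].mean(axis=1, keepdims=True)
    erp = (epochs - base).mean(axis=0)
    return erp[(t >= window[0]) & (t < window[1])].mean()

# Illustrative comparison: P300 to targets in the cued (attended) colour
# versus targets in the other (unattended) colour at a posterior channel.
# p300_attended = p300_amplitude(posterior_channel, attended_target_onsets)
# p300_unattended = p300_amplitude(posterior_channel, unattended_target_onsets)
```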

