Simultaneous Decoding of Eccentricity and Direction Information for a Single-Flicker SSVEP BCI

Electronics ◽  
2019 ◽  
Vol 8 (12) ◽  
pp. 1554
Author(s):  
Jingjing Chen ◽  
Alexander Maye ◽  
Andreas K. Engel ◽  
Yijun Wang ◽  
Xiaorong Gao ◽  
...  

The feasibility of a steady-state visual evoked potential (SSVEP) brain–computer interface (BCI) with a single-flicker stimulus for multiple-target decoding has been demonstrated in a number of recent studies. These single-flicker BCIs have mainly employed direction information to encode the targets, i.e., different targets are placed in different spatial directions relative to the flicker stimulus. The present study explored whether visual eccentricity information can also be used to encode targets, with the aim of increasing the number of targets in single-flicker BCIs. A total of 16 targets were encoded, placed at eight spatial directions and two eccentricities (2.5° and 5°) relative to a 12 Hz flicker stimulus. Whereas distinct SSVEP topographies were elicited when participants gazed at targets in different directions, targets at different eccentricities were mainly distinguished by different signal-to-noise ratios (SNRs). Using a canonical correlation analysis (CCA)-based classification algorithm, simultaneous decoding of both direction and eccentricity information was achieved, with an offline 16-class accuracy of 66.8 ± 16.4% averaged over 12 participants and a best individual accuracy of 90.0%. Our results demonstrate a single-flicker BCI with a substantially increased number of targets, a step towards practical applications.
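Because the paradigm uses a single 12 Hz flicker, frequency alone cannot separate the 16 classes; a minimal sketch of a template-based CCA classifier is shown below, assuming each class is represented by a class-average training template. The function name, array shapes, and the template-averaging scheme are illustrative assumptions, not the paper's exact algorithm.

# A minimal sketch of template-based CCA classification for a single-flicker
# SSVEP paradigm. Assumption: templates are class-average trials from a
# training session; the paper's exact algorithm may differ.
import numpy as np
from sklearn.cross_decomposition import CCA

def classify_trial(trial, templates):
    """Pick the target whose template correlates best with the trial.

    trial:     (n_samples, n_channels) EEG segment
    templates: list of (n_samples, n_channels) class-average SSVEPs,
               one per target (e.g., 8 directions x 2 eccentricities = 16)
    """
    scores = []
    cca = CCA(n_components=1)
    for tmpl in templates:
        # Project trial and template onto maximally correlated components
        cca.fit(trial, tmpl)
        u, v = cca.transform(trial, tmpl)
        # Canonical correlation serves as the similarity score
        r = np.corrcoef(u[:, 0], v[:, 0])[0, 1]
        scores.append(r)
    return int(np.argmax(scores))

Since direction is carried by the SSVEP topography and eccentricity mainly by SNR, matching against full multichannel templates lets one score capture both factors at once.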


2021 ◽  
pp. 1-13
Author(s):  
Hamidreza Maymandi ◽  
Jorge Luis Perez Benitez ◽  
F. Gallegos-Funes ◽  
J. A. Perez Benitez


Author(s):  
Yao Li ◽  
T. Kesavadas

One of the expectations for the next generation of industrial robots is that they work collaboratively with humans as robotic co-workers. Robotic co-workers must be able to communicate with human collaborators intelligently and seamlessly. However, industrial robots in widespread use today are poor at understanding human intentions and decisions. We demonstrate a steady-state visual evoked potential (SSVEP)-based brain–computer interface (BCI) that can deliver human cognition directly to robots through a headset. The BCI is applied to a part-picking robot and sends decisions to the robot while operators visually inspect the quality of parts. The BCI is verified through a human-subject study. In the study, a camera beside the conveyor takes a photo of each part and presents it to the operator automatically. When the operator looks at the photo, electroencephalography (EEG) signals are collected through the BCI. The inspection decision is extracted from the SSVEPs in the EEG. When the operator identifies a defective part, the decision is communicated to the robot, which locates the defective part through a second camera and removes it from the conveyor. The robot can grasp various parts using our grasp planning algorithm (2FRG). We have developed a CNN-CCA model for SSVEP extraction, trained on a dataset collected in our offline experiment. Our approach outperforms the existing CCA, CCA-SVM, and PSD-SVM models. The CNN-CCA model is further validated in an online experiment, achieving 93% accuracy in identifying and removing defective parts.
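The standard CCA baseline the abstract compares against correlates the EEG with sine/cosine reference signals at each candidate flicker frequency. A minimal sketch follows; the frequencies, sampling rate, harmonic count, and the two-command usage at the end are illustrative assumptions, not values from the paper.

# A minimal sketch of the classic sine/cosine-reference CCA detector, the
# baseline the CNN-CCA model is compared against. Parameter values below
# are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_score(eeg, freq, fs, n_harmonics=3):
    """Canonical correlation between EEG and sin/cos references at freq.

    eeg: (n_samples, n_channels) EEG segment; fs: sampling rate in Hz.
    """
    t = np.arange(eeg.shape[0]) / fs
    # Reference matrix: sine and cosine at the fundamental and harmonics
    ref = np.column_stack(
        [f(2 * np.pi * (h + 1) * freq * t)
         for h in range(n_harmonics) for f in (np.sin, np.cos)]
    )
    cca = CCA(n_components=1)
    cca.fit(eeg, ref)
    u, v = cca.transform(eeg, ref)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

# Hypothetical usage: flag a part as defective if the correlation at the
# flicker frequency tied to the "reject" command exceeds that of "accept".
# eeg = ...  # segment recorded while the operator views the part photo
# decision = "reject" if cca_score(eeg, 15.0, 250) > cca_score(eeg, 12.0, 250) else "accept"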



2021 ◽  
Vol 21 (2) ◽  
pp. 1124-1138
Author(s):  
Yue Zhang ◽  
Shane Q. Xie ◽  
He Wang ◽  
Zhiqiang Zhang

