A study of the existing problems of estimating the information transfer rate in online brain–computer interfaces

2013 · Vol 10 (2) · pp. 026014
Author(s): Peng Yuan, Xiaorong Gao, Brendan Allison, Yijun Wang, Guangyu Bin, ...

Author(s): Kun Chen, Fei Xu, Quan Liu, Haojie Liu, Yang Zhang, ...

Among different brain–computer interfaces (BCIs), the steady-state visual evoked potential (SSVEP)-based BCI has been widely used because of its higher signal-to-noise ratio (SNR) and greater information transfer rate (ITR). In this paper, a method based on multiple signal classification (MUSIC) was proposed for multidimensional SSVEP signal processing. Both the fundamental and second harmonics of SSVEPs were employed for the final target recognition. The experimental results showed that the method reduces recognition time. The relation between the duty cycle of the stimulus signals and the amplitude of the second harmonics of SSVEPs was also examined experimentally. To verify the feasibility of the proposed methods, a two-layer spelling system was designed. Different subjects, including some who had never used a BCI before, used the system fluently in an unshielded environment.
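The abstract gives no implementation details, so the following is only a hedged sketch of a MUSIC-style frequency scorer for a single EEG channel. The embedding dimension, subspace size, candidate frequencies, and the synthetic trial are all assumptions, not the authors' settings; the only element taken from the abstract is that each stimulus frequency is scored at both its fundamental and second harmonic.

```python
import numpy as np

def music_pseudospectrum(x, fs, freqs, m=40, p=4):
    """MUSIC pseudospectrum of a 1-D signal x at the given frequencies.
    m: time-delay embedding dimension; p: assumed signal-subspace
    dimension (two eigenvectors per real sinusoid)."""
    n_rows = len(x) - m + 1
    X = np.stack([x[i:i + m] for i in range(n_rows)])   # trajectory matrix
    R = X.T @ X / n_rows                                # m x m autocorrelation
    _, V = np.linalg.eigh(R)                            # eigenvalues ascending
    En = V[:, :m - p]                                   # noise subspace
    n = np.arange(m)
    a = np.exp(2j * np.pi * np.asarray(freqs)[:, None] * n / fs)
    # A small projection onto the noise subspace -> a large pseudospectrum.
    return 1.0 / np.linalg.norm(a.conj() @ En, axis=1) ** 2

def recognize_target(x, fs, stim_freqs):
    """Score each stimulus frequency at its fundamental and second
    harmonic, as in the abstract, and pick the best-scoring one."""
    scores = (music_pseudospectrum(x, fs, stim_freqs)
              + music_pseudospectrum(x, fs, [2 * f for f in stim_freqs]))
    return stim_freqs[int(np.argmax(scores))]

# Synthetic SSVEP-like trial: 10 Hz fundamental plus a weaker 2nd harmonic.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(2 * fs) / fs
x = (np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 20 * t)
     + 0.5 * rng.standard_normal(t.size))
print(recognize_target(x, fs, [8.0, 10.0, 12.0, 15.0]))  # -> 10.0
```

In a multichannel setting the same idea is usually applied after spatial combination of the electrodes; this sketch keeps a single channel for clarity.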


2020 · Vol 10 (10) · pp. 686
Author(s): Piotr Stawicki, Ivan Volosyak

Motion-based visual evoked potentials (mVEPs) are an emerging trend in the field of steady-state visual evoked potential (SSVEP)-based brain–computer interfaces (BCIs). In this paper, we introduce different movement-based stimulus patterns (steady-state motion visual evoked potentials, SSMVEPs) that do not employ the typical flickering. The tested movement patterns for the visual stimuli included a pendulum-like movement, a flipping illusion, a checkerboard pulsation, checkerboard inverse arc pulsations, and reverse arc rotations, all with a spelling task consisting of 18 trials. In an online experiment with nine participants, the movement-based BCI systems were evaluated with a four-target BCI speller, in which each letter may be selected in three steps (three trials). For classification, the minimum energy combination and a filter bank approach were used. The stimulation frequencies were 7.06 Hz, 7.50 Hz, 8.00 Hz, and 8.57 Hz, yielding an average accuracy between 97.22% and 100% and an average information transfer rate (ITR) between 15.42 bits/min and 33.92 bits/min. All participants successfully used the SSMVEP-based speller with all types of stimulation pattern. The most successful stimulus was SSMVEP1 (the pendulum-like movement), which on average reached 100% accuracy and an ITR of 33.92 bits/min.
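The ITR figures quoted throughout these abstracts typically follow Wolpaw's definition, which combines the number of targets, the selection accuracy, and the time per selection. A minimal sketch of that formula follows; the 3.54 s selection time in the example is a back-calculation from the numbers above, not a value taken from the paper.

```python
import math

def wolpaw_itr(n_targets, accuracy, trial_s):
    """Wolpaw information transfer rate in bits/min.
    Assumes accuracy above chance; accuracy == 1.0 is the limiting
    case of log2(n_targets) bits per selection."""
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / trial_s

# A perfect selection among 4 targets carries log2(4) = 2 bits, so the
# 33.92 bits/min reported above implies roughly one selection every 3.54 s.
print(round(wolpaw_itr(4, 1.0, 3.54), 1))  # -> 33.9
```

Note that this formula assumes all targets are equally likely and errors are uniformly distributed over the wrong targets, which is why reported ITRs are best read as upper-bound estimates.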


Sensors · 2021 · Vol 21 (4) · pp. 1256
Author(s): Fangkun Zhu, Lu Jiang, Guoya Dong, Xiaorong Gao, Yijun Wang

Brain–computer interfaces (BCIs) provide humans with a new communication channel by encoding and decoding brain activities. The steady-state visual evoked potential (SSVEP)-based BCI stands out among BCI paradigms because of its non-invasiveness, minimal user training, and high information transfer rate (ITR). However, the conductive gel and bulky hardware required by traditional electroencephalogram (EEG) recording hinder the application of SSVEP-based BCIs. Moreover, continuous visual stimulation during prolonged use leads to visual fatigue, posing a further challenge to practical application. This study provides an open dataset collected with a wearable SSVEP-based BCI system and comprehensively compares SSVEP data obtained with wet and dry electrodes. The dataset consists of 8-channel EEG data from 102 healthy subjects performing a 12-target SSVEP-based BCI task. For each subject, 10 consecutive blocks were recorded with wet and dry electrodes, respectively. The dataset can be used to investigate the performance of wet and dry electrodes in SSVEP-based BCIs, and it provides sufficient data for developing new target identification algorithms to improve the performance of wearable SSVEP-based BCIs.
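A common way to compare wet- and dry-electrode recordings in a dataset like this is the narrow-band SNR at the stimulus frequency. The sketch below shows one plausible definition (power in the stimulus-frequency FFT bin over the mean power of neighbouring bins) on a synthetic trial; the trial length, sampling rate, and 12 Hz stimulus are illustrative assumptions, not the dataset's actual specification.

```python
import numpy as np

def ssvep_snr_db(x, fs, f0, n_side=5):
    """Narrow-band SNR in dB: power in the FFT bin nearest f0 over the
    mean power of n_side bins on each side (assumes f0 falls on a bin)."""
    spec = np.abs(np.fft.rfft(x)) ** 2
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - f0)))
    noise = np.r_[spec[k - n_side:k], spec[k + 1:k + 1 + n_side]]
    return 10.0 * np.log10(spec[k] / noise.mean())

# Synthetic single-channel trial: a 12 Hz SSVEP in white noise.
rng = np.random.default_rng(0)
fs = 250
t = np.arange(4 * fs) / fs                 # 4 s -> 0.25 Hz bin spacing
x = np.sin(2 * np.pi * 12 * t) + rng.standard_normal(t.size)
print(ssvep_snr_db(x, fs, 12.0) > 10.0)    # clear peak at the stimulus
```

Applied per channel and per block, a metric like this would let one contrast wet against dry electrodes subject by subject.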


Technologies · 2020 · Vol 8 (4) · pp. 63
Author(s): Surej Mouli, Ramaswamy Palaniappan, Emmanuel Molefi, Ian McLoughlin

Steady-state visual evoked potential (SSVEP) methods for brain–computer interfaces (BCIs) are popular because of their higher information transfer rate and easier setup with minimal training compared to alternative methods. With a precisely generated visual stimulus frequency, it is possible to translate brain signals into external actions or signals. Traditionally, SSVEP data are collected from the occipital region using electrodes with or without gel, normally mounted on a head cap. In this experimental study, we develop an in-ear electrode to collect SSVEP data for four different flicker frequencies and compare it against occipital scalp electrode data. Data from five participants demonstrate the feasibility of in-ear electrode-based SSVEP recording, significantly enhancing the practicability of wearable BCI applications.


2018 · Vol 28 (10) · pp. 1850034
Author(s): Wei Li, Mengfan Li, Huihui Zhou, Genshe Chen, Jing Jin, ...

Increasing the command generation rate of an event-related potential-based brain–robot system is challenging because of the limited information transfer rate of a brain–computer interface system. To improve the rate, we propose a dual-stimuli approach that flashes one robot image while simultaneously scanning another. The two kinds of event-related potentials evoked in this dual-stimuli condition, N200 and P300, are decoded by a convolutional neural network. Compared with the traditional approaches, the proposed approach significantly improves the online information transfer rate from 23.0 or 17.8 bits/min to 39.1 bits/min at an accuracy of 91.7%. These results suggest that combining multiple types of stimuli to evoke distinguishable ERPs is a promising direction for improving the command generation rate of brain–computer interfaces.


2013
Author(s): Zacharias Vamvakousis, Rafael Ramirez

P300-based brain–computer interfaces (BCIs) are especially useful for people with conditions that prevent them from communicating normally (e.g. brain or spinal cord injury). However, most existing P300-based BCI systems use visual stimulation, which may not be suitable for patients with sight deterioration (e.g. patients suffering from amyotrophic lateral sclerosis). Moreover, P300-based BCI systems rely on expensive equipment, which greatly limits their use outside the clinical environment. We therefore propose a multi-class BCI system based solely on auditory stimuli that makes use of low-cost EEG technology. We explored different combinations of timbre, pitch, and spatial auditory stimuli (TimPiSp: timbre-pitch-spatial, TimSp: timbre-spatial, and Timb: timbre-only) and three inter-stimulus intervals (150 ms, 175 ms, and 300 ms), and evaluated our system by conducting an oddball task with 7 healthy subjects. This is the first study in which these three auditory cues are compared. After averaging several repetitions at the 175 ms inter-stimulus interval, we obtained average selection accuracies of 97.14%, 91.43%, and 88.57% for the TimPiSp, TimSp, and Timb modalities, respectively. The best subject's accuracy was 100% in all modalities and inter-stimulus intervals. The average information transfer rate for the 150 ms inter-stimulus interval in the TimPiSp modality was 14.85 bits/min, and the best subject's information transfer rate was 39.96 bits/min in the 175 ms timbre condition. Based on the TimPiSp modality, an auditory P300 speller was implemented and evaluated by asking users to type a 12-character phrase. Six of the 7 users completed the task. The average spelling speed was 0.56 chars/min, and the best subject's performance was 0.84 chars/min. These results show that the proposed auditory BCI works well with healthy subjects and may form the basis for more practical and affordable auditory P300-based BCI systems.

