Evaluate effects of multiple users in collaborative Brain-Computer Interfaces: A SSVEP study

2021
Author(s):
Tien-Thong Nguyen Do
Thanh Tung Huynh

Abstract: This study investigates the effects of collaboration on task performance in a brain-computer interface (BCI) based on steady-state visually evoked potentials (SSVEP). Navigation tasks were performed in a virtual environment under two conditions: individual performance and team performance. The results showed that the average task completion time in the collaborative condition decreased by 6% compared with individual performance, which is in line with other studies on collaborative BCI (cBCI) and joint decision-making. Our work is a step toward BCI studies that include multi-user interaction.
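The abstract does not describe how the two users' inputs were combined; a common cBCI strategy is to average per-target classifier scores across users before selecting a command. The sketch below is only an illustration of that idea, with all frequencies and scores hypothetical:

```python
import numpy as np

def fuse_ssvep_scores(scores_user_a, scores_user_b, freqs):
    """Average per-frequency SSVEP detection scores from two users and
    return the stimulus frequency with the highest combined evidence."""
    combined = (np.asarray(scores_user_a) + np.asarray(scores_user_b)) / 2.0
    return freqs[int(np.argmax(combined))]

freqs = [8.0, 10.0, 12.0, 15.0]        # stimulation frequencies (Hz), hypothetical
user_a = [0.21, 0.48, 0.30, 0.25]      # per-frequency scores for user A
user_b = [0.19, 0.43, 0.36, 0.22]      # per-frequency scores for user B
print(fuse_ssvep_scores(user_a, user_b, freqs))  # -> 10.0
```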

2013
Vol 4 (1)
pp. 1
Author(s):
Alessandro Luiz Stamatto Ferreira
Leonardo Cunha de Miranda
Erica Esteves Cunha de Miranda
Sarah Gomes Sakamoto

A Brain-Computer Interface (BCI) enables users to interact with a computer through brain signals alone, without the need to use muscles. BCI is an emerging but still relatively immature research area. It is nevertheless important to reflect on the aspects of Human-Computer Interaction (HCI) related to BCIs, considering that BCIs will be part of interactive systems in the near future. BCIs must serve not only users with disabilities but also healthy ones, improving interaction for all end-users. Virtual Reality (VR) is also an important part of interactive systems and, combined with BCI, could greatly enhance user interaction, improving the user experience by using brain signals as input and immersive environments as output. This paper addresses only noninvasive BCIs, since this form of signal acquisition is the only one that poses no risk to human health. As contributions of this work, we highlight a survey of interactive systems based on BCIs, focusing on HCI and VR applications, and a discussion of the challenges and future of this subject matter.


Author(s):
Chang S. Nam
Matthew Moore
Inchul Choi
Yueqing Li

Despite the increase in research interest in the brain–computer interface (BCI), there remains a general lack of understanding of, and even inattention to, human factors/ergonomics (HF/E) issues in BCI research and development. The goal of this article is to raise awareness of the importance of HF/E involvement in the emerging field of BCI technology by providing HF/E researchers with a brief guide on how to design and implement a cost-effective, steady-state visually evoked potential (SSVEP)–based BCI system. We also discuss how SSVEP BCI systems can be improved to accommodate users with special needs.
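As a companion to such a guide, the core of a low-cost SSVEP decoder is often canonical correlation analysis (CCA) between the EEG and sine/cosine templates at each stimulation frequency. The sketch below is a generic, minimal CCA detector, not the authors' specific pipeline; electrode selection, windowing, and stimulus presentation are omitted:

```python
import numpy as np
from sklearn.cross_decomposition import CCA

def cca_ssvep_score(eeg, freq, fs, n_harmonics=2):
    """Canonical correlation between an EEG window (samples x channels)
    and sine/cosine reference signals at `freq` and its harmonics."""
    t = np.arange(eeg.shape[0]) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs.append(np.sin(2 * np.pi * h * freq * t))
        refs.append(np.cos(2 * np.pi * h * freq * t))
    Y = np.column_stack(refs)
    x_c, y_c = CCA(n_components=1).fit_transform(eeg, Y)
    return abs(np.corrcoef(x_c[:, 0], y_c[:, 0])[0, 1])

def classify_ssvep(eeg, freqs, fs):
    """Return the stimulation frequency whose reference signals correlate best."""
    scores = [cca_ssvep_score(eeg, f, fs) for f in freqs]
    return freqs[int(np.argmax(scores))]
```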


2021
pp. 1-13
Author(s):
Hamidreza Maymandi
Jorge Luis Perez Benitez
F. Gallegos-Funes
J. A. Perez Benitez

Author(s):
Wakana Ishihara
Karen Moxon
Sheryl Ehrman
Mark Yarborough
Tina L. Panontin
...  

This systematic review addresses the plausibility of using novel feedback modalities for brain-computer interfaces (BCIs) and attempts to identify the best feedback modality on the basis of effectiveness or learning rate. Of the selected studies, 100% tested visual feedback, 31.6% tested auditory feedback, 57.9% tested tactile feedback, and 21.1% tested proprioceptive feedback. Visual feedback was included in every study design because it was intrinsic to the response of the task (e.g., seeing a cursor move). However, when used alone, it was not very effective at improving accuracy or learning. Proprioceptive feedback was most successful at increasing the effectiveness of motor imagery BCI tasks involving neuroprosthetics. Auditory and tactile feedback produced mixed results. The limitations of the current review and recommendations for further study are discussed.


2021
Vol 8 (1)
Author(s):
Dheeraj Rathee
Haider Raza
Sujit Roy
Girijesh Prasad

Abstract: Recent advancements in magnetoencephalography (MEG)-based brain-computer interfaces (BCIs) have shown great potential. However, the performance of current MEG-BCI systems is still inadequate, and one of the main reasons for this is the unavailability of open-source MEG-BCI datasets. MEG systems are expensive, and hence MEG datasets are not readily available for researchers to develop effective and efficient BCI-related signal processing algorithms. In this work, we release a 306-channel MEG-BCI dataset recorded at a 1 kHz sampling frequency during four mental imagery tasks (i.e., hand imagery, feet imagery, subtraction imagery, and word generation imagery). The dataset contains two sessions of MEG recordings, performed on separate days, from 17 healthy participants using a typical BCI imagery paradigm. To the best of our knowledge, this is currently the only publicly available MEG imagery BCI dataset. The dataset can be used by the scientific community to develop novel pattern recognition and machine learning methods for detecting brain activity related to motor imagery and cognitive imagery tasks using MEG signals.
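The abstract does not state the file format or event coding of the release; assuming the recordings are distributed as FIF files from a 306-channel system, a typical starting point with MNE-Python might look like the sketch below, where the file name and event codes are hypothetical placeholders:

```python
import mne

# Hypothetical file name; the released dataset's naming and layout may differ.
raw = mne.io.read_raw_fif("sub-01_ses-1_task-imagery_meg.fif", preload=True)

raw.filter(l_freq=1.0, h_freq=40.0)           # band-pass the MEG channels
events = mne.find_events(raw)                 # requires a stimulus/trigger channel
event_id = {"hand": 1, "feet": 2, "subtraction": 3, "word": 4}   # assumed codes
epochs = mne.Epochs(raw, events, event_id=event_id, tmin=-0.5, tmax=3.0,
                    baseline=(None, 0), preload=True)
X = epochs.get_data()                         # trials x 306 channels x samples
```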


2020
Vol 16 (2)
Author(s):
Stanisław Karkosz
Marcin Jukiewicz

Abstract: Objectives: To optimize a Brain-Computer Interface by identifying the minimal number of morphological signal features that maximize accuracy. Methods: A signal-processing pipeline and a morphological feature extractor were designed; a genetic algorithm was then used to select the characteristics that maximize the accuracy of the signal's frequency recognition in an offline Brain-Computer Interface (BCI). Results: The designed system achieves higher accuracy than a previously developed system that uses the same preprocessing methods; however, results varied across subjects. Conclusions: The previously developed BCI can be enhanced by combining it with morphological feature extraction; however, its performance depends on subject variability.
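The abstract does not detail the genetic algorithm's encoding or fitness function. A minimal sketch, assuming a binary mask over the extracted morphological features and cross-validated classifier accuracy (with a small penalty on feature count) as fitness, could look like this:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def fitness(mask, X, y):
    """Cross-validated accuracy of an SVM using only the selected features,
    minus a small penalty on the number of features kept (assumed fitness)."""
    if not mask.any():
        return 0.0
    acc = cross_val_score(SVC(), X[:, mask], y, cv=5).mean()
    return acc - 0.001 * mask.sum()

def ga_select(X, y, n_gen=30, pop_size=20, p_mut=0.05, seed=0):
    rng = np.random.default_rng(seed)
    n_feat = X.shape[1]
    pop = rng.random((pop_size, n_feat)) < 0.5        # random binary feature masks
    for _ in range(n_gen):
        scores = np.array([fitness(ind, X, y) for ind in pop])
        parents = pop[np.argsort(scores)[-pop_size // 2:]]   # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = rng.integers(1, n_feat)
            child = np.concatenate([a[:cut], b[cut:]])        # one-point crossover
            child ^= rng.random(n_feat) < p_mut               # bit-flip mutation
            children.append(child)
        pop = np.vstack([parents, children])
    best = pop[np.argmax([fitness(ind, X, y) for ind in pop])]
    return np.flatnonzero(best)   # indices of the selected morphological features
```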


Author(s):
Yao Li
T. Kesavadas

Abstract: One of the expectations for the next generation of industrial robots is to work collaboratively with humans as robotic co-workers. Robotic co-workers must be able to communicate with human collaborators intelligently and seamlessly. However, most industrial robots in use today are not good at understanding human intentions and decisions. We demonstrate a steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) that can directly deliver human cognition to robots through a headset. The BCI is applied to a part-picking robot and sends decisions to the robot while operators visually inspect the quality of parts. The BCI is verified through a human subject study. In the study, a camera beside the conveyor takes a photo of each part and presents it to the operator automatically. While the operator looks at the photo, electroencephalography (EEG) is collected through the BCI. The inspection decision is extracted from SSVEPs in the EEG. When the operator identifies a defective part, the signal is communicated to the robot, which locates the defective part through a second camera and removes it from the conveyor. The robot can grasp various parts with our grasp planning algorithm (2FRG). We have developed a CNN-CCA model for SSVEP extraction, trained on a dataset collected in our offline experiment. Our approach outperforms the existing CCA, CCA-SVM, and PSD-SVM models. The CNN-CCA is further validated in an online experiment, achieving 93% accuracy in identifying and removing defective parts.
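The headset driver, decoder, and robot interface are not specified beyond the abstract; the loop below is only a schematic of the described inspect-and-remove flow, in which every function, object, and frequency is a hypothetical placeholder rather than the authors' API:

```python
# Schematic of the inspect-and-remove loop; acquire_eeg_window, ssvep_decision,
# and robot.remove are hypothetical placeholders, not the authors' implementation.
PASS_FREQ = 8.0      # assumed stimulus frequency coding "part is acceptable"
DEFECT_FREQ = 12.0   # assumed stimulus frequency coding "part is defective"

def inspect_part(part_id, acquire_eeg_window, ssvep_decision, robot):
    eeg = acquire_eeg_window(seconds=2.0)            # EEG while the operator views the photo
    decision = ssvep_decision(eeg, freqs=[PASS_FREQ, DEFECT_FREQ])
    if decision == DEFECT_FREQ:                      # operator flagged a defect
        robot.remove(part_id)                        # robot locates and removes the part
    return decision
```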

