Computer Interface
Recently Published Documents

2021 ◽  
Vol 2021 ◽  
pp. 1-9
Hatim Z. Almarzouki ◽  
Hemaid Alsulami ◽  
Ali Rizwan ◽  
Mohammed S. Basingab ◽  
Hatim Bukhari ◽  

In recent years, neurological diseases have stood out among all others as leading causes of mortality and morbidity worldwide. The aim of the current study was to pilot-test a prototype wearable glove that detects and analyzes heart rate and EEG for better management and avoidance of stroke consequences. A qualitative, clinical experimental assessment was carried out using an IoT-based real-time medical glove equipped with heart-rate and EEG sensors. We conducted structured interviews with 90 patients; the results were analyzed using the Barthel index and grouped accordingly. Overall, the proportion of patients who followed proper daily heart-rate recording behavior rose from 46.9% in the first month of the trial to 78.2% after 3–10 months of intervention. Meanwhile, the percentage of individuals with an irregular heart rate fell from 19.5% in the first month to 9.1% after 3–10 months. In T5, delta relative power decreased by 12.1% and 5.8% compared with baseline at 3 and 6 months, with an average increase of 24.3 ± 0.08. Beta-1 remained relatively steady, while theta relative power grew by 7% and alpha relative power increased by 31%. The T1 hemisphere had greater mean delta and theta relative power than the T5 hemisphere; the opposite pattern was seen for alpha (p < 0.05) and beta relative power. The difference between the T1 and T5 patient groups was statistically significant for delta (p < 0.001), alpha (p < 0.01), and beta-1 (p < 0.05).
In conclusion, our single-center study found that such IoT-based real-time medical monitoring devices significantly reduce the complexity of real-time monitoring and data acquisition for healthcare providers and thus enable better healthcare management. Detection of emerging risks and their control can be improved by boosting awareness. Furthermore, such monitoring identifies high-risk factors and facilitates stroke prevention. The EEG-based brain-computer interface has a promising future in upcoming years for averting disability-adjusted life years (DALYs) lost.

2021 ◽  
Zhenyu Jin

Objective. Electroencephalography (EEG) signals suffer from a low signal-to-noise ratio and are very susceptible to muscular activity, ambient noise, and other artifacts. Many artifact removal algorithms have been proposed to address this problem. However, these algorithms are conventionally evaluated only indirectly (e.g., by black-box comparison of brain-computer interface performance before and after removal), because it is unclear which part of the signal represents raw EEG and which is noise. This project objectively benchmarks popular artifact removal algorithms and evaluates the fundamental Independent Component Analysis (ICA) approach using a unique dataset in which EEG is recorded simultaneously with other physiological signals (facial electromyography (EMG), accelerometers, and a gyroscope) while ten subjects perform several repetitions of common artifact-inflicting tasks (blinking, speaking, etc.). Approach. I compared the correlation between EEG signals and the artifact-representing channels before and after applying an artifact removal algorithm, across the different artifact-inflicting tasks. The extent to which an artifact removal method can reduce this correlation objectively quantifies its effectiveness for the different artifacts. In the same direction, I determined to what extent ICA successfully detects artifactual components in EEG by comparing the corresponding correlations for independent components labeled as artifacts with those labeled as EEG. Main result. FORCe was found to be the most effective and generic artifact removal method, cleaning almost 40% of artifacts. ICA is shown to be able to isolate almost 70% of artifactual components. Significance. This work alleviates the problem of unreliable evaluation of EEG artifact removal frameworks and provides the first reliable benchmark for the most popular algorithms in this literature.
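
The correlation-based evaluation described above can be sketched in a few lines of numpy. The signals, the simple linear-regression cleaner, and all names below are illustrative stand-ins, not the study's dataset or the FORCe/ICA pipelines:

```python
import numpy as np

def artifact_correlation(eeg, artifact):
    """Pearson correlation between an EEG channel and an artifact channel."""
    return float(np.corrcoef(eeg, artifact)[0, 1])

def regress_out(eeg, artifact):
    """Remove the artifact's linear contribution by least-squares regression
    (a toy stand-in for a full artifact-removal method such as FORCe or ICA)."""
    beta = np.dot(eeg, artifact) / np.dot(artifact, artifact)
    return eeg - beta * artifact

rng = np.random.default_rng(0)
n = 1000
emg = rng.normal(size=n)     # simulated facial-EMG artifact channel
brain = rng.normal(size=n)   # simulated underlying neural signal
eeg = brain + 0.8 * emg      # contaminated EEG channel

# The benchmark: |correlation| with the artifact channel before vs. after cleaning.
before = abs(artifact_correlation(eeg, emg))
after = abs(artifact_correlation(regress_out(eeg, emg), emg))
print(f"|corr| before: {before:.2f}, after: {after:.2f}")
```

The drop from `before` to `after` is the paper's effectiveness measure: the more a method decorrelates the EEG from the artifact channels, the better it cleaned that artifact.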

2021 ◽  
Vol 11 (21) ◽  
pp. 9948
Amira Echtioui ◽  
Ayoub Mlaouah ◽  
Wassim Zouch ◽  
Mohamed Ghorbel ◽  
Chokri Mhiri ◽  

Recently, Electroencephalography (EEG) motor imagery (MI) signals have received increasing attention because they can encode a person’s intention to perform an action. Researchers have used MI signals to help people with partial or total paralysis control devices such as exoskeletons, wheelchairs, and prostheses, and even to drive independently. Classifying the motor imagery tasks in these signals is therefore important for a Brain-Computer Interface (BCI) system. Building a good decoder for MI tasks from EEG signals is difficult due to the dynamic nature of the signal, its low signal-to-noise ratio, its complexity, and its dependence on sensor positions. In this paper, we investigate five multilayer methods for classifying MI tasks: proposed methods based on an Artificial Neural Network, Convolutional Neural Network 1 (CNN1), CNN2, CNN1 merged with CNN2, and a modified merge of CNN1 with CNN2. These proposed methods use different spatial and temporal characteristics extracted from raw EEG data. We demonstrate that our proposed CNN1-based method, using spatial and frequency characteristics, achieves an accuracy of 68.77% on the BCI Competition IV-2a dataset, which includes nine subjects performing four MI tasks (left/right hand, feet, and tongue), outperforming state-of-the-art machine/deep learning techniques for EEG classification. The experimental results demonstrate the feasibility of the proposed method for classifying MI-EEG signals; it can be applied successfully to BCI systems where the amount of data is large due to daily recording.
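
As an illustration of "spatial and frequency characteristics extracted from raw EEG data", the sketch below takes simulated trials with the BCI IV-2a layout (22 channels at 250 Hz) and computes per-channel band-power features that a classifier such as the CNNs above could consume. The band choices and shapes are assumptions, not the paper's exact pipeline:

```python
import numpy as np

def bandpower(epoch, fs, lo, hi):
    """Mean spectral power of each channel within a frequency band (via rFFT)."""
    freqs = np.fft.rfftfreq(epoch.shape[-1], d=1 / fs)
    psd = np.abs(np.fft.rfft(epoch, axis=-1)) ** 2
    band = (freqs >= lo) & (freqs <= hi)
    return psd[..., band].mean(axis=-1)

rng = np.random.default_rng(1)
fs = 250                                   # BCI IV-2a sampling rate (Hz)
epochs = rng.normal(size=(9, 22, fs * 4))  # 9 trials x 22 channels x 4 s (simulated)
mu = bandpower(epochs, fs, 8, 12)          # mu-band power, shape (9, 22)
beta = bandpower(epochs, fs, 13, 30)       # beta-band power, shape (9, 22)
features = np.concatenate([mu, beta], axis=1)  # (9, 44) feature matrix
print(features.shape)
```

Each row then summarizes one trial by its spatial (per-electrode) power in two MI-relevant bands, which is one common way to feed "spatial and frequency characteristics" into a network.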

2021 ◽  
Vol 11 (11) ◽  
pp. 1392
Yue Hua ◽  
Xiaolong Zhong ◽  
Bingxue Zhang ◽  
Zhong Yin ◽  
Jianhua Zhang

Affective computing systems can decode cortical activities to facilitate emotional human–computer interaction. However, individual differences in the neurophysiological responses of different brain–computer interface users make it difficult to design a generic emotion recognizer that adapts to a novel individual, posing an obstacle to cross-subject emotion recognition (ER). To tackle this issue, in this study we propose a novel feature selection method, manifold feature fusion and dynamical feature selection (MF-DFS), under the transfer learning principle, to determine generalizable features that are stably sensitive to emotional variations. The MF-DFS framework combines local geometrical information feature selection, domain-adaptation-based manifold learning, and dynamical feature selection to enhance the accuracy of the ER system. On three public databases, DEAP, MAHNOB-HCI, and SEED, the performance of MF-DFS is validated under the leave-one-subject-out paradigm with two types of electroencephalography features. Defining three emotional classes for each affective dimension, the MF-DFS-based ER classifier achieves accuracies of 0.50–0.48 (DEAP) and 0.46–0.50 (MAHNOB-HCI) for the arousal and valence dimensions, respectively. For the SEED database, it achieves 0.40 for the valence dimension. The corresponding accuracy is significantly superior to that of several classical feature selection methods across multiple machine learning models.
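
The leave-one-subject-out paradigm used for validation can be sketched as a simple split generator: each subject's trials in turn form the test set while every other subject's trials form the training set (subject IDs below are hypothetical):

```python
def loso_splits(subject_ids):
    """Leave-one-subject-out splits over a list of per-trial subject IDs:
    each subject in turn is held out as the test set."""
    subjects = sorted(set(subject_ids))
    for held_out in subjects:
        train = [i for i, s in enumerate(subject_ids) if s != held_out]
        test = [i for i, s in enumerate(subject_ids) if s == held_out]
        yield held_out, train, test

# One entry per trial: which subject produced it (hypothetical toy data).
ids = ["s1", "s1", "s2", "s2", "s3"]
for held_out, train, test in loso_splits(ids):
    print(held_out, train, test)
```

Because the held-out subject contributes nothing to training, accuracy under this scheme directly measures cross-subject generalization, which is exactly the difficulty MF-DFS targets.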

Naishi Feng ◽  
Fo Hu ◽  
Hong Wang ◽  
Bin Zhou

Decoding brain intention from noninvasively measured neural signals has recently been a hot topic in brain-computer interface (BCI) research. Motor commands for the movements of fine body parts can increase the degrees of freedom under control and be applied to external equipment without external stimuli. In the decoding process, the classifier is one of the key factors, yet most researchers have ignored the graph information in the EEG. In this paper, a graph convolutional network (GCN) based on functional connectivity is proposed to decode the motor intention of four fine-part movements (shoulder, elbow, wrist, hand). First, event-related desynchronization was analyzed to reveal the differences between the four classes. Second, functional connectivity was constructed using synchronization likelihood (SL), phase-locking value (PLV), H index (H), mutual information (MI), and weighted phase-lag index (WPLI) to identify the electrode pairs that differ between classes. Subsequently, a GCN and a convolutional neural network (CNN) were applied to the functional topological structures and the time points, respectively. The results demonstrate that the proposed method achieves a decoding accuracy of up to 92.81% in the four-class task. Moreover, the combination of GCN and functional connectivity can promote the development of BCI.
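
One of the connectivity measures named above, the phase-locking value (PLV), can be sketched directly in numpy: it is the magnitude of the average unit phasor of the instantaneous phase difference between two channels, so phase-locked signals score near 1 and unrelated signals near 0. The analytic-signal helper and test signals are illustrative, not the paper's implementation:

```python
import numpy as np

def analytic(x):
    """Analytic signal via FFT (a numpy stand-in for scipy.signal.hilbert)."""
    n = x.shape[-1]
    spectrum = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1
    h[1:(n + 1) // 2] = 2
    if n % 2 == 0:
        h[n // 2] = 1
    return np.fft.ifft(spectrum * h)

def plv(x, y):
    """Phase-locking value: |mean exp(i * phase difference)|, in [0, 1]."""
    dphi = np.angle(analytic(x)) - np.angle(analytic(y))
    return float(np.abs(np.mean(np.exp(1j * dphi))))

t = np.linspace(0, 1, 500, endpoint=False)
a = np.sin(2 * np.pi * 10 * t)
b = np.sin(2 * np.pi * 10 * t + 0.5)                  # same frequency, fixed lag
noise = np.random.default_rng(2).normal(size=500)     # unrelated signal
print(f"locked: {plv(a, b):.2f}, noise: {plv(a, noise):.2f}")
```

Computing PLV for every electrode pair yields the weighted adjacency matrix on which a GCN like the one described can operate.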

2021 ◽  
Vol 363 ◽  
pp. 109339
Adrienne Kline ◽  
Nils D. Forkert ◽  
Banafshe Felfeliyan ◽  
Daniel Pittman ◽  
Bradley Goodyear ◽  

2021 ◽  
Attila Korik ◽  
Karl McCreadie ◽  
Niall McShane ◽  
Naomi Du Bois ◽  
Massoud Khodadadzadeh ◽  

Abstract Background: The brain-computer interface (BCI) race at the Cybathlon championship for athletes with disabilities challenges teams (BCI researchers, developers, and pilots with spinal cord injury) to control an avatar on a virtual racetrack without movement. Here we describe the training regime and results of the Ulster University BCI Team pilot, who is tetraplegic and has trained to use an electroencephalography (EEG)-based BCI intermittently over 10 years to compete in three Cybathlon events. Methods: A multi-class, multiple-binary-classifier framework was used to decode three kinesthetically imagined movements (motor imagery): left (L) and right (R) arm and feet (F), as well as a relax state (X). Three game paradigms were used for training: NeuroSensi, Triad, and Cybathlon: BrainDriver. An evaluation of the pilot's performance is presented for two Cybathlon competition training periods, spanning 20 sessions over 5 weeks prior to the 2019 competition and 25 sessions over 5 weeks in the run-up to the 2020 competition. Results: Having participated in BCI training in 2009 and competed in Cybathlon 2016, the experienced pilot achieved high two-class accuracy on all class pairs when training began in 2019 (decoding accuracy >90%, resulting in efficient NeuroSensi and Triad game control). The BrainDriver performance (i.e., Cybathlon race completion time) improved significantly during the training period leading up to the competition day, ranging from 274 s to 156 s (255±24 s to 191±14 s, mean±std) over 17 days (10 sessions) in 2019, and from 230 s to 168 s (214±14 s to 181±4 s) over 18 days (13 sessions) in 2020. On both occasions, however, performance deteriorated significantly towards the race date. Conclusions: The training regime and framework applied were highly effective in achieving competitive race completion times.
The BCI framework did not cope with the significant deviation in EEG observed in the sessions occurring shortly before and during race day. Stress, arousal level, and fatigue associated with the competition challenge and performance pressure likely induced cognitive-state changes, contributing to the nonstationary effects that left the BCI and pilot with suboptimal performance on race day. Trial registration: not registered
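
The multi-class, multiple-binary-classifier framework can be illustrated with a small one-vs-one voting sketch over the four classes L, R, F, and X: each pair of classes gets its own binary classifier, and the class winning the most pairwise decisions is the output. The decision scores and the sign convention below are hypothetical, not the team's actual decoder:

```python
from itertools import combinations

classes = ["L", "R", "F", "X"]  # left arm, right arm, feet, relax

def ovo_predict(scores):
    """Multi-class decision from multiple binary (one-vs-one) classifiers:
    each pair votes for its winner; the class with the most votes is chosen.
    scores[(a, b)] > 0 means the (a, b) classifier favours a (assumed convention)."""
    votes = {c: 0 for c in classes}
    for (a, b), s in scores.items():
        votes[a if s > 0 else b] += 1
    return max(votes, key=votes.get)

# Hypothetical decision scores for one EEG window, one per class pair.
scores = {pair: s for pair, s in zip(combinations(classes, 2),
                                     [0.9, 0.4, 0.2, -0.7, 0.1, 0.3])}
print(ovo_predict(scores))
```

A design note: decomposing a four-class MI problem into binary subproblems lets each classifier specialize in one pairwise contrast, which is why per-pair two-class accuracy (>90% here) is a natural training metric.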

AI & Society ◽  
2021 ◽  
Aníbal Monasterio Astobiza ◽  
David Rodriguez Arias-Vailhen ◽  
Txetxu Ausín ◽  
Mario Toboso ◽  
Manuel Aparicio ◽  

Abstract: The aim was to assess, from a qualitative perspective, the perceptions and attitudes of Spanish rehabilitation professionals (e.g., rehabilitation doctors, speech therapists, physical therapists) about Brain–Computer Interface (BCI) technology. A qualitative, exploratory, and descriptive study was carried out by means of interviews and textual content analysis, with mixed generation of categories and segmentation by topic frequency. We present the results of three in-depth interviews conducted with Spanish-speaking individuals who had previously completed a survey as part of a larger three-country, three-language survey on BCI perceptions. Eleven of the 15 Spanish survey respondents either strongly or somewhat accepted the use of BCI in rehabilitation therapy. However, the results of our three in-depth interviews show that, owing to strong inertia in attitudes and perceptions about BCI technology, most professionals remain reluctant to use it in their daily practice.

2021 ◽  
Xiuyu Huang ◽  
Nan Zhou ◽  
KupSze Choi

Abstract Background: In the past few years, the motor imagery brain-computer interface (MIBCI) has become a valuable assistive technology for the disabled. However, how to effectively improve motor imagery (MI) classification performance by learning discriminative and robust features remains a challenging problem. Methods: In this study, we propose a novel loss function, called correntropy-based center loss (CCL), as the supervision signal for training a convolutional neural network (CNN) model on the MI classification task. With joint supervision of the softmax loss and CCL, we can train a CNN model to acquire deep discriminative features with large inter-class dispersion and slight intra-class variation. Moreover, the CCL also effectively decreases the negative effect of noise during training, which is essential for accurate MI classification. Results: We perform extensive experiments on two well-known public MI datasets, BCI competition IV-2a and IV-2b, to demonstrate the effectiveness of the proposed loss. Our CNNs with such joint supervision achieve accuracies of 78.65% on IV-2a and 86.10% on IV-2b, outperforming other baseline approaches. Conclusion: The proposed CCL helps the CNN model learn deep features that are both discriminative and robust for the MI classification task in BCI rehabilitation applications.
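
The exact form of the CCL is not given in the abstract. A common correntropy-induced variant of the center loss, which bounds each sample's penalty with a Gaussian kernel so that noisy samples contribute less than under the plain squared-distance center loss, can be sketched as follows (the formula, names, and σ value are assumptions, not the paper's definition):

```python
import numpy as np

def correntropy_center_loss(features, labels, centers, sigma=1.0):
    """Assumed correntropy-style center loss: each sample pays
    1 - exp(-||x - c_y||^2 / (2 sigma^2)), a penalty bounded by 1,
    so a single outlier cannot dominate the loss as it would under
    the unbounded squared-distance center loss."""
    diff = features - centers[labels]   # (N, D) deviation from own class center
    sq = np.sum(diff ** 2, axis=1)      # squared distance per sample
    return float(np.mean(1.0 - np.exp(-sq / (2 * sigma ** 2))))

rng = np.random.default_rng(3)
centers = np.array([[0.0, 0.0], [3.0, 3.0]])          # two class centers (toy)
labels = rng.integers(0, 2, size=64)
feats = centers[labels] + 0.3 * rng.normal(size=(64, 2))

tight = correntropy_center_loss(feats, labels, centers)
outlier = feats.copy()
outlier[0] += 100.0                                    # inject one noisy sample
loose = correntropy_center_loss(outlier, labels, centers)
print(f"clean: {tight:.3f}, with outlier: {loose:.3f}")
```

Because each per-sample term saturates at 1, the injected outlier raises the mean loss by at most 1/64 here, illustrating the robustness-to-noise property the abstract claims for CCL.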

2021 ◽  
Vol 5 (10) ◽  
pp. 64
Miguel Angel Garcia-Ruiz ◽  
Bill Kapralos ◽  
Genaro Rebolledo-Mendez

This paper describes an overview of olfactory displays (human–computer interfaces that generate and diffuse an odor to a user to stimulate their sense of smell) that have been proposed and researched for supporting education and training. Past research has shown that olfaction (the sense of smell) can support memorization of information, stimulate information recall, and help immerse learners and trainees into educational virtual environments, as well as complement and/or supplement other human sensory channels for learning. This paper begins with an introduction to olfaction and olfactory displays, and a review of techniques for storing, generating and diffusing odors at the computer interface. The paper proceeds with a discussion on educational theories that support olfactory displays for education and training, and a literature review on olfactory displays that support learning and training. Finally, the paper summarizes the advantages and challenges regarding the development and application of olfactory displays for education and training.
