Using Deep Learning for Human Computer Interface via Electroencephalography

Author(s):  
Sangram Redkar

In this paper, several techniques used to perform EEG signal pre-processing, feature extraction, and signal classification are discussed, implemented, validated, and verified; efficient supervised and unsupervised machine learning models for EEG motor imagery classification are identified. Brain Computer Interfaces are becoming the next-generation controllers not only in medical devices for disabled individuals but also in the gaming and entertainment industries. To build an effective Brain Computer Interface, it is important to have robust signal processing and machine learning modules that operate on the EEG signals and estimate the current thought or intent of the user. Motor Imagery (imaginary hand and leg movement) signals are acquired using the Emotiv EEG headset. The signals are extracted and supplied to the machine learning (ML) stage, wherein several ML techniques are applied and validated. The performances of the various ML techniques are compared and some important observations are reported. Further, Deep Learning techniques such as autoencoding are used to perform unsupervised feature learning. The reliability of the learned features is analyzed by performing classification with the ML techniques. It is shown that hand-engineered ‘ad-hoc’ feature extraction techniques are less reliable than automated (‘Deep Learning’) feature learning techniques. The findings of this research can be used by the BCI research community for building motor imagery based BCI applications such as gaming, robot control, and autonomous vehicles.
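The autoencoder-based unsupervised feature learning mentioned above can be illustrated with a minimal numpy sketch. This is not the paper's implementation: the data are synthetic stand-ins for EEG trials, and the layer sizes, activation, learning rate, and iteration count are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "EEG" data: 200 trials x 64 samples (illustrative sizes, not the paper's).
X = rng.standard_normal((200, 64))

n_in, n_hidden = X.shape[1], 16
W1 = rng.standard_normal((n_in, n_hidden)) * 0.1   # encoder weights
b1 = np.zeros(n_hidden)
W2 = rng.standard_normal((n_hidden, n_in)) * 0.1   # decoder weights
b2 = np.zeros(n_in)

lr, losses = 0.01, []
for _ in range(500):
    H = np.tanh(X @ W1 + b1)          # encoder: candidate learned features
    X_hat = H @ W2 + b2               # decoder: reconstruction
    err = X_hat - X
    losses.append(float((err ** 2).mean()))
    # Backpropagate the mean-squared reconstruction error.
    gW2 = H.T @ err / len(X)
    gb2 = err.mean(axis=0)
    dH = (err @ W2.T) * (1 - H ** 2)  # tanh derivative
    gW1 = X.T @ dH / len(X)
    gb1 = dH.mean(axis=0)
    W1 -= lr * gW1; b1 -= lr * gb1
    W2 -= lr * gW2; b2 -= lr * gb2

# The hidden activations serve as unsupervised features for the ML stage.
features = np.tanh(X @ W1 + b1)
print(features.shape)                  # (200, 16)
```

In this scheme, `features` would replace hand-engineered features as the input to the supervised classifiers.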

2021 ◽  
Vol 9 ◽  
Author(s):  
Ashwini K ◽  
P. M. Durai Raj Vincent ◽  
Kathiravan Srinivasan ◽  
Chuan-Yu Chang

Neonatal infants communicate with us through cries. Infant cry signals have distinct patterns depending on the purpose of the cry. Preprocessing, feature extraction, and feature selection of audio signals demand expert attention and considerable effort. Deep learning techniques automatically extract and select the most important features, but require an enormous amount of data for effective classification. This work discriminates neonatal cries into pain, hunger, and sleepiness. The neonatal cry auditory signals are transformed into spectrogram images by utilizing the short-time Fourier transform (STFT) technique. A deep convolutional neural network (DCNN) takes the spectrogram images as input; the features obtained from the convolutional neural network are then passed to a support vector machine (SVM) classifier, which classifies the neonatal cries. This work combines the advantages of machine learning and deep learning techniques to get the best results even with a moderate number of data samples. The experimental results show that CNN-based feature extraction with an SVM classifier provides promising results. Comparing the SVM kernels, namely radial basis function (RBF), linear, and polynomial, it is found that SVM-RBF provides the highest accuracy, classifying infant cries with 88.89% accuracy.
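The STFT step described above can be sketched in a few lines of numpy. The frame length, hop size, and sampling rate below are illustrative assumptions rather than the authors' settings, and a pure sine stands in for a real cry recording:

```python
import numpy as np

def stft_spectrogram(signal, frame_len=256, hop=128):
    """Magnitude spectrogram via the short-time Fourier transform.
    Frame length and hop are illustrative choices, not the paper's."""
    window = np.hanning(frame_len)
    n_frames = 1 + (len(signal) - frame_len) // hop
    frames = np.stack([signal[i * hop : i * hop + frame_len] * window
                       for i in range(n_frames)])
    # One-sided FFT magnitude, shape (n_frames, frame_len // 2 + 1).
    return np.abs(np.fft.rfft(frames, axis=1))

# Toy "cry" signal: 1 s at 8 kHz (assumed sampling rate), a 440 Hz tone.
t = np.linspace(0, 1, 8000, endpoint=False)
cry = np.sin(2 * np.pi * 440 * t)
spec = stft_spectrogram(cry)
print(spec.shape)   # (61, 129)
```

The resulting time-frequency image is what a DCNN would consume as its input in the pipeline above.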


Sensors ◽  
2019 ◽  
Vol 19 (7) ◽  
pp. 1736 ◽  
Author(s):  
Ikhtiyor Majidov ◽  
Taegkeun Whangbo

Single-trial motor imagery classification is a crucial aspect of brain–computer interface (BCI) applications. Therefore, it is necessary to extract and discriminate signal features involving motor imagery movements. Riemannian geometry-based feature extraction methods are effective when designing these types of motor-imagery-based BCI applications. In the field of information theory, Riemannian geometry is mainly used with covariance matrices. Accordingly, investigations showed that if the method is used after the execution of the filter-bank approach, the covariance matrix preserves the frequency and spatial information of the signal. Deep-learning methods are superior when abundant data are available and the number of features is large. The purpose of this study is to (a) show how to use a single deep-learning-based classifier in conjunction with the CSP (common spatial patterns) and Riemannian geometry feature extraction methods in BCI applications and (b) describe one of the wrapper feature-selection algorithms, particle swarm optimization, in combination with a decision tree algorithm. In this work, the CSP method was used for a multiclass case by using only one classifier. Additionally, a combination of power spectral density features with covariance matrices mapped onto the tangent space of a Riemannian manifold was used. Furthermore, the particle swarm optimization method was applied to ease the training by penalizing bad features, and the moving-windows method was used for augmentation. After an empirical study, a convolutional neural network was adopted to classify the pre-processed data. Our proposed method improved the classification accuracy for several subjects from the well-known BCI Competition IV 2a dataset.
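The mapping of trial covariance matrices onto the tangent space of the Riemannian manifold can be sketched as follows. This is a simplified illustration, not the authors' implementation: the reference point is approximated by the arithmetic mean rather than the true geometric (Riemannian) mean, and the usual √2 weighting of off-diagonal terms is omitted.

```python
import numpy as np

def spd_logm(S):
    """Matrix logarithm of a symmetric positive-definite matrix."""
    w, V = np.linalg.eigh(S)
    return (V * np.log(w)) @ V.T

def tangent_space_features(covs):
    """Map trial covariances to the tangent space at a reference point
    (arithmetic mean used here for brevity; the geometric mean would
    require an iterative solver)."""
    C = covs.mean(axis=0)
    w, V = np.linalg.eigh(C)
    C_inv_sqrt = (V * (1.0 / np.sqrt(w))) @ V.T
    feats = []
    for S in covs:
        T = spd_logm(C_inv_sqrt @ S @ C_inv_sqrt)  # whiten, then log-map
        iu = np.triu_indices_from(T)
        feats.append(T[iu])                        # upper triangle as vector
    return np.array(feats)

rng = np.random.default_rng(0)
# Toy data: 20 trials, 8 channels, 500 samples (illustrative sizes).
X = rng.standard_normal((20, 8, 500))
covs = np.einsum('tcs,tds->tcd', X, X) / X.shape[2]  # zero-mean sample covs
F = tangent_space_features(covs)
print(F.shape)    # (20, 36)
```

The flattened tangent vectors `F` live in a Euclidean space, so any standard classifier can operate on them directly.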


A Brain Computer Interface (BCI) is a system for direct communication between brain nerves and computer devices; through imagined movements, it allows paralyzed patients who are otherwise unable to communicate to interact with people. In EEG signal analysis, feature extraction plays an important role, and statistical features are essential features used in machine learning applications. Researchers mainly focus on the filters and feature extraction techniques. In this paper, data are collected from BCI Competition III dataset 1a. Statistical features such as minimum, maximum, standard deviation, variance, skewness, kurtosis, root mean square, average, energy, contrast, correlation, and homogeneity are extracted. Classification is done using machine learning techniques such as Support Vector Machine, Artificial Neural Network, and K-Nearest Neighbor. In the proposed system, 90.6% accuracy is achieved.
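Most of the statistical features listed above can be computed directly with numpy. The sketch below is illustrative only; contrast, correlation, and homogeneity are omitted because they are texture measures typically computed from a co-occurrence matrix rather than from the raw signal.

```python
import numpy as np

def statistical_features(x):
    """Per-trial signal statistics (a subset of the features listed above)."""
    x = np.asarray(x, dtype=float)
    mu, sd = x.mean(), x.std()
    return {
        'minimum':  x.min(),
        'maximum':  x.max(),
        'std':      sd,
        'variance': x.var(),
        'skewness': ((x - mu) ** 3).mean() / sd ** 3,
        'kurtosis': ((x - mu) ** 4).mean() / sd ** 4,
        'rms':      np.sqrt((x ** 2).mean()),
        'average':  mu,
        'energy':   (x ** 2).sum(),
    }

# A sine wave stands in for one EEG trial (illustrative, not dataset 1a).
trial = np.sin(np.linspace(0, 4 * np.pi, 1000))
feats = statistical_features(trial)
print(sorted(feats))
```

Each trial is thus reduced to a fixed-length feature vector before being handed to the SVM, ANN, or KNN classifier.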


Sensors ◽  
2021 ◽  
Vol 21 (14) ◽  
pp. 4838
Author(s):  
Philip Gouverneur ◽  
Frédéric Li ◽  
Wacław M. Adamczyk ◽  
Tibor M. Szikszay ◽  
Kerstin Luedtke ◽  
...  

While even the most common definition of pain is under debate, pain assessment has remained the same for decades. Yet the paramount importance of precise pain management for successful healthcare has encouraged initiatives to improve the way pain is assessed. Recent approaches have proposed automatic pain evaluation systems using machine learning models trained with data coming from behavioural or physiological sensors. Although yielding promising results, machine learning studies for sensor-based pain recognition remain scattered and are not easy to compare with one another. In particular, the important process of extracting features is usually optimised towards specific datasets. In this paper, we therefore introduce a comparison of feature extraction methods for pain recognition based on physiological sensors. In addition, the PainMonit Database (PMDB), a new dataset including both objective and subjective annotations for heat-induced pain in 52 subjects, is introduced. In total, five different approaches, including techniques based on feature engineering and feature learning with deep learning, are evaluated on the BioVid and PMDB datasets. Our studies highlight the following insights: (1) Simple feature engineering approaches can still compete with deep learning approaches in terms of performance. (2) More complex deep learning architectures do not yield better performance compared to simpler ones. (3) Subjective self-reports by subjects can be used instead of objective temperature-based annotations to build a robust pain recognition system.


2019 ◽  
Vol 8 (1) ◽  
pp. 269-275 ◽  
Author(s):  
N. E. Md Isa ◽  
A. Amir ◽  
M. Z. Ilyas ◽  
M. S. Razalli

This paper focuses on the classification of motor imagery in Brain Computer Interfaces (BCI) using classifiers from machine learning. The BCI system consists of two main steps, feature extraction and classification. Fast Fourier Transform (FFT) features are extracted from the electroencephalography (EEG) signals to transform them into the frequency domain. Due to the high dimensionality of the data resulting from the feature extraction stage, Linear Discriminant Analysis (LDA) is used to reduce the number of dimensions by finding the feature subspace that optimizes class separability. Five classifiers are used in the study: Support Vector Machine (SVM), K-Nearest Neighbors (KNN), Naïve Bayes, Decision Tree, and Logistic Regression. The performance was tested using Dataset 1 from BCI Competition IV, which consists of imaginary hand and foot movement EEG data. As a result, the SVM, Logistic Regression, and Naïve Bayes classifiers achieved the highest accuracy, with 89.09% in the AUC measurement.
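The pipeline described above, FFT feature extraction followed by a discriminant projection and classification, can be sketched with numpy alone. This is a hedged illustration on synthetic two-class data, not the authors' implementation: a closed-form two-class Fisher LDA stands in for the LDA stage, a nearest-class-mean rule stands in for the five classifiers, and accuracy is measured on the training trials.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class "EEG": 100 trials x 128 samples; class 1 carries an extra
# 10 Hz component (sizes, rate, and frequency are illustrative assumptions).
fs, n, trials = 128, 128, 100
t = np.arange(n) / fs
y = rng.integers(0, 2, trials)
X = rng.standard_normal((trials, n)) + np.outer(y, np.sin(2 * np.pi * 10 * t))

# Feature extraction: magnitude of the one-sided FFT of each trial.
F = np.abs(np.fft.rfft(X, axis=1))            # shape (100, 65)

# Fisher LDA: project onto the direction maximising class separability.
m0, m1 = F[y == 0].mean(axis=0), F[y == 1].mean(axis=0)
Sw = np.cov(F[y == 0], rowvar=False) + np.cov(F[y == 1], rowvar=False)
w = np.linalg.solve(Sw + 1e-3 * np.eye(len(m0)), m1 - m0)  # ridge for stability
z = F @ w                                     # one discriminant score per trial

# Nearest-class-mean classification in the projected space.
c0, c1 = z[y == 0].mean(), z[y == 1].mean()
pred = (np.abs(z - c1) < np.abs(z - c0)).astype(int)
acc = (pred == y).mean()
print(round(acc, 2))
```

Because the discriminative 10 Hz component maps to a single FFT bin, the projection separates the two classes cleanly on this toy data.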


2021 ◽  
Vol 22 (6) ◽  
pp. 2903
Author(s):  
Noam Auslander ◽  
Ayal B. Gussow ◽  
Eugene V. Koonin

The exponential growth of biomedical data in recent years has spurred the application of numerous machine learning techniques to address emerging problems in biology and clinical research. By enabling automatic feature extraction, selection, and generation of predictive models, these methods can be used to efficiently study complex biological systems. Machine learning techniques are frequently integrated with bioinformatic methods, as well as curated databases and biological networks, to enhance training and validation, identify the best interpretable features, and enable feature and model investigation. Here, we review recently developed methods that incorporate machine learning within the same framework with techniques from molecular evolution, protein structure analysis, systems biology, and disease genomics. We outline the challenges posed for machine learning, and, in particular, deep learning in biomedicine, and suggest unique opportunities for machine learning techniques integrated with established bioinformatics approaches to overcome some of these challenges.


2020 ◽  
Vol 8 (5) ◽  
pp. 1160-1166

This paper reviews the existing literature on computer-aided diagnosis (CAD) based identification of lesions that may be relevant to the early detection of Diabetic Retinopathy (DR). The detection of lesions such as Microaneurysms (MA), Hemorrhages (HEM), and Exudates (EX) is covered. A range of methodologies, from conventional morphology to deep learning techniques, is discussed. Different strategies, from hand-crafted feature extraction to automated CNN-based feature extraction, and from single-lesion to multi-lesion detection, are explored. The stages of each method, from image preprocessing to classification, are investigated. The performance of the proposed methods is summarized through various performance measurement parameters, and the datasets used are tabulated. Finally, future directions are discussed.


Author(s):  
Hamdi Altaheri ◽  
Ghulam Muhammad ◽  
Mansour Alsulaiman ◽  
Syed Umar Amin ◽  
Ghadir Ali Altuwaijri ◽  
...  
