A PROPOSED SIGN LANGUAGE MODEL FOR SPEECHLESS PERSONS USING EEG SIGNALS

2021 ◽  
Vol 4 (3) ◽  
pp. 23-29
Author(s):  
Areej H. Al-Anbary ◽  
Salih M. Al-Qaraawi ‎

Recently, machine learning algorithms have been widely used in the field of electroencephalography (EEG)-based brain-computer interfaces (BCI). In this paper, a sign language software model based on EEG brain signals was implemented to help speechless persons communicate their thoughts to others. The preprocessing stage for the EEG signals was performed by applying the Principal Component Analysis (PCA) algorithm to extract the important features and reduce data redundancy. A model for classifying ten classes of EEG signals, including Facial Expression (FE) and some Motor Execution (ME) processes, was designed. A deep learning classifier, a neural network with three hidden layers, was used in this work. Data sets from four different subjects were collected using a 14-channel Emotiv EPOC+ device. A classification accuracy of 95.75% was obtained for the collected samples. An optimization process was performed on the predicted class with the aid of the user, and the sign class was then mapped to the specified sentence via a predesigned lookup table.
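As a rough illustration of the PCA preprocessing step described in this abstract (a minimal sketch, not the authors' implementation), the snippet below projects synthetic 14-channel feature vectors onto their leading principal components via the SVD; the sample count and number of retained components are assumptions:

```python
import numpy as np

def pca_reduce(X, n_components):
    """Project a feature matrix X (samples x features) onto its top
    principal components, the directions of largest variance."""
    X_centered = X - X.mean(axis=0)          # PCA requires zero-mean data
    # SVD of the centered data: rows of Vt are the principal axes
    _, _, Vt = np.linalg.svd(X_centered, full_matrices=False)
    return X_centered @ Vt[:n_components].T  # reduced representation

# Synthetic stand-in for 14-channel Emotiv EPOC+ feature vectors
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 14))
X_reduced = pca_reduce(X, n_components=5)
print(X_reduced.shape)  # (200, 5)
```

By construction the first retained component carries at least as much variance as the second, which is the redundancy-reduction property the abstract relies on.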

2016 ◽  
Vol 78 (12-2) ◽  
Author(s):  
Norma Alias ◽  
Husna Mohamad Mohsin ◽  
Maizatul Nadirah Mustaffa ◽  
Siti Hafilah Mohd Saimi ◽  
Ridhwan Reyaz

Eye movement behaviour is related to human brain activation, whether asleep or awake. The aim of this paper is to measure three types of eye movement using data classification of electroencephalogram (EEG) signals. The classification is illustrated and trained using the artificial neural network (ANN) method, in which the measurement of eye movement is based on eye blinks (close and open), movement to the left and right, and eye movement upwards and downwards. The ANN is integrated with the EEG digital data signals to train on the large-scale digital data and thus predict eye movement behaviour under stress activity. Since this study uses large-scale digital data, the integrated ANN with EEG signals has been parallelized on the Compute Unified Device Architecture (CUDA) supported by heterogeneous CPU-GPU systems. A real data set from the eye therapy company IC Herbz Sdn Bhd was used to validate and simulate the eye movement behaviour. Parallel performance analyses are captured based on execution time, speedup, efficiency, and computational complexity.
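To make the ANN classification step concrete, here is a minimal sketch of a one-hidden-layer network trained with plain gradient descent to separate three hypothetical eye-movement classes; the class means, feature dimensionality, and hyperparameters are all assumptions, and the real system would consume EEG features rather than synthetic Gaussians:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical stand-in: 3 eye-movement classes (blink, left/right,
# up/down), each drawn around a different mean feature vector.
n_per_class, n_features, n_classes = 50, 8, 3
X = np.concatenate([rng.normal(loc=c, size=(n_per_class, n_features))
                    for c in range(n_classes)])
y = np.repeat(np.arange(n_classes), n_per_class)

# One-hidden-layer ANN with tanh units and a softmax output
W1 = rng.normal(scale=0.1, size=(n_features, 16)); b1 = np.zeros(16)
W2 = rng.normal(scale=0.1, size=(16, n_classes)); b2 = np.zeros(n_classes)

for _ in range(300):
    h = np.tanh(X @ W1 + b1)                        # hidden activations
    logits = h @ W2 + b2
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)               # softmax probabilities
    g = p.copy(); g[np.arange(len(y)), y] -= 1      # cross-entropy gradient
    g /= len(y)
    dW2 = h.T @ g; db2 = g.sum(axis=0)
    dh = g @ W2.T * (1 - h ** 2)                    # backprop through tanh
    dW1 = X.T @ dh; db1 = dh.sum(axis=0)
    for param, grad in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        param -= 0.5 * grad                         # gradient-descent step

train_acc = (np.argmax(np.tanh(X @ W1 + b1) @ W2 + b2, axis=1) == y).mean()
print(train_acc)
```

The paper's contribution is parallelizing this kind of training on CUDA for large-scale data; the full-batch loop above is the serial baseline such a parallelization would start from.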


2021 ◽  
Vol 15 ◽  
Author(s):  
Sai Kalyan Ranga Singanamalla ◽  
Chin-Teng Lin

With the advent of advanced machine learning methods, the performance of brain-computer interfaces (BCIs) has improved unprecedentedly. However, electroencephalography (EEG), a commonly used brain imaging method for BCI, is characterized by a tedious experimental setup and frequent data loss due to artifacts, and bulk trial recordings are too time consuming to take advantage of the capabilities of deep learning classifiers. Some studies have tried to address this issue by generating artificial EEG signals. However, a few of these methods are limited in retaining the prominent features or biomarkers of the signal, other deep learning-based generative methods require a huge number of samples for training, and a majority of these models can handle data augmentation of only one category or class of data in any training session. Therefore, there is a need for a generative model that can generate multi-class synthetic EEG samples from as few available trials as possible while retaining the biomarkers of the signal. Since an EEG signal represents an accumulation of action potentials from neuronal populations beneath the scalp surface, and since a spiking neural network (SNN), a biologically closer artificial neural network, communicates via spiking behavior, we propose an SNN-based approach using surrogate-gradient descent learning to reconstruct and generate multi-class artificial EEG signals from just a few original samples. The network was employed for augmenting motor imagery (MI) and steady-state visually evoked potential (SSVEP) data. These artificial data were further validated through classification and correlation metrics to assess their resemblance to the original data, and in turn enhanced the MI classification performance.
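The paper's SNN is not reproduced here, but the two ingredients it names can be sketched in isolation: a leaky integrate-and-fire neuron, whose spike is a non-differentiable threshold crossing, and a smooth surrogate derivative used in its place during gradient descent. The decay, threshold, and fast-sigmoid slope below are illustrative assumptions:

```python
import numpy as np

def lif_forward(inputs, threshold=1.0, decay=0.9):
    """Leaky integrate-and-fire neuron: the membrane potential leaks
    by `decay` each step, integrates the input, and emits a spike (1)
    on crossing `threshold`, followed by a soft reset."""
    v, spikes = 0.0, []
    for x in inputs:
        v = decay * v + x
        s = 1.0 if v >= threshold else 0.0
        spikes.append(s)
        v -= s * threshold            # soft reset after a spike
    return np.array(spikes)

def surrogate_grad(v, threshold=1.0, slope=10.0):
    """Surrogate derivative of the Heaviside spike function: a
    fast-sigmoid bump centered on the threshold, replacing the true
    zero-almost-everywhere gradient during backpropagation."""
    return 1.0 / (1.0 + slope * np.abs(v - threshold)) ** 2

spikes = lif_forward(np.array([0.3, 0.5, 0.6, 0.1, 0.9]))
print(spikes)  # [0. 0. 1. 0. 1.]
```

In surrogate-gradient training the forward pass keeps the hard spikes, while the backward pass substitutes `surrogate_grad`, which is maximal (1.0) exactly at the threshold and decays away from it.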


2021 ◽  
Vol 14 ◽  
Author(s):  
Guangcheng Bao ◽  
Ning Zhuang ◽  
Li Tong ◽  
Bin Yan ◽  
Jun Shu ◽  
...  

Emotion recognition plays an important part in human-computer interaction (HCI). Currently, the main challenge in electroencephalogram (EEG)-based emotion recognition is the non-stationarity of EEG signals, which causes the performance of the trained model to decrease over time. In this paper, we propose a two-level domain adaptation neural network (TDANN) to construct a transfer model for EEG-based emotion recognition. Specifically, deep features that preserve topological information from EEG signals are extracted from the topological graph using a deep neural network. These features are then passed through the TDANN for two-level domain confusion. The first level uses the maximum mean discrepancy (MMD) to reduce the distribution discrepancy of deep features between the source domain and the target domain, and the second uses a domain adversarial neural network (DANN) to force the deep features closer to their corresponding class centers. We evaluated the domain-transfer performance of the model on both our self-built data set and the public data set SEED. In the cross-day transfer experiment, the ability to accurately discriminate joy from other emotions was high on the self-built data set: sadness (84%), anger (87.04%), and fear (85.32%); the accuracy reached 74.93% on the SEED data set. In the cross-subject transfer experiment, the ability to accurately discriminate joy from other emotions was equally high on the self-built data set: sadness (83.79%), anger (84.13%), and fear (81.72%); the average accuracy reached 87.9% on the SEED data set, which was higher than WGAN-DA. The experimental results demonstrate that the proposed TDANN can effectively handle the domain transfer problem in EEG-based emotion recognition.
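The first adaptation level minimizes the maximum mean discrepancy between source-domain and target-domain features. As a minimal sketch (with an assumed Gaussian kernel and bandwidth, not the paper's exact estimator), the biased MMD statistic can be computed as follows:

```python
import numpy as np

def gaussian_kernel(A, B, sigma=1.0):
    """Gaussian (RBF) kernel matrix between sample sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mmd(X, Y, sigma=1.0):
    """Squared maximum mean discrepancy between samples X (source
    domain) and Y (target domain): the quantity the first level of
    domain confusion drives toward zero."""
    return (gaussian_kernel(X, X, sigma).mean()
            + gaussian_kernel(Y, Y, sigma).mean()
            - 2 * gaussian_kernel(X, Y, sigma).mean())

rng = np.random.default_rng(2)
same = mmd(rng.normal(size=(100, 4)), rng.normal(size=(100, 4)))
shifted = mmd(rng.normal(size=(100, 4)), rng.normal(loc=2.0, size=(100, 4)))
print(same < shifted)  # True: a domain shift inflates the discrepancy
```

Matched distributions give a near-zero MMD while a mean shift between domains inflates it, which is why adding this statistic to the loss pulls the two feature distributions together.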


2021 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Shruti Garg ◽  
Rahul Kumar Patro ◽  
Soumyajit Behera ◽  
Neha Prerna Tigga ◽  
Ranjita Pandey

Purpose: The purpose of this study is to propose an alternative, efficient 3D emotion recognition model for variable-length electroencephalogram (EEG) data.

Design/methodology/approach: The classical AMIGOS data set, which comprises multimodal records of varying lengths on mood, personality and other physiological aspects of emotional response, is used for empirical assessment of the proposed overlapping sliding window (OSW) modelling framework. Two features are extracted using Fourier and wavelet transforms: normalised band power (NBP) and normalised wavelet energy (NWE), respectively. The arousal, valence and dominance (AVD) emotions are predicted using one-dimensional (1D) and two-dimensional (2D) convolutional neural networks (CNN) for both single and combined features.

Findings: The 2D CNN outcomes on EEG signals of the AMIGOS data set are observed to yield the highest accuracy, that is 96.63%, 95.87% and 96.30% for arousal, valence and dominance, respectively, which is evidenced to be at least 6% higher than the other available competitive approaches.

Originality/value: The present work focusses on the less explored, complex AMIGOS (2018) data set, which is imbalanced and of variable length; EEG emotion recognition work is widely available on simpler data sets. The following challenges of the AMIGOS data set are addressed in the present work: handling data in tensor form; proposing an efficient method for generating sufficient equal-length samples corresponding to imbalanced and variable-length data; selecting a suitable machine learning/deep learning model; and improving the accuracy of the applied model.
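The OSW framework's core idea, cutting variable-length recordings into fixed-length overlapping windows and extracting normalised band power per window, can be sketched as below; the 128 Hz sampling rate, window length, and 50% overlap are assumptions for illustration:

```python
import numpy as np

def sliding_windows(signal, win, step):
    """Cut a variable-length 1-D signal into fixed-length overlapping
    windows, so every recording yields equal-sized CNN inputs."""
    starts = range(0, len(signal) - win + 1, step)
    return np.stack([signal[s:s + win] for s in starts])

def normalised_band_power(window, fs, band):
    """Power in `band` (Hz) divided by total power, from an FFT
    periodogram of one window."""
    freqs = np.fft.rfftfreq(len(window), d=1 / fs)
    psd = np.abs(np.fft.rfft(window)) ** 2
    mask = (freqs >= band[0]) & (freqs < band[1])
    return psd[mask].sum() / psd.sum()

fs = 128                                   # assumed sampling rate
t = np.arange(0, 4, 1 / fs)
eeg = np.sin(2 * np.pi * 10 * t)           # synthetic 10 Hz alpha rhythm
wins = sliding_windows(eeg, win=256, step=128)   # 50% overlap
nbp = normalised_band_power(wins[0], fs, band=(8, 13))
print(wins.shape, round(nbp, 2))
```

A pure 10 Hz tone puts essentially all of its power in the 8-13 Hz alpha band, so the NBP for that band is close to 1; stacking one NBP value per band per window yields the feature tensors fed to the 1D/2D CNNs.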


2019 ◽  
Vol 9 (5) ◽  
pp. 115 ◽  
Author(s):  
Ömer Türk ◽  
Mehmet Siraç Özerdem

Studies based on electroencephalogram (EEG) signals are progressing very rapidly, and brain-computer interfaces (BCI) and disease detection are carried out at certain success rates thanks to new methods developed in this field. The effective use of these signals, especially in disease detection, is very important in terms of both time and cost. Currently, EEG studies generally use conventional methods as well as deep learning networks, which have recently achieved great success. The most important reason for this is that in conventional methods increasing classification accuracy demands considerable human effort, since obtaining the features is the most important step in processing EEG; this stage is both time consuming and requires the investigation of many feature methods. Therefore, there is a need for methods in this area that do not require human effort and can learn the features themselves. Based on that, two-dimensional (2D) frequency-time scalograms were obtained in this study by applying the Continuous Wavelet Transform to EEG records containing five different classes. A Convolutional Neural Network structure was used to learn the properties of these scalogram images, and the classification performance of the structure was compared with studies in the literature. To compare the performance of the proposed method, the data set of the University of Bonn was used. The data set consists of five EEG records, covering healthy subjects and epilepsy disease, labeled A, B, C, D, and E. In the study, the A-E and B-E data sets were classified with 99.50% accuracy and the A-D and B-D data sets with 100% in binary classification; the A-D-E data set reached 99.00% in three-class classification; the A-C-D-E and B-C-D-E data sets reached 90.50% and 91.50%, respectively, in four-class classification; and the A-B-C-D-E data set was classified with an accuracy of 93.60% in five-class classification.
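A frequency-time scalogram of the kind fed to the CNN can be sketched with plain numpy: convolve the signal with a windowed complex sinusoid (a Morlet-like wavelet) at each analysis frequency and keep the magnitudes. This is a simplified stand-in for the paper's CWT, with an assumed cycle count and frequency grid; the 173.61 Hz rate matches the Bonn recordings:

```python
import numpy as np

def morlet_scalogram(signal, fs, freqs, n_cycles=5):
    """Continuous wavelet transform magnitude: convolve the signal
    with a complex Morlet-like wavelet at each frequency, producing
    the 2-D frequency-time image used as CNN input."""
    rows = []
    for f in freqs:
        dur = n_cycles / f                        # wavelet length in seconds
        t = np.arange(-dur / 2, dur / 2, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.hanning(len(t))
        rows.append(np.abs(np.convolve(signal, wavelet, mode="same")))
    return np.array(rows)                         # shape: (freqs, time)

fs = 173.61                                       # Bonn data set sampling rate
t = np.arange(0, 1, 1 / fs)
sig = np.sin(2 * np.pi * 20 * t)                  # synthetic 20 Hz component
freqs = np.arange(5, 41, 5)                       # analysis grid, 5..40 Hz
scalogram = morlet_scalogram(sig, fs, freqs)
peak_freq = freqs[scalogram.mean(axis=1).argmax()]
print(scalogram.shape, peak_freq)
```

The row with the largest average energy is the one whose wavelet resonates with the signal's 20 Hz component, which is exactly the localization property that lets a CNN learn discriminative patterns from scalogram images without hand-crafted features.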


2022 ◽  
Vol 34 (3) ◽  
pp. 0-0

Financial status and its role in the national economy have been increasingly recognized. In order to deduce the source of monetary funds and determine their whereabouts, financial information and prediction have become a scientific method that cannot be ignored in the development of the national economy. This paper improves the existing CNN and applies it to financial credit from different perspectives. Firstly, the noise in the collected data set is deleted, and then principal component analysis is applied to make the clustering result more stable. The observation vectors are segmented to obtain a set of observation vectors corresponding to each hidden state. Based on the output of the PCA algorithm, the mean and variance of each class of observation vectors are recalculated, the new mean and covariance matrix are used for financial credit assessment, and the best model parameters are then determined. The empirical results based on specific data from China's stock market show that the improved convolutional neural network proposed in this paper has advantages and improves prediction accuracy.
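The step of recomputing per-state statistics after PCA can be sketched as follows; the data, cluster labels, and dimensionality are hypothetical stand-ins, since the abstract does not specify them:

```python
import numpy as np

def cluster_statistics(X, labels):
    """Recompute the mean vector and covariance matrix of the
    observation vectors assigned to each hidden state / cluster."""
    stats = {}
    for k in np.unique(labels):
        Xk = X[labels == k]
        stats[k] = (Xk.mean(axis=0), np.cov(Xk, rowvar=False))
    return stats

rng = np.random.default_rng(3)
# Hypothetical stand-in for PCA-denoised financial feature vectors,
# segmented into two hidden states
X = np.concatenate([rng.normal(loc=0, size=(60, 3)),
                    rng.normal(loc=4, size=(60, 3))])
labels = np.repeat([0, 1], 60)
stats = cluster_statistics(X, labels)
print(stats[1][0])   # mean of state 1, close to [4, 4, 4]
```

Each state's `(mean, covariance)` pair then serves as its updated model parameters for the subsequent credit evaluation.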


2020 ◽  
Vol 38 (4A) ◽  
pp. 510-514
Author(s):  
Tay H. Shihab ◽  
Amjed N. Al-Hameedawi ◽  
Ammar M. Hamza

In this paper, to make use of complementary potential in the mapping of LULC, spatial data acquired from Landsat 8 OLI sensor images taken in 2019 are used. They have been rectified, enhanced and then classified according to random forest (RF) and artificial neural network (ANN) methods. Optical remote sensing images have been used to obtain information on the status of the LULC classification and to extract details. The classification of both satellite image types is used to extract features and to analyse the LULC of the study area. The results of the classification showed that the artificial neural network method outperforms the random forest method. The required image processing of the optical remote sensing data used in LULC mapping, including geometric correction and image enhancement, has been carried out. The overall accuracy when using the ANN method was 0.91 and the kappa accuracy was 0.89 for the training data set, while the overall accuracy and the kappa accuracy of the test data set were 0.89 and 0.87, respectively.
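The two accuracy measures reported above are standard functions of the classification confusion matrix. As a brief sketch (the 3-class matrix below is hypothetical, not the paper's data):

```python
import numpy as np

def overall_and_kappa(cm):
    """Overall accuracy and Cohen's kappa from a confusion matrix
    (rows: reference classes, columns: predicted classes)."""
    n = cm.sum()
    po = np.trace(cm) / n                        # observed agreement
    pe = (cm.sum(0) * cm.sum(1)).sum() / n ** 2  # chance agreement
    return po, (po - pe) / (1 - pe)

# Hypothetical 3-class LULC confusion matrix
cm = np.array([[50,  2,  1],
               [ 3, 45,  2],
               [ 1,  2, 44]])
oa, kappa = overall_and_kappa(cm)
print(round(oa, 2), round(kappa, 2))  # 0.93 0.89
```

Kappa discounts agreement expected by chance, which is why it is reported alongside overall accuracy and is always the smaller of the two for an imperfect classifier.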


2019 ◽  
Vol 8 (3) ◽  
pp. 6634-6643 ◽  

Opinion mining and sentiment analysis are valuable for extracting useful subjective information from text documents. Predicting customers' opinions on Amazon products has several benefits, such as reducing customer churn, agent monitoring, handling multiple customers, tracking overall customer satisfaction, quick escalations, and upselling opportunities. However, performing sentiment analysis is a challenging task for researchers, as finding users' sentiments in large datasets is complicated by their unstructured nature, slang, misspellings and abbreviations. To address this problem, a new system is developed in this research study. The proposed system comprises four major phases: data collection, pre-processing, keyword extraction, and classification. Initially, the input data were collected from the Amazon customer review dataset. After collecting the data, pre-processing was carried out to enhance the quality of the collected data. The pre-processing phase comprises three steps: lemmatization, review spam detection, and removal of stop-words and URLs. Then, an effective topic modelling approach, Latent Dirichlet Allocation (LDA), along with modified Possibilistic Fuzzy C-Means (PFCM), was applied to extract the keywords and to identify the concerned topics. The extracted keywords were classified into three forms (positive, negative and neutral) by applying an effective machine learning classifier: a Convolutional Neural Network (CNN). The experimental outcomes showed that the proposed system enhanced sentiment analysis accuracy by 6-20% relative to existing systems.
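The stop-word and URL removal part of the pre-processing phase can be sketched in a few lines; the stop-word list below is a tiny illustrative subset, and lemmatization and spam detection are omitted:

```python
import re

# Illustrative subset of a stop-word list, not an exhaustive one
STOP_WORDS = {"the", "a", "an", "is", "it", "this", "and", "i", "of"}

def preprocess(review):
    """Pre-processing as described in the pipeline: drop URLs,
    lowercase, tokenize, and remove stop words before keyword
    extraction."""
    review = re.sub(r"https?://\S+", "", review)       # strip URLs
    tokens = re.findall(r"[a-z']+", review.lower())    # tokenize
    return [t for t in tokens if t not in STOP_WORDS]

print(preprocess("This is a great product! See https://example.com"))
# → ['great', 'product', 'see']
```

Only after this cleaning do the LDA/PFCM keyword extraction and the CNN classifier see the text, which is why the quality of this stage drives the reported accuracy gains.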


1992 ◽  
Author(s):  
Rupert S. Hawkins ◽  
K. F. Heideman ◽  
Ira G. Smotroff
