Classification of Drowsiness Levels Based on a Deep Spatio-Temporal Convolutional Bidirectional LSTM Network Using Electroencephalography Signals

2019 ◽  
Vol 9 (12) ◽  
pp. 348 ◽  
Author(s):  
Ji-Hoon Jeong ◽  
Baek-Woon Yu ◽  
Dae-Hyeok Lee ◽  
Seong-Whan Lee

Non-invasive brain-computer interfaces (BCI) have been developed for recognizing human mental states with high accuracy and for decoding various types of mental conditions. In particular, accurately decoding a pilot’s mental state is a critical issue as more than 70% of aviation accidents are caused by human factors, such as fatigue or drowsiness. In this study, we report the classification of not only two mental states (i.e., alert and drowsy states) but also five drowsiness levels from electroencephalogram (EEG) signals. To the best of our knowledge, this approach is the first to classify drowsiness levels in detail using only EEG signals. We acquired EEG data from ten pilots in a simulated night flight environment. For accurate detection, we proposed a deep spatio-temporal convolutional bidirectional long short-term memory network (DSTCLN) model. We evaluated the classification performance using Karolinska sleepiness scale (KSS) values for two mental states and five drowsiness levels. The grand-averaged classification accuracies were 0.87 (±0.01) and 0.69 (±0.02), respectively. Hence, we demonstrated the feasibility of classifying five drowsiness levels with high accuracy using deep learning.
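The exact DSTCLN layer configuration is not given in this abstract; as a rough illustration, a spatio-temporal convolutional front-end feeding a bidirectional LSTM could be sketched in PyTorch as follows (all layer sizes, kernel widths, and the 32-channel/200-sample input are hypothetical, not the paper's settings):

```python
import torch
import torch.nn as nn

class SpatioTemporalConvBiLSTM(nn.Module):
    """Minimal sketch: temporal and spatial convolutions over EEG,
    followed by a bidirectional LSTM. Hyperparameters are illustrative."""
    def __init__(self, n_channels=32, n_classes=5):
        super().__init__()
        # temporal convolution along the time axis
        self.temporal = nn.Conv2d(1, 8, kernel_size=(1, 25), padding=(0, 12))
        # spatial convolution across all electrodes
        self.spatial = nn.Conv2d(8, 16, kernel_size=(n_channels, 1))
        self.pool = nn.AvgPool2d((1, 4))
        self.lstm = nn.LSTM(16, 32, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(64, n_classes)  # 2 * hidden -> class logits

    def forward(self, x):  # x: (batch, 1, channels, time)
        h = torch.relu(self.spatial(torch.relu(self.temporal(x))))
        h = self.pool(h)                   # (batch, 16, 1, time // 4)
        h = h.squeeze(2).permute(0, 2, 1)  # (batch, time // 4, 16)
        out, _ = self.lstm(h)
        return self.fc(out[:, -1])         # logits for 5 drowsiness levels

model = SpatioTemporalConvBiLSTM()
logits = model(torch.randn(2, 1, 32, 200))
print(logits.shape)  # torch.Size([2, 5])
```

The spatial convolution spanning all electrodes followed by a recurrent layer over time is a common pattern for EEG decoders; the paper's actual architecture may differ in depth and layer ordering.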

2020 ◽  
Author(s):  
Supriya Sarker ◽  
Md Mokammel Haque

The proposed work develops a Long Short-Term Memory (LSTM) model for multi-class classification of driving maneuvers from a sensor-fusion time-series dataset. The work also analyzes the significance of sensor-fusion data change rules and applies this idea to deep-learning multi-class time-series classification of driving maneuvers. We also propose several hypotheses, which are supported by the experimental results. The proposed model achieves a train accuracy of 99.98%, a test accuracy of 97.2021%, a precision of 0.974848, a recall of 0.960154, and an F1 score of 0.967028; the mean per-class error (MPCE) is 0.01386. The significant rules can accelerate the feature-extraction process for driving data and also help in automatic labeling of unlabeled datasets. Our future approach is to develop a tool for generating categorical labels for unlabeled datasets; we also plan to optimize the proposed classifier using grid search.
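Mean per-class error (MPCE) is the average of the per-class misclassification rates, which can be read off a confusion matrix; a small NumPy sketch (the matrix values below are hypothetical, not the paper's):

```python
import numpy as np

# Hypothetical 3-class confusion matrix: rows = true maneuver, cols = predicted
cm = np.array([[48, 1, 1],
               [2, 45, 3],
               [0, 2, 48]])

# Error rate for each true class = 1 - (correct / total in that row)
per_class_error = 1.0 - np.diag(cm) / cm.sum(axis=1)
mpce = per_class_error.mean()
print(round(float(mpce), 4))  # 0.06
```

Unlike plain accuracy, MPCE weights every class equally, which matters when maneuver classes are imbalanced.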


Entropy ◽  
2020 ◽  
Vol 22 (5) ◽  
pp. 517 ◽  
Author(s):  
Ali M. Hasan ◽  
Mohammed M. AL-Jawad ◽  
Hamid A. Jalab ◽  
Hadil Shaiba ◽  
Rabha W. Ibrahim ◽  
...  

Many health systems around the world have collapsed due to limited capacity and a dramatic increase in suspected COVID-19 cases. What has emerged is the need for an efficient, quick, and accurate method to mitigate the overloading of radiologists' efforts in diagnosing suspected cases. This study combines deep-learned features with Q-deformed entropy handcrafted features to discriminate between COVID-19, pneumonia, and healthy computed tomography (CT) lung scans. Pre-processing is first used to reduce the effect of intensity variations between CT slices, and histogram thresholding is then used to isolate the background of each CT lung scan. Each scan undergoes feature extraction involving both deep learning and a Q-deformed entropy algorithm, and the obtained features are classified with a long short-term memory (LSTM) neural network. Combining all extracted features significantly improves the LSTM network's ability to discriminate precisely between COVID-19, pneumonia, and healthy cases. The maximum accuracy achieved on the collected dataset of 321 patients is 99.68%.


Sensors ◽  
2018 ◽  
Vol 18 (10) ◽  
pp. 3226 ◽  
Author(s):  
Lingfeng Xu ◽  
Xiang Chen ◽  
Shuai Cao ◽  
Xu Zhang ◽  
Xun Chen

To investigate the feasibility of different neural networks for sEMG-based force estimation, this paper applied three types of networks, namely a convolutional neural network (CNN), a long short-term memory (LSTM) network, and their combination (C-LSTM), to predict muscle force generated in static isometric elbow flexion across three circumstances (multi-subject, subject-dependent, and subject-independent). Eight healthy men were recruited for the experiments. The results demonstrated that all three models were applicable for force estimation, with LSTM and C-LSTM achieving better performance: even in the subject-independent situation, they maintained mean RMSE% values as low as 9.07 ± 1.29 and 8.67 ± 1.14, respectively. CNN turned out to be the worse choice, yielding a mean RMSE% of 12.13 ± 1.98. To our knowledge, this work was the first to employ CNN, LSTM, and C-LSTM in sEMG-based force estimation, and the results not only prove the strength of the proposed networks but also point out a potential way to achieve high accuracy in real-time, subject-independent force estimation.
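The abstract does not state how RMSE% is normalized; one common convention divides the RMSE by the range of the measured force, as in this sketch (the normalization choice and the example force values are assumptions):

```python
import numpy as np

def rmse_percent(y_true, y_pred):
    """RMSE expressed as a percentage of the measured signal's range.
    Normalizing by range is one convention; others divide by MVC or mean."""
    y_true = np.asarray(y_true, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    rmse = np.sqrt(np.mean((y_true - y_pred) ** 2))
    return 100.0 * rmse / (np.max(y_true) - np.min(y_true))

true_force = np.array([0.0, 10.0, 20.0, 30.0, 40.0])  # hypothetical values
pred_force = np.array([1.0, 9.0, 21.0, 29.0, 41.0])
print(round(rmse_percent(true_force, pred_force), 2))  # 2.5
```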


2021 ◽  
Vol 13 (5) ◽  
pp. 103
Author(s):  
Giulia Bressan ◽  
Giulia Cisotto ◽  
Gernot R. Müller-Putz ◽  
Selina Christin Wriessnegger

The classification of different fine hand movements from electroencephalogram (EEG) signals represents a relevant research challenge, e.g., in BCI applications for motor rehabilitation. Here, we analyzed two different datasets in which fine hand movements (touch, grasp, palmar, and lateral grasp) were performed in a self-paced modality. We trained and tested a newly proposed Convolutional Neural Network (CNN) and compared its classification performance with two well-established machine learning models, namely shrinkage linear discriminant analysis (sLDA) and Random Forest (RF). Compared to previous literature, we included neuroscientific evidence and trained our CNN model on the so-called movement-related cortical potentials (MRCPs). These are EEG amplitude modulations at low frequencies, i.e., in the 0.3 to 3 Hz range, that have been shown to encode several properties of a movement, e.g., type of grasp, force level, and speed. We showed that the CNN achieved good performance on both datasets (accuracy of 0.70±0.11 and 0.64±0.10, respectively) and was similar or superior to the baseline models (accuracy of 0.68±0.10 and 0.62±0.07 with sLDA; accuracy of 0.70±0.15 and 0.61±0.07 with RF, with comparable precision and recall). In addition, compared to the baselines, our CNN requires a faster pre-processing procedure, paving the way for its possible use in online BCI applications.
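Isolating the 0.3 to 3 Hz MRCP band before classification amounts to a low-frequency band-pass filter; a minimal SciPy sketch (the sampling rate, filter order, and test signal are illustrative assumptions, not the paper's pipeline):

```python
import numpy as np
from scipy.signal import butter, filtfilt

def mrcp_bandpass(eeg, fs=250.0, low=0.3, high=3.0, order=2):
    """Zero-phase band-pass filter isolating the MRCP band.
    A low order is used because the band sits very close to DC."""
    b, a = butter(order, [low, high], btype="bandpass", fs=fs)
    return filtfilt(b, a, eeg, axis=-1)  # filtfilt avoids phase distortion

fs = 250.0
t = np.arange(0, 4, 1 / fs)
# 1 Hz component (inside the MRCP band) plus 50 Hz line noise
signal = np.sin(2 * np.pi * 1.0 * t) + 0.5 * np.sin(2 * np.pi * 50.0 * t)
filtered = mrcp_bandpass(signal, fs=fs)  # line noise is strongly attenuated
```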


2021 ◽  
Vol 14 (4) ◽  
pp. 2408-2418 ◽  
Author(s):  
Tonny I. Okedi ◽  
Adrian C. Fisher

LSTM networks are shown to predict the seasonal component of biophotovoltaic current density and photoresponse to high accuracy.


2020 ◽  
Vol 6 (1) ◽  
pp. 45
Author(s):  
Yesy Diah Rosita ◽  
Yanuarini Nur Sukmaningtyas

Background of the study: Giving a book code, in accordance with the Dewey Decimal Classification (DDC) system, aims to help librarians find books on the shelf precisely and quickly. Purpose: The first step in assigning a code is determining the book's class among the ten main classes of the principal division. Method: This study proposes Optical Character Recognition (OCR) to read the title text on the book cover, pre-processing of the text, and classification with a Long Short-Term Memory (LSTM) neural network. Findings: In general, a librarian labels a book by reading the title on the cover and matching the book's class against the DDC guide; done manually, this task is increasingly time-consuming. We compared classifying the text with and without OCR, where OCR converts the text in images into editable text. According to the experimental results, classification accuracy without OCR is higher than with OCR: 88.57% versus 74.28%, respectively. Conclusion: Nevertheless, involving OCR in this classification is efficient enough to assist a novice librarian, because the accuracy difference is less than 15%.
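The abstract does not detail the text pre-processing; a minimal sketch of the tokenization and integer-encoding step that would typically precede an LSTM title classifier (the vocabulary and example title are hypothetical; the ten DDC main classes are standard):

```python
import re

# The ten main classes of the Dewey Decimal Classification, keyed by hundreds digit
DDC_MAIN_CLASSES = {
    0: "General works & computer science", 1: "Philosophy & psychology",
    2: "Religion", 3: "Social sciences", 4: "Language", 5: "Science",
    6: "Technology", 7: "Arts & recreation", 8: "Literature", 9: "History & geography",
}

def preprocess_title(title):
    """Lowercase and tokenize a book title, dropping punctuation -
    a typical cleaning step before feeding text to an LSTM."""
    return re.findall(r"[a-z0-9]+", title.lower())

def encode(tokens, vocab):
    """Map tokens to integer ids for the embedding layer (0 = out-of-vocabulary)."""
    return [vocab.get(tok, 0) for tok in tokens]

vocab = {"introduction": 1, "to": 2, "algorithms": 3}  # hypothetical vocabulary
tokens = preprocess_title("Introduction to Algorithms!")
print(tokens)                  # ['introduction', 'to', 'algorithms']
print(encode(tokens, vocab))   # [1, 2, 3]
```

In the OCR variant, the title string would come from recognized cover-image text rather than a manually typed title, with the same encoding applied afterwards.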

