A Deep Learning Approach for Automatic Seizure Detection in Children With Epilepsy

2021, Vol. 15
Author(s): Ahmed Abdelhameed, Magdy Bayoumi

Over the last few decades, the electroencephalogram (EEG) has become one of the most vital tools used by physicians to diagnose several neurological disorders of the human brain and, in particular, to detect seizures. Given the severe impact of epileptic seizures on patients' quality of life, the precise diagnosis of epilepsy is essential. Therefore, this article proposes a novel deep-learning approach for detecting seizures in pediatric patients based on the classification of raw, minimally pre-processed multichannel EEG recordings. The approach takes advantage of the automatic feature-learning capabilities of a two-dimensional deep convolutional autoencoder (2D-DCAE) linked to a neural network-based classifier, forming a unified system that is trained in a supervised way to achieve the best classification accuracy between ictal and interictal brain-state signals. For testing and evaluation, two models were designed and assessed using three different EEG segment lengths and a 10-fold cross-validation scheme. Based on five evaluation metrics, the best-performing model was a supervised deep convolutional autoencoder (SDCAE) that uses a bidirectional long short-term memory (Bi-LSTM)-based classifier and an EEG segment length of 4 s. On the public dataset collected by the Children’s Hospital Boston (CHB) and the Massachusetts Institute of Technology (MIT), this model obtained 98.79 ± 0.53% accuracy, 98.72 ± 0.77% sensitivity, 98.86 ± 0.53% specificity, 98.86 ± 0.53% precision, and a 98.79 ± 0.53% F1-score. Based on these results, the new approach is among the most effective seizure detection methods reported on this dataset, compared with existing state-of-the-art methods.
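The five evaluation metrics reported above all follow from a binary (ictal vs. interictal) confusion matrix. A minimal sketch, with hypothetical segment counts that are not taken from the paper:

```python
def seizure_metrics(tp, fp, tn, fn):
    """Compute the five metrics used to compare the models:
    accuracy, sensitivity (recall), specificity, precision, F1-score."""
    accuracy = (tp + tn) / (tp + fp + tn + fn)
    sensitivity = tp / (tp + fn)   # ictal segments correctly flagged
    specificity = tn / (tn + fp)   # interictal segments correctly passed
    precision = tp / (tp + fp)
    f1 = 2 * precision * sensitivity / (precision + sensitivity)
    return accuracy, sensitivity, specificity, precision, f1

# Illustrative counts only (hypothetical, not from the paper):
acc, sens, spec, prec, f1 = seizure_metrics(tp=494, fp=6, tn=494, fn=6)
```

In 10-fold cross-validation, these metrics would be computed per fold and then averaged, which is what produces the mean ± standard deviation figures quoted above.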

2020, Vol. 17 (3), pp. 299-305
Author(s): Riaz Ahmad, Saeeda Naz, Muhammad Afzal, Sheikh Rashid, Marcus Liwicki, ...

This paper presents a deep learning benchmark on a complex dataset known as KFUPM Handwritten Arabic TexT (KHATT). The KHATT dataset consists of complex patterns of handwritten Arabic text-lines. The paper contributes in three main aspects: (1) pre-processing, (2) a deep learning-based approach, and (3) data augmentation. The pre-processing step includes pruning of extra white space and de-skewing of skewed text-lines. We deploy a deep learning approach based on Multi-Dimensional Long Short-Term Memory (MDLSTM) networks and Connectionist Temporal Classification (CTC). The MDLSTM has the advantage of scanning the Arabic text-lines in all directions (horizontal and vertical) to cover dots, diacritics, strokes, and fine inflections. Combining data augmentation with the deep learning approach yields a clear and promising improvement, raising the Character Recognition (CR) rate from the 75.08% baseline to 80.02%.
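As a rough illustration of how CTC turns the MDLSTM's per-timestep outputs into text, best-path (greedy) decoding collapses repeated labels and then drops the blank symbol. A minimal sketch, assuming a hypothetical alphabet in which "-" stands in for the CTC blank:

```python
BLANK = "-"  # assumption: "-" represents the CTC blank class

def ctc_greedy_decode(frame_labels):
    """Best-path CTC decoding: collapse consecutive repeats, then drop
    blanks. frame_labels is the per-timestep argmax of the network output."""
    out = []
    prev = None
    for lab in frame_labels:
        if lab != prev and lab != BLANK:
            out.append(lab)
        prev = lab
    return "".join(out)

# "aa-ab-" decodes to "aab": repeats merge, and the blank lets
# genuinely repeated characters survive as separate symbols.
print(ctc_greedy_decode(list("aa-ab-")))
```

Real systems typically use beam-search CTC decoding, often with a language model; greedy decoding is shown only because it makes the repeat/blank mechanics visible.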


2020, Vol. 2 (2)
Author(s): Mamunur Rashid, Minarul Islam, Norizam Sulaiman, Bifta Sama Bari, Ripon Kumar Saha, ...

2019
Author(s): Nicholas Bernstein, Nicole Fong, Irene Lam, Margaret Roy, David G. Hendrickson, ...

Single cell RNA-seq (scRNA-seq) measurements of gene expression enable an unprecedented high-resolution view into cellular state. However, current methods often result in two or more cells that share the same cell-identifying barcode; these “doublets” violate the fundamental premise of single cell technology and can lead to incorrect inferences. Here, we describe Solo, a semi-supervised deep learning approach that identifies doublets with greater accuracy than existing methods. Solo can be applied in combination with experimental doublet detection methods to further purify scRNA-seq data to true single cells beyond any previous approach.
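One common way such semi-supervised doublet detectors obtain labeled training examples is to simulate doublets by summing the counts of two randomly chosen cells, mimicking two cells sharing one barcode. A minimal sketch on toy gene-count vectors (illustrative only, not Solo's actual implementation):

```python
import random

def simulate_doublets(cells, n_doublets, seed=0):
    """Augment a count matrix with synthetic doublets, providing the
    labeled positive class a semi-supervised detector can train against.
    Each synthetic doublet sums the counts of two distinct random cells."""
    rng = random.Random(seed)
    doublets = []
    for _ in range(n_doublets):
        a, b = rng.sample(range(len(cells)), 2)
        doublets.append([x + y for x, y in zip(cells[a], cells[b])])
    return doublets

cells = [[5, 0, 2], [1, 3, 0], [0, 4, 4]]   # toy gene-count vectors
fake = simulate_doublets(cells, n_doublets=2)
```

A classifier trained to separate these synthetic doublets from the observed barcodes can then score each real barcode for its likelihood of being a doublet.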


Author(s): Guoyang Liu, Lan Tian, Weidong Zhou

Automatic seizure detection is of great significance for epilepsy diagnosis and for alleviating the massive burden of manually inspecting long-term EEG. At present, most seizure detection methods are highly patient-dependent and generalize poorly. In this study, a novel patient-independent approach is proposed to effectively detect seizure onsets. First, the multi-channel EEG recordings are pre-processed by wavelet decomposition. Then, a Convolutional Neural Network (CNN) of suitable depth acts as an EEG feature extractor. Next, the obtained features are fed into a Bidirectional Long Short-Term Memory (BiLSTM) network to further capture temporal variation characteristics. Finally, to reduce the false detection rate (FDR) and improve sensitivity, post-processing, including smoothing and a collar technique, is applied to the model outputs. During training, a novel channel perturbation technique is introduced to enhance the model's generalization ability. The proposed approach is comprehensively evaluated on the public CHB-MIT scalp EEG database as well as on the more challenging SH-SDU scalp EEG database we collected. Segment-based average accuracies of 97.51% and 93.70%, event-based average sensitivities of 86.51% and 89.89%, and average AUC-ROC values of 90.82% and 90.75% are achieved on the CHB-MIT and SH-SDU databases, respectively.
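The smoothing and collar post-processing steps can be sketched on binary per-segment predictions: majority-vote smoothing suppresses isolated false alarms (lowering the FDR), while the collar widens detected events to cover gradual onsets and offsets (raising sensitivity). A minimal sketch, with window and collar widths chosen purely for illustration:

```python
def smooth(preds, win=3):
    """Majority-vote smoothing over a sliding window of per-segment
    binary predictions (1 = seizure), removing isolated false alarms."""
    half = win // 2
    out = []
    for i in range(len(preds)):
        window = preds[max(0, i - half): i + half + 1]
        out.append(1 if sum(window) * 2 > len(window) else 0)
    return out

def collar(preds, width=1):
    """Collar: extend each detected event by `width` segments on both
    sides so gradual seizure onsets/offsets are not clipped."""
    out = preds[:]
    for i, p in enumerate(preds):
        if p:
            for j in range(max(0, i - width), min(len(preds), i + width + 1)):
                out[j] = 1
    return out

raw = [0, 1, 0, 1, 1, 1, 0, 0, 1, 0]   # noisy per-segment outputs
cleaned = collar(smooth(raw), width=1)
```

Note how the two lone 1s in `raw` are smoothed away while the central run of detections survives and is widened by one segment on each side.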


2020, Vol. 17 (6), pp. 935-946
Author(s): Jihene Younes, Hadhemi Achour, Emna Souissi, Ahmed Ferchichi

Language identification is an important task in natural language processing that consists of determining the language of a given text. It has increasingly attracted researchers' interest over the past few years, especially for code-switched informal textual content. In this paper, we focus on identifying the Romanized user-generated Tunisian dialect on the social web. We segment and annotate a corpus extracted from social media and propose a deep learning approach for the identification task. We use a Bidirectional Long Short-Term Memory neural network with Conditional Random Fields decoding (BLSTM-CRF). For word embeddings, we combine word-character BLSTM vector representations with fastText embeddings, which take character n-gram features into account. The overall accuracy obtained is 98.65%.
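The character n-gram features behind fastText-style embeddings can be illustrated by extracting padded subword n-grams from a token; the example word is an arbitrary Romanized token chosen for illustration, and the n-gram range is an assumption (fastText's defaults differ):

```python
def char_ngrams(word, n_min=2, n_max=3):
    """fastText-style subword features: character n-grams of a word
    padded with boundary markers '<' and '>', so the embedding captures
    prefix/suffix cues useful for identifying Romanized dialect words."""
    padded = "<" + word + ">"
    grams = []
    for n in range(n_min, n_max + 1):
        for i in range(len(padded) - n + 1):
            grams.append(padded[i:i + n])
    return grams

print(char_ngrams("brabi"))  # n-grams of "<brabi>"
```

A word's embedding is then typically the sum of the vectors of its n-grams, which lets the model score out-of-vocabulary social-media spellings from their subword pieces.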


2021, Vol. 11 (17), pp. 7940
Author(s): Mohammed Al-Sarem, Abdullah Alsaeedi, Faisal Saeed, Wadii Boulila, Omair AmeerBakhsh

Spreading rumors on social media is considered a cybercrime that affects people, societies, and governments. For instance, some criminals create rumors and post them on the internet, and other people then help spread them. Spreading rumors can also be a form of cyber abuse, where rumors or lies about a victim are posted online to send threatening messages or to share the victim’s personal information. During pandemics, large numbers of rumors spread on social media very quickly, with dramatic effects on people’s health. Detecting these rumors manually is very difficult for the authorities on such open platforms. Therefore, several researchers have studied intelligent methods for detecting such rumors. Detection methods can be classified mainly into machine learning-based and deep learning-based methods. Deep learning methods have comparative advantages over machine learning ones, as they do not require preprocessing and feature engineering, and their performance has shown superior gains in many fields. Therefore, this paper proposes a novel hybrid deep learning model for detecting COVID-19-related rumors on social media (LSTM–PCNN). The proposed model is based on a Long Short-Term Memory (LSTM) network and Concatenated Parallel Convolutional Neural Networks (PCNN). The experiments were conducted on an ArCOV-19 dataset of 3157 tweets, of which 1480 were rumors (46.87%) and 1677 were non-rumors (53.12%). The proposed model showed superior performance compared to other methods in terms of accuracy, recall, precision, and F-score.
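The class balance of the dataset can be checked directly from the reported counts, confirming a roughly even rumor/non-rumor split (the reported 46.87% appears to be a truncated rather than rounded percentage):

```python
# Class balance of the ArCOV-19 subset, from the counts reported above.
total = 3157
rumors, non_rumors = 1480, 1677
assert rumors + non_rumors == total

rumor_pct = 100 * rumors / total          # ~46.88% (reported as 46.87%)
non_rumor_pct = 100 * non_rumors / total  # ~53.12%
```

A near-even split like this means plain accuracy is a reasonable headline metric here, though recall, precision, and F-score are still reported alongside it.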


2021, Vol. 29 (3)
Author(s): Bennilo Fernandes, Kasiprasad Mannepalli

Modeling the interaction between human language and a recorded emotional database lets us explore how such systems perform, and multiple approaches exist for emotion detection in patient services. Clustering techniques have so far been the primary choice in many prominent areas, including emotional speech recognition, and show good results; this paper instead presents Long Short-Term Memory (LSTM), Bi-Directional LSTM (BiLSTM), and Gated Recurrent Unit (GRU) networks as estimators for emotional Tamil datasets. A deep hierarchical LSTM/BiLSTM/GRU architecture is designed to obtain the best results on a long-term voice-learning dataset. Different combinations of the hierarchical deep learning architecture, LSTM & GRU (DHLG), BiLSTM & GRU (DHBG), GRU & LSTM (DHGL), GRU & BiLSTM (DHGB), and dual GRU (DHGG) layers, are designed with a dropout layer to mitigate the learning and gradient-vanishing issues in emotional speech recognition. Moreover, to improve the outcome for each emotional speech signal, various feature extraction combinations are utilized. In the analysis, the proposed DHGB model gives an average classification accuracy of 82.86%, slightly higher than the other models: DHGL (82.58%), DHBG (82%), DHLG (81.14%), and DHGG (80%). Thus, comparing all the models, DHGB gives the most prominent outcome of the five, with minimal training time on a small dataset.


2019, Vol. 5, pp. e210
Author(s): Ilia Sucholutsky, Apurva Narayan, Matthias Schonlau, Sebastian Fischmeister

In most areas of machine learning, it is assumed that data quality is fairly consistent between training and inference. Unfortunately, in real systems, data are plagued by noise, loss, and various other quality-reducing factors. While a number of deep learning algorithms solve end-stage problems of prediction and classification, very few aim to solve the intermediate problems of data pre-processing, cleaning, and restoration. Long Short-Term Memory (LSTM) networks have previously been proposed as a solution for data restoration, but they suffer from a major bottleneck: a large number of sequential operations. We propose using attention mechanisms to entirely replace the recurrent components of these data-restoration networks. We demonstrate that this approach reduces model sizes by as much as two orders of magnitude, cuts training times by a factor of 2 to 4, and achieves 95% accuracy for automotive data restoration. We also show in a case study that this approach improves the performance of downstream algorithms that rely on clean data.
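The non-recurrent replacement at the heart of this idea is scaled dot-product attention: every output position attends to all inputs in parallel, rather than stepping through the sequence as an LSTM must. A minimal pure-Python sketch on toy lists-of-lists (illustrative only, not the paper's implementation):

```python
import math

def attention(Q, K, V):
    """Single-head scaled dot-product attention.
    For each query, score all keys, softmax the scores into weights,
    and return the weighted sum of the values. All positions are
    processed independently, with no sequential recurrence."""
    d = len(K[0])
    out = []
    for q in Q:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d)
                  for k in K]
        m = max(scores)                       # stabilize the softmax
        exps = [math.exp(s - m) for s in scores]
        z = sum(exps)
        weights = [e / z for e in exps]
        out.append([sum(w * v[j] for w, v in zip(weights, V))
                    for j in range(len(V[0]))])
    return out

# One query attending over two key/value pairs; the query is closer
# to the first key, so the first value dominates the output.
y = attention(Q=[[1.0, 0.0]], K=[[1.0, 0.0], [0.0, 1.0]], V=[[1.0], [0.0]])
```

Because each query's weighted sum is independent of the others, the whole computation parallelizes across positions, which is the source of the training-time reduction claimed above.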

