Automated Deep Neural Network Approach for Detection of Epileptic Seizures

2021
Author(s): Nadia Moazen
In this thesis, I focus on exploiting electroencephalography (EEG) signals for early seizure diagnosis in patients. The approach is based on a powerful deep learning algorithm for time series data, the Long Short-Term Memory (LSTM) network. Manual, visual detection of epileptic seizures in the EEG signal by expert neurologists is time-consuming, labor-intensive and error-prone; it can take experts a couple of hours to analyze a single patient record, while immediate action may be needed. This thesis therefore proposes a reliable automatic seizure/non-seizure classification method that could facilitate the identification of characteristic epileptic patterns, such as pre-ictal spikes and seizures, and the determination of seizure frequency, seizure type, etc. To recognize epileptic seizures accurately, the proposed model exploits the temporal dependencies in the EEG data. Experiments on clinical data show that the method achieves high seizure prediction accuracy and maintains reliable performance. The thesis also identifies the most efficient lengths of EEG recording for the highest accuracies of the different classification tasks in automated seizure detection. This could help non-experts predict seizures more comprehensively and alert patients and caregivers to upcoming seizures, protecting patients' daily lives against their unpredictable occurrence.
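The classifier described above hinges on an LSTM consuming an EEG window and emitting a seizure probability. The abstract does not publish the architecture, so the following is only a minimal NumPy sketch of a single-layer LSTM forward pass over one window; the weight shapes, gate ordering, and sigmoid read-out are illustrative assumptions, not the thesis's model:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, U, b):
    """One LSTM time step; the four gates are stacked as
    [input, forget, cell-candidate, output] along the first axis."""
    z = W @ x + U @ h + b
    H = h.shape[0]
    i = sigmoid(z[0:H])           # input gate
    f = sigmoid(z[H:2 * H])       # forget gate
    g = np.tanh(z[2 * H:3 * H])   # candidate cell state
    o = sigmoid(z[3 * H:4 * H])   # output gate
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

def classify_eeg_window(window, W, U, b, w_out, b_out):
    """Run the LSTM over one EEG window (time x channels) and map the
    final hidden state to a seizure probability."""
    H = b.shape[0] // 4
    h, c = np.zeros(H), np.zeros(H)
    for x in window:              # iterate over time samples
        h, c = lstm_step(x, h, c, W, U, b)
    return sigmoid(w_out @ h + b_out)
```

In practice the weights would come from training on labeled seizure/non-seizure windows; the sketch only shows how the temporal dependencies are carried through the hidden and cell states.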

2020
Vol 34 (4)
pp. 437-444
Author(s): Lingyan Ou, Ling Chen

Corporate internet reporting (CIR) offers advantages such as strong timeliness, large volume, and wide coverage of financial information. However, CIR, like any other online information, faces various risks. With the aid of increasingly sophisticated artificial intelligence (AI) technology, this paper proposes an improved deep learning algorithm for predicting CIR risks, aiming to improve the accuracy of CIR risk prediction. After building a reasonable evaluation index system (EIS) for CIR risks, the data involved in risk rating and in predicting the risk transmission effect (RTE) were subjected to structured feature extraction and time series construction. Next, a combinatory CIR risk prediction model was established by combining the autoregressive moving average (ARMA) model with long short-term memory (LSTM); the former is good at depicting linear series, and the latter excels at describing nonlinear series. Experimental results demonstrate the effectiveness of the ARMA-LSTM model. The findings provide a good reference for applying AI technology to risk prediction in other areas.
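The hybrid idea, fitting a linear model first and letting a nonlinear model correct its residuals, can be sketched independently of the paper's exact setup. The sketch below uses a least-squares AR(p) fit as a stand-in for the ARMA part and takes the nonlinear corrector (the LSTM in the paper) as a pluggable function; all names and the one-step-ahead framing are illustrative assumptions:

```python
import numpy as np

def fit_ar(series, p):
    """Least-squares AR(p) fit -- the linear half of the hybrid."""
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    y = series[p:]
    coef, *_ = np.linalg.lstsq(X, y, rcond=None)
    return coef

def hybrid_forecast(series, p, residual_model):
    """One-step forecast: linear AR prediction plus a nonlinear
    correction learned from the AR residuals (the LSTM's role)."""
    coef = fit_ar(series, p)
    X = np.column_stack([series[i:len(series) - p + i] for i in range(p)])
    residuals = series[p:] - X @ coef              # what the linear model missed
    return series[-p:] @ coef + residual_model(residuals)
```

With a trivially linear series the corrector has nothing to add and the AR part alone recovers the next value; on real CIR series the residual model would carry the nonlinear structure.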


2021
Vol 263 (1)
pp. 5552-5554
Author(s): Kim Deukha, Seongwook Jeon, Won June Lee, Junhong Park

Intraocular pressure (IOP) measurement is one of the basic tests performed in ophthalmology, and IOP is known to be an important risk factor for the development and progression of glaucoma. Measuring IOP is important for assessing response to treatment and monitoring disease progression in glaucoma. In this study, we investigate a method for measuring IOP using the characteristics of vibration propagation generated when a structure is in contact with the eyeball. The response was measured using an accelerometer and a force-sensitive resistor to determine their correlation with IOP. Experiments were performed using ex-vivo porcine eyes. To control the IOP, a needle on an infusion line connected to a water bottle was inserted into the porcine eyes through the limbus. A cross-correlation analysis between the accelerometer and force-sensitive-resistor signals was performed to derive a vibration factor that indicates the change in IOP. To analyze the influence of biological tissues such as the eyelid, silicone was placed between the structure and the eyeball. The Long Short-Term Memory (LSTM) deep learning algorithm was used to predict IOP based on the vibration factor.
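The abstract's "vibration factor" is derived from a cross-correlation between the two sensor channels. The exact definition is not given, so the sketch below shows one plausible choice, the peak of the normalized cross-correlation and the lag at which it occurs; treat both the normalization and the peak-picking as assumptions:

```python
import numpy as np

def vibration_factor(accel, force):
    """Peak of the normalized cross-correlation between accelerometer
    and force-sensitive-resistor signals, plus the lag where it occurs."""
    a = (accel - accel.mean()) / (accel.std() + 1e-12)
    f = (force - force.mean()) / (force.std() + 1e-12)
    xc = np.correlate(a, f, mode="full") / len(a)  # normalized correlation
    lag = int(xc.argmax()) - (len(a) - 1)          # offset of the peak
    return xc.max(), lag
```

A scalar feature like this, tracked over time, is the kind of input the abstract's LSTM could consume to predict IOP.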


Author(s): Luotong Wang, Li Qu, Longshu Yang, Yiying Wang, Huaiqiu Zhu

Abstract
Nanopore sequencing is regarded as one of the most promising third-generation sequencing (TGS) technologies. Since 2014, Oxford Nanopore Technologies (ONT) has developed a series of devices based on nanopore sequencing that produce very long reads, with an expected impact on genomics. However, nanopore sequencing reads are susceptible to a fairly high error rate owing to the difficulty of identifying the DNA bases from the complex electrical signals. Although several basecalling tools have been developed for nanopore sequencing over the past years, it is still challenging to correct the sequences after the basecalling procedure. In this study, we developed an open-source DNA basecalling reviser, NanoReviser, based on a deep learning algorithm, to correct the basecalling errors introduced by the current default basecallers. In our module, we re-segmented the raw electrical signals based on the basecalled sequences provided by the default basecallers. By employing convolutional neural networks (CNNs) and bidirectional long short-term memory (Bi-LSTM) networks, we took advantage of the information in both the raw electrical signals and the basecalled sequences. Our results showed that NanoReviser, as a post-basecalling reviser, significantly improves basecalling quality. After being trained on standard ONT sequencing reads from public E. coli and human NA12878 datasets, NanoReviser reduced the sequencing error rate by over 5% for both the E. coli dataset and the human dataset. The performance of NanoReviser was found to be better than that of all current basecalling tools. Furthermore, we analyzed the modified bases of the E. coli dataset and added the methylation information to train our module. With the methylation annotation, NanoReviser reduced the error rate by 7% for the E. coli dataset, and by over 10% for regions of the sequence rich in methylated bases.
To the best of our knowledge, NanoReviser is the first post-processing tool after basecalling to accurately correct the nanopore sequences without the time-consuming procedure of building the consensus sequence. The NanoReviser package is freely available at https://github.com/pkubioinformatics/NanoReviser.
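The headline numbers above are sequencing error rates before and after revision. The paper's exact scoring code is not shown here; one conventional way to compute a per-base error rate is the edit (Levenshtein) distance between a read and its reference, sketched below as a rolling one-row dynamic program:

```python
def levenshtein(read, ref):
    """Edit distance between a basecalled read and the reference,
    computed with a rolling one-row dynamic program."""
    prev = list(range(len(ref) + 1))
    for i, base in enumerate(read, 1):
        cur = [i]
        for j, ref_base in enumerate(ref, 1):
            cur.append(min(prev[j] + 1,                       # deletion
                           cur[j - 1] + 1,                    # insertion
                           prev[j - 1] + (base != ref_base))) # (mis)match
        prev = cur
    return prev[-1]

def error_rate(read, ref):
    """Per-base sequencing error rate of a read against its reference."""
    return levenshtein(read, ref) / max(len(ref), 1)
```

Comparing this rate for the raw basecalls and for NanoReviser's revised output is what a "5% reduction" claim quantifies; production pipelines would typically use a full aligner rather than plain edit distance.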


2020
pp. 158-161
Author(s): Chandraprabha S, Pradeepkumar G, Dineshkumar Ponnusamy, Saranya M D, Satheesh Kumar S, ...

This paper presents an artificial intelligence based system for real-time LDR data, with applications in indoor lighting, places where enormous amounts of heat are produced, agriculture (to increase crop yield), and solar plants (for solar irradiance tracking), as well as for forecasting the LDR readings. The system uses a sensor that measures light intensity by means of an LDR. The data acquired from the sensor are posted to the Adafruit cloud at two-second intervals using a NodeMCU ESP8266 module, and are also presented on the Adafruit dashboard for observing the sensor variables. A long short-term memory (LSTM) network is used for the deep learning stage. The LSTM module uses the historical data recorded in the Adafruit cloud, paired with the NodeMCU, to obtain a real-time, long-term time series of the sensor variable measured as light intensity. The data are extracted from the cloud for analytics, and the deep learning model is then applied to predict future light intensity values.
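Before any LSTM can forecast the logged readings, the flat sensor series pulled from the cloud has to be reshaped into supervised (input window, next value) pairs. A minimal sketch of that windowing step, with the lookback length as an assumed parameter, looks like:

```python
import numpy as np

def make_windows(series, lookback):
    """Turn a logged sensor series into supervised (X, y) pairs:
    each row of X is `lookback` past readings, y is the next one."""
    X = np.array([series[i:i + lookback]
                  for i in range(len(series) - lookback)])
    y = np.asarray(series[lookback:])
    return X, y
```

The resulting X and y arrays are what a forecasting model is trained on; the choice of lookback trades context length against the number of training examples.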


Author(s): Tarik A. Rashid, Mohammad K. Hassan, Mokhtar Mohammadi, Kym Fraser

Recently, the population of the world has increased, and health problems along with it; diabetes mellitus, for example, affects the health of many patients globally. The task of this chapter is to develop a dynamic and intelligent decision support system for patients with different diseases, and it aims at examining machine-learning techniques supported by optimization techniques. Artificial neural networks have been used in healthcare for several decades. Most research works utilize a multilayer perceptron (MLP) trained with the back propagation (BP) learning algorithm to achieve diabetes mellitus classification. Nonetheless, MLP has some drawbacks: convergence can be slow, local minima can affect the training process, it is hard to scale, and it cannot be used with time series data sets. To overcome these drawbacks, long short-term memory (LSTM), a more advanced form of recurrent neural network, is suggested. In this chapter, an adaptable LSTM trained with two optimization algorithms instead of the back propagation learning algorithm is presented. The optimization algorithms are biogeography-based optimization (BBO) and the genetic algorithm (GA). One dataset was collected locally, and a benchmark dataset is used as well. Finally, the datasets are fed into the adaptable models, LSTM with BBO (LSTMBBO) and LSTM with GA (LSTMGA), for classification purposes. The experimental and testing results are compared and are promising. This system helps physicians and doctors provide proper treatment for patients with diabetes mellitus. Details of the source code and implementation of our system can be obtained at "https://github.com/hamakamal/LSTM."
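Replacing backpropagation with a GA (or BBO) means treating the flattened LSTM weights as a genome and maximizing a classification-fitness function. The loop below is a minimal GA of that shape, with uniform crossover, Gaussian mutation, and top-half elitist selection; all hyperparameters and operator choices are illustrative assumptions, not the chapter's configuration:

```python
import numpy as np

def genetic_optimize(fitness, dim, pop_size=30, generations=50,
                     mutation_scale=0.1, seed=0):
    """Minimal GA over a flat weight vector: uniform crossover,
    Gaussian mutation, and top-half elitist selection."""
    rng = np.random.default_rng(seed)
    pop = rng.normal(size=(pop_size, dim))
    for _ in range(generations):
        scores = np.array([fitness(w) for w in pop])
        # keep the fitter half as parents (elitism)
        parents = pop[np.argsort(scores)[::-1][:pop_size // 2]]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            mask = rng.random(dim) < 0.5            # uniform crossover
            child = np.where(mask, a, b)
            child = child + mutation_scale * rng.normal(size=dim)
            children.append(child)
        pop = np.vstack([parents, np.stack(children)])
    scores = np.array([fitness(w) for w in pop])
    return pop[scores.argmax()]
```

For the chapter's setting, `fitness` would decode the vector into LSTM weights and return classification accuracy on the diabetes dataset; the gradient-free loop sidesteps BP's slow convergence and local-minima issues at the cost of many fitness evaluations.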


2021
Vol 54 (3-4)
pp. 439-445
Author(s): Chih-Ta Yen, Sheng-Nan Chang, Cheng-Hong Liao

This study used photoplethysmography (PPG) signals to classify hypertension into four classes: no hypertension, prehypertension, stage I hypertension, and stage II hypertension. Four deep learning models are compared in the study. The main difficulty is finding optimal parameters, such as the kernel, kernel size, and number of layers, under the condition of limited PPG training data. PPG signals were used to train a deep residual convolutional neural network (ResNet CNN) and a bidirectional long short-term memory (BiLSTM) network to determine the optimal operating parameters when each dataset consisted of 2100 data points. During the experiments, the proportion of training to testing data was 8:2. The model demonstrated an optimal classification accuracy of 76% on the testing dataset.
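The 8:2 train/test proportion quoted above is the one concrete protocol detail in the abstract. A hedged sketch of that split (shuffled, seeded for reproducibility; the function name and shuffling choice are assumptions) is:

```python
import numpy as np

def split_80_20(X, y, seed=0):
    """Shuffle the windowed PPG examples and split them 8:2
    into training and testing sets."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(len(X))
    cut = int(0.8 * len(X))                # 80% boundary
    train, test = idx[:cut], idx[cut:]
    return X[train], y[train], X[test], y[test]
```

With scarce PPG data, as the study emphasizes, the held-out 20% is what the reported 76% accuracy would be measured on.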

