Computer-Aided Intracranial EEG Signal Identification Method Based on a Multi-Branch Deep Learning Fusion Model and Clinical Validation

2021 ◽  
Vol 11 (5) ◽  
pp. 615
Author(s):  
Yiping Wang ◽  
Yang Dai ◽  
Zimo Liu ◽  
Jinjie Guo ◽  
Gongpeng Cao ◽  
...  

Surgical intervention for, or control of, drug-refractory epilepsy requires accurate analysis of invasive intracranial EEG (iEEG) data. A multi-branch deep learning fusion model is proposed to identify epileptogenic signals from the epileptogenic area of the brain. The classical branch extracts multi-domain signal features to construct a time-series feature sequence and then classifies it with a bi-directional long short-term memory network with an attention mechanism (Bi-LSTM-AM). The deep learning branch feeds raw time-series signals into a one-dimensional convolutional neural network (1D-CNN) for end-to-end deep feature extraction and signal detection. These two branches are integrated to obtain deep fusion features and results. Resampling is employed to split the imbalanced epileptogenic and non-epileptogenic samples into balanced subsets for clinical validation. The model is first validated on two publicly available benchmark iEEG databases, and its effectiveness is then verified on a private, large-scale, clinical stereo-EEG database. The model achieves high sensitivity (97.78%), accuracy (97.60%), and specificity (97.42%) on the Bern–Barcelona database, surpassing existing state-of-the-art techniques. On the clinical dataset it reaches an average intra-subject accuracy of 92.53% and a cross-subject accuracy of 88.03%. The results suggest that the proposed method is a valuable and robust approach to help researchers and clinicians develop automated identification of the source of iEEG signals.
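The two-branch idea — handcrafted multi-domain features fused with features learned end-to-end from the raw signal — can be made concrete in miniature. The following plain-Python sketch is only an illustration of the fusion pattern, not the authors' Bi-LSTM-AM/1D-CNN implementation; the particular features and convolution kernel are assumptions:

```python
import math

def handcrafted_features(x):
    # Branch 1 (classical): multi-domain statistics of the segment,
    # here mean, signal energy, and line length as example features
    n = len(x)
    mean = sum(x) / n
    energy = sum(v * v for v in x) / n
    line_length = sum(abs(x[i] - x[i - 1]) for i in range(1, n))
    return [mean, energy, line_length]

def conv1d_features(x, kernel=(0.25, 0.5, 0.25)):
    # Branch 2 (deep, stand-in): one 1-D convolution over the raw signal
    # followed by global max pooling, standing in for the end-to-end 1D-CNN
    k = len(kernel)
    out = [sum(kernel[j] * x[i + j] for j in range(k))
           for i in range(len(x) - k + 1)]
    return [max(out)]

def fused_features(x):
    # Deep fusion step: concatenate both branches before a joint classifier
    return handcrafted_features(x) + conv1d_features(x)

segment = [math.sin(0.3 * t) for t in range(50)]
print(fused_features(segment))  # 4-dimensional fused feature vector
```

In the real model each branch is a trained network and the concatenated representation feeds a joint classifier; here the concatenation simply makes the fusion step explicit.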

2021 ◽  
Vol 13 (24) ◽  
pp. 5000
Author(s):  
Felix Reuß ◽  
Isabella Greimeister-Pfeil ◽  
Mariette Vreugdenhil ◽  
Wolfgang Wagner

To ensure future food security, improved agricultural management approaches are required. For many such applications, precise knowledge of the distribution of crop types is essential. Various machine and deep learning models have been used for automated crop classification from microwave remote sensing time series. However, the application of these approaches on a large spatial and temporal scale has barely been investigated. In this study, the performance of two frequently used algorithms, Long Short-Term Memory (LSTM) networks and Random Forest (RF), is assessed for crop classification based on Sentinel-1 time series and meteorological data at a large spatial and temporal scale. For data from Austria, the Netherlands, and France and the years 2015–2019, scenarios with different spatial and temporal scales were defined. To quantify the complexity of these scenarios, the Fisher discriminant ratio (FDR1) was used. The results demonstrate that both classifiers achieve similar results for simple classification tasks with low FDR1 values. With increasing FDR1 values, however, LSTM networks outperform RF. This suggests that the ability of LSTM networks to learn long-term dependencies and identify the relation between radar time series and meteorological data becomes increasingly important for more complex applications. Thus, the study underlines the importance of deep learning models, including LSTM networks, for large-scale applications.
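To illustrate how a separability measure such as FDR1 quantifies task complexity, one common single-feature, two-class form of the Fisher discriminant ratio is (μa − μb)² / (σa² + σb²); the exact multi-class formulation used in the study may differ. A minimal sketch:

```python
from statistics import mean, pvariance

def fisher_discriminant_ratio(class_a, class_b):
    # Two-class, single-feature Fisher discriminant ratio:
    # (mu_a - mu_b)^2 / (var_a + var_b)
    num = (mean(class_a) - mean(class_b)) ** 2
    den = pvariance(class_a) + pvariance(class_b)
    return num / den

# Well-separated classes give a high FDR (easy classification task);
# overlapping classes give a low FDR (hard task)
easy = fisher_discriminant_ratio([1.0, 1.1, 0.9], [5.0, 5.2, 4.8])
hard = fisher_discriminant_ratio([1.0, 2.0, 3.0], [1.5, 2.5, 3.5])
assert easy > hard
print(easy, hard)
```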


2021 ◽  
Author(s):  
Kaiwen Sheng ◽  
Peng Qu ◽  
Le Yang ◽  
Xiaofei Liu ◽  
Liuyuan He ◽  
...  

Computational neural models are essential tools for neuroscientists to study the functional roles of single neurons or neural circuits. With recent advances in experimental techniques, there is a growing demand to build neural models at the single-neuron or large-scale circuit level. A long-standing challenge in building such models lies in tuning their free parameters so that they closely reproduce experimental recordings. Many advanced machine-learning-based methods for parameter tuning have been developed recently, but most are task-specific or require onerous manual intervention; a general, fully automated method has so far been lacking. Here, we present a Long Short-Term Memory (LSTM)-based deep learning method, General Neural Estimator (GNE), to fully automate the parameter tuning procedure, which can be directly applied to both single neuronal models and large-scale neural circuits. We made comprehensive comparisons with many advanced methods, and GNE showed outstanding performance on both synthesized and experimental data. Finally, we propose a roadmap centered on GNE to guide neuroscientists in computationally reconstructing single neurons and neural circuits, which might inspire future brain reconstruction techniques and corresponding experimental design. The code of our work will be publicly available upon acceptance of this paper.
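The core task GNE automates — finding model parameters whose simulated output matches a recording — can be shown with a toy example. Here a brute-force grid search stands in for the learned LSTM estimator, and the decay model, parameter values, and noise level are all hypothetical:

```python
import math
import random

random.seed(3)

def simulate(tau, n=30):
    # Toy "neuronal model": exponential decay with free time constant tau
    return [math.exp(-t / tau) for t in range(n)]

def estimate(target, candidates):
    # Stand-in for the learned estimator: pick the candidate parameter
    # whose simulation best matches the recording (least squares)
    def loss(tau):
        sim = simulate(tau)
        return sum((s - y) ** 2 for s, y in zip(sim, target))
    return min(candidates, key=loss)

# A noisy "recording" generated with tau = 5.0
recording = [v + random.gauss(0, 0.01) for v in simulate(5.0)]
candidates = [1.0, 2.0, 5.0, 10.0]
print(estimate(recording, candidates))
```

GNE replaces the exhaustive search with a network that maps recordings directly to parameter estimates, which is what makes the approach scale to circuit-level models.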


Sensors ◽  
2021 ◽  
Vol 21 (8) ◽  
pp. 2852
Author(s):  
Parvathaneni Naga Srinivasu ◽  
Jalluri Gnana SivaSai ◽  
Muhammad Fazal Ijaz ◽  
Akash Kumar Bhoi ◽  
Wonjoon Kim ◽  
...  

Deep learning models are efficient at learning the features that help to understand complex patterns precisely. This study proposes a computerized process for classifying skin disease through a deep-learning pipeline based on MobileNet V2 and Long Short-Term Memory (LSTM). The MobileNet V2 model proved to be efficient, with good accuracy, and can run on lightweight computational devices, while the LSTM makes the proposed model efficient at maintaining stateful information for precise predictions. A grey-level co-occurrence matrix is used for assessing the progress of diseased growth. The performance has been compared against other state-of-the-art models such as Fine-Tuned Neural Networks (FTNN), Convolutional Neural Network (CNN), Very Deep Convolutional Networks for Large-Scale Image Recognition developed by the Visual Geometry Group (VGG), and a convolutional neural network architecture expanded with a few changes. On the HAM10000 dataset, the proposed method outperformed the other methods with more than 85% accuracy. It recognizes the affected region faster, with almost 2× fewer computations than the conventional MobileNet model, resulting in minimal computational effort. Furthermore, a mobile application is designed for instant and proper action; it helps patients and dermatologists identify the type of disease from an image of the affected region at the initial stage of the skin disease. These findings suggest that the proposed system can help general practitioners efficiently and effectively diagnose skin conditions, thereby reducing further complications and morbidity.
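The grey-level co-occurrence matrix mentioned above counts how often pairs of grey levels co-occur at a fixed pixel offset; texture statistics such as contrast are then derived from it. The following is a minimal pure-Python sketch (the quantization to 4 levels and the horizontal offset are assumptions; the study's exact GLCM configuration is not given):

```python
def glcm(image, levels=4, dx=1, dy=0):
    # Grey-level co-occurrence matrix for a single offset (dx, dy):
    # m[i][j] counts pixel pairs with grey levels (i, j) at that offset
    m = [[0] * levels for _ in range(levels)]
    rows, cols = len(image), len(image[0])
    for r in range(rows):
        for c in range(cols):
            r2, c2 = r + dy, c + dx
            if 0 <= r2 < rows and 0 <= c2 < cols:
                m[image[r][c]][image[r2][c2]] += 1
    return m

def contrast(m):
    # GLCM contrast: sum over (i, j) of (i - j)^2 * p(i, j);
    # higher values indicate coarser, more varied texture
    total = sum(sum(row) for row in m)
    return sum((i - j) ** 2 * m[i][j] / total
               for i in range(len(m)) for j in range(len(m)))

# A small quantized image with blocky texture
img = [[0, 0, 1, 1],
       [0, 0, 1, 1],
       [2, 2, 3, 3],
       [2, 2, 3, 3]]
print(contrast(glcm(img)))
```

Tracking such texture statistics over successive images of a lesion is one way a GLCM can quantify the progression of diseased growth.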


Symmetry ◽  
2021 ◽  
Vol 13 (7) ◽  
pp. 1156
Author(s):  
Mohamed Yusuf Hassan

The most effective techniques for predicting time series patterns include machine learning and classical time series methods. The aim of this study is to find the best artificial intelligence and classical forecasting techniques for predicting the spread of acute respiratory infection (ARI) and pneumonia among under-five-year-old children in Somaliland. The techniques used in the study include seasonal autoregressive integrated moving average (SARIMA), mixture transition distribution (MTD), and long short-term memory (LSTM) deep learning models. The data used in the study were monthly observations collected from five regions in Somaliland from 2011 to 2014. Prediction results from the three best competing models are compared using root mean square error (RMSE) and mean absolute deviation (MAD) accuracy measures. The results show that the deep learning LSTM and MTD models slightly outperformed the classical SARIMA model in predicting ARI values.
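The two accuracy measures used to rank the competing models are standard and easy to state. A minimal sketch of both, on made-up numbers:

```python
from math import sqrt

def rmse(actual, predicted):
    # Root mean square error: penalizes large errors quadratically
    return sqrt(sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual))

def mad(actual, predicted):
    # Mean absolute deviation: average magnitude of the errors
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

actual    = [10, 12, 14, 13]   # hypothetical monthly ARI counts
predicted = [11, 12, 13, 15]   # hypothetical model forecasts
print(rmse(actual, predicted), mad(actual, predicted))
```

RMSE exceeds MAD whenever the errors are unequal, which is why the two measures can rank models differently.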


Author(s):  
Yuheng Hu ◽  
Yili Hong

Residents often rely on newspapers and television to gather hyperlocal news for community awareness and engagement. More recently, social media have emerged as an increasingly important source of hyperlocal news. Thus far, the literature on using social media to create desirable societal benefits, such as civic awareness and engagement, is still in its infancy. One key challenge in this research stream is to timely and accurately distill information from noisy social media data streams to community members. In this work, we develop SHEDR (social media–based hyperlocal event detection and recommendation), an end-to-end neural event detection and recommendation framework with a particular use case for Twitter to facilitate residents’ information seeking of hyperlocal events. The key model innovation in SHEDR lies in the design of the hyperlocal event detector and the event recommender. First, we harness the power of two popular deep neural network models, the convolutional neural network (CNN) and long short-term memory (LSTM), in a novel joint CNN-LSTM model to characterize spatiotemporal dependencies for capturing unusualness in a region of interest, which is classified as a hyperlocal event. Next, we develop a neural pairwise ranking algorithm for recommending detected hyperlocal events to residents based on their interests. To alleviate the sparsity issue and improve personalization, our algorithm incorporates several types of contextual information covering topic, social, and geographical proximities. We perform comprehensive evaluations based on two large-scale data sets comprising geotagged tweets covering Seattle and Chicago. We demonstrate the effectiveness of our framework in comparison with several state-of-the-art approaches. We show that our hyperlocal event detection and recommendation models consistently and significantly outperform other approaches in terms of precision, recall, and F-1 scores. 
Summary of Contribution: In this paper, we focus on a novel and important, yet largely underexplored application of computing—how to improve civic engagement in local neighborhoods via local news sharing and consumption based on social media feeds. To address this question, we propose two new computational and data-driven methods: (1) a deep learning–based hyperlocal event detection algorithm that scans spatially and temporally to detect hyperlocal events from geotagged Twitter feeds; and (2) a personalized deep learning–based hyperlocal event recommender system that systematically integrates several contextual cues such as topical, geographical, and social proximity to recommend the detected hyperlocal events to potential users. We conduct a series of experiments to examine our proposed models. The outcomes demonstrate that our algorithms are significantly better than the state-of-the-art models and can provide users with more relevant information about the local neighborhoods that they live in, which in turn may boost their community engagement.
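The abstract does not spell out the neural pairwise ranking objective. A common formulation for this kind of recommender, shown only as a hedged sketch, is a BPR-style loss that pushes the score of a relevant event above that of an irrelevant one:

```python
from math import exp, log

def pairwise_ranking_loss(score_pos, score_neg):
    # BPR-style pairwise objective: -log sigmoid(s_pos - s_neg).
    # The loss is small when the event the user cares about (pos)
    # is scored above the one they do not (neg)
    diff = score_pos - score_neg
    return -log(1.0 / (1.0 + exp(-diff)))

good = pairwise_ranking_loss(2.0, -1.0)  # correct ordering -> low loss
bad = pairwise_ranking_loss(-1.0, 2.0)   # wrong ordering  -> high loss
assert good < bad
print(good, bad)
```

In the paper's setting the scores would come from a neural model over the contextual features (topic, social, and geographical proximity); the loss above only illustrates the pairwise ranking principle.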


2021 ◽  
Author(s):  
Pradeep Lall ◽  
Tony Thomas ◽  
Ken Blecker

Prognostics and remaining useful life (RUL) estimation of complex systems is essential to operational safety and increased efficiency, and helps to schedule maintenance proactively. Modeling the remaining useful life of a system with many complexities has become possible with the rapid development of deep learning as a computational technique for failure prediction. Deep learning can adapt to the complex and nonlinear behavior of multivariate parameters, which is difficult for traditional time-series models used for forecasting and prediction. In this paper, a deep learning approach based on a Long Short-Term Memory (LSTM) network is used to predict the remaining useful life of PCBs under different conditions of temperature and vibration. The technique can identify the underlying patterns in the time series that predict the RUL. This study involves feature vector identification and RUL estimation for SAC305, SAC105, and tin-lead solder PCBs under different vibration levels and temperature conditions. The acceleration levels of vibration are fixed at 5g and 10g, while the temperature levels are 55°C and 100°C. The test board is a multilayer FR4 configuration with JEDEC standard dimensions, consisting of twelve packages arranged in a rectangular pattern. Strain signals are acquired from the backside of the PCB at symmetric locations to identify the failure of all the packages during vibration; resistance values are acquired simultaneously during the experiment until most of the packages on the board have failed. The feature vectors are identified from statistical analysis of the frequency and instantaneous-frequency components of the strain signals. Principal component analysis is used as a data reduction technique to identify the different patterns produced from the four strain signals as packages fail during vibration.
The LSTM deep learning method is used to model the RUL of the packages at each individual operating condition of vibration for all three solder materials in this study. For each material, a combined RUL prediction model that accounts for changes in the operating conditions is also developed.
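The PCA step above reduces several correlated strain channels to a few dominant patterns. A pure-Python sketch with two synthetic channels (the study uses four real strain signals; this toy stand-in only illustrates why the first principal component dominates when channels are highly correlated):

```python
from math import sqrt
from statistics import mean, pvariance
import random

random.seed(0)
# Two synthetic, highly correlated "strain channels"
base = [random.gauss(0, 1) for _ in range(500)]
ch1 = [b + random.gauss(0, 0.1) for b in base]
ch2 = [b + random.gauss(0, 0.1) for b in base]

def cov(x, y):
    # Population covariance of two equal-length series
    mx, my = mean(x), mean(y)
    return sum((a - mx) * (b - my) for a, b in zip(x, y)) / len(x)

# Eigenvalues of the 2x2 covariance matrix in closed form
a, b, c = pvariance(ch1), cov(ch1, ch2), pvariance(ch2)
disc = sqrt(((a - c) / 2) ** 2 + b * b)
lam1 = (a + c) / 2 + disc   # variance along the first principal component
lam2 = (a + c) / 2 - disc   # variance along the second

explained = lam1 / (lam1 + lam2)
print(explained)  # close to 1: one component captures almost all variance
```

For four channels the same idea applies with a 4×4 covariance matrix; the leading components then summarize the failure-related patterns shared across the strain signals.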


2020 ◽  
Author(s):  
Yuan Yuan ◽  
Lei Lin

Satellite image time series (SITS) classification is a major research topic in remote sensing and is relevant for a wide range of applications. Deep learning approaches have been commonly employed for SITS classification and have provided state-of-the-art performance. However, deep learning methods suffer from overfitting when labeled data are scarce. To address this problem, we propose a novel self-supervised pre-training scheme to initialize a Transformer-based network by utilizing large-scale unlabeled data. In detail, the model is asked to predict randomly contaminated observations given an entire time series of a pixel. The main idea of our proposal is to leverage the inherent temporal structure of satellite time series to learn general-purpose spectral-temporal representations related to land cover semantics. Once pre-training is completed, the pre-trained network can be further adapted to various SITS classification tasks by fine-tuning all the model parameters on small-scale task-related labeled data. In this way, the general knowledge and representations about SITS can be transferred to a label-scarce task, thereby improving the generalization performance of the model as well as reducing the risk of overfitting. Comprehensive experiments have been carried out on three benchmark datasets over large study areas. Experimental results demonstrate the effectiveness of the proposed method, with classification accuracy improvements ranging from 1.91% to 6.69%. This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
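The pre-training task — corrupt random observations in a pixel's time series and score the model only on recovering them — can be sketched without any network. Here a trivial neighbour-interpolation "model" stands in for the Transformer, and the contamination fraction is an assumption:

```python
import random

random.seed(1)

def contaminate(series, frac=0.3):
    # Randomly replace a fraction of observations with noise;
    # the self-supervised task is to recover the original values
    corrupted, mask = list(series), []
    idx = random.sample(range(1, len(series) - 1), int(frac * len(series)))
    for i in idx:
        corrupted[i] = random.uniform(min(series), max(series))
        mask.append(i)
    return corrupted, mask

def reconstruction_loss(series, predicted, mask):
    # The loss is computed only at the contaminated positions
    return sum((series[i] - predicted[i]) ** 2 for i in mask) / len(mask)

series = [0.1 * t for t in range(20)]          # toy single-pixel time series
corrupted, mask = contaminate(series)
# Stand-in "model": linear interpolation from the two neighbours
predicted = [(corrupted[i - 1] + corrupted[i + 1]) / 2 if i in mask else corrupted[i]
             for i in range(len(corrupted))]
print(reconstruction_loss(series, predicted, mask))
```

In the actual scheme the Transformer sees the whole contaminated series and is trained to minimize exactly this kind of masked reconstruction loss, which forces it to learn the temporal structure of the data.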


F1000Research ◽  
2021 ◽  
Vol 10 ◽  
pp. 1010
Author(s):  
Nouar AlDahoul ◽  
Hezerul Abdul Karim ◽  
Abdulaziz Saleh Ba Wazir ◽  
Myles Joshua Toledo Tan ◽  
Mohammad Faizal Ahmad Fauzi

Background: Laparoscopy is a surgery performed in the abdomen without making large incisions in the skin, with the aid of a video camera, resulting in laparoscopic videos. The laparoscopic video is prone to various distortions such as noise, smoke, uneven illumination, defocus blur, and motion blur. One of the main components in the feedback loop of video enhancement systems is distortion identification, which automatically classifies the distortions affecting a video and selects the video enhancement algorithm accordingly. This paper addresses the laparoscopic video distortion identification problem by developing fast and accurate multi-label distortion classification using a deep learning model. Current deep learning solutions based on convolutional neural networks (CNNs) can address laparoscopic video distortion classification, but they learn only spatial information. Methods: In this paper, utilization of both spatial and temporal features in a CNN-long short-term memory (CNN-LSTM) model is proposed as a novel solution to enhance the classification. First, a pre-trained ResNet50 CNN was used to extract spatial features from each video frame by transferring representations from large-scale natural images to laparoscopic images. Next, an LSTM was utilized to consider the temporal relations between the features extracted from the laparoscopic video frames and to produce the multi-label categories. A novel laparoscopic video dataset proposed in the ICIP 2020 challenge was used for training and evaluation of the proposed method. Results: The experiments conducted show that the proposed CNN-LSTM outperforms the existing solutions in terms of accuracy (85%) and F1-score (94.2%). Additionally, the proposed distortion identification model is able to run in real time with low inference time (0.15 s). Conclusions: The proposed CNN-LSTM model is a feasible solution for distortion identification in laparoscopic videos.
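Because a laparoscopic video can suffer from several distortions at once, the output head is multi-label: one independent sigmoid per distortion class rather than a single softmax. A minimal sketch of that decision step (the logits and the 0.5 threshold are hypothetical; the real values would come from the ResNet50-feature → LSTM pipeline):

```python
from math import exp

def sigmoid(x):
    return 1.0 / (1.0 + exp(-x))

# The five distortion classes named in the abstract
DISTORTIONS = ["noise", "smoke", "uneven_illumination", "defocus_blur", "motion_blur"]

def multi_label_decision(logits, threshold=0.5):
    # Multi-label head: each class gets its own sigmoid, so several
    # distortions can be active for the same video simultaneously
    return [name for name, z in zip(DISTORTIONS, logits)
            if sigmoid(z) >= threshold]

# Hypothetical per-class logits for one video clip
print(multi_label_decision([2.1, -1.3, 0.4, -2.0, 1.7]))
```

Training such a head uses a per-class binary cross-entropy, which is what distinguishes multi-label classification from the usual single-label setup.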


Mathematics ◽  
2020 ◽  
Vol 8 (7) ◽  
pp. 1078
Author(s):  
Ruxandra Stoean ◽  
Catalin Stoean ◽  
Miguel Atencia ◽  
Roberto Rodríguez-Labrada ◽  
Gonzalo Joya

Uncertainty quantification in deep learning models is especially important for medical applications of this complex and successful type of neural architecture. One popular technique is Monte Carlo dropout, which gives a sample of outputs for a record that can be summarized statistically in terms of average probability and variance for each diagnostic class of the problem. The current paper puts forward a convolutional–long short-term memory network model with a Monte Carlo dropout layer for obtaining information regarding the model uncertainty for saccadic records of all patients. These are next used in assessing the uncertainty of the learning model at the higher level of sets of multiple records (i.e., registers) that are gathered for one patient case by the examining physician towards an accurate diagnosis. Means and standard deviations are additionally calculated for the Monte Carlo uncertainty estimates of groups of predictions. These serve as a new collection on which a random forest model can perform both classification and ranking of variable importance. The approach is validated on a real-world problem of classifying electrooculography time series for the early detection of spinocerebellar ataxia 2 and reaches an accuracy of 88.59% in distinguishing between the three classes of patients.
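The Monte Carlo dropout idea is simple: keep dropout active at test time, run many stochastic forward passes for the same record, and treat the spread of the outputs as the model's uncertainty. A minimal pure-Python sketch (the tiny linear "network", features, and weights are stand-ins for the paper's convolutional-LSTM model):

```python
import random
from statistics import mean, pstdev

random.seed(7)

def mc_dropout_forward(features, weights, p_drop=0.5):
    # One stochastic forward pass: dropout stays ACTIVE at test time,
    # randomly zeroing input features before a linear scoring layer
    kept = [f if random.random() > p_drop else 0.0 for f in features]
    return sum(w * k for w, k in zip(weights, kept))

features = [0.8, 0.2, 0.5, 0.9]   # hypothetical features of one saccadic record
weights = [1.0, -0.5, 0.3, 0.7]   # hypothetical trained weights

# T stochastic passes give a sample of outputs for the record;
# the mean is the prediction, the standard deviation the uncertainty
samples = [mc_dropout_forward(features, weights) for _ in range(200)]
print(mean(samples), pstdev(samples))
```

In the paper, these per-record means and deviations are further aggregated over all records in a patient's register and handed to a random forest, so the uncertainty estimates themselves become the features for the final diagnosis.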


Atmosphere ◽  
2020 ◽  
Vol 11 (5) ◽  
pp. 487 ◽  
Author(s):  
Trang Thi Kieu Tran ◽  
Taesam Lee ◽  
Ju-Young Shin ◽  
Jong-Suk Kim ◽  
Mohamad Kamruzzaman

Time series forecasting of meteorological variables such as daily temperature has recently drawn considerable attention from researchers seeking to address the limitations of traditional forecasting models. Middle-range (e.g., 5–20 days) forecasting, however, is an extremely challenging task for which dynamical weather models struggle to deliver reliable results. Moreover, developing and selecting an accurate time-series prediction model is difficult, because it involves training various distinct models to find the best among them, and selecting an optimum topology for the chosen model is important too. Accurate forecasting of maximum temperature plays a vital role in human life as well as in sectors such as agriculture and industry: rising temperatures will intensify the urban heat island effect, especially in summer, and significantly affect people's health. We applied meta-learning principles to optimize the deep learning network structure through hyperparameter optimization. In particular, a genetic algorithm (GA) was used as the meta-learner to select the optimum architecture for each network. The dataset was used to train and test three different models, namely the artificial neural network (ANN), recurrent neural network (RNN), and long short-term memory (LSTM). Our results demonstrate that the hybrid model of an LSTM network and GA outperforms the other models for long lead-time forecasting. Specifically, LSTM forecasts are superior to RNN and ANN forecasts for 15-day-ahead prediction in summer, with a root mean square error (RMSE) of 2.719 °C.
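The GA-as-meta-learner loop can be sketched without any neural network: encode candidate topologies as genomes, score each by a fitness function, keep the fittest, and mutate. In this illustration the search space and the fitness function are invented stand-ins; in the real study the fitness step trains an LSTM and evaluates its validation error, which is the expensive part:

```python
import random

random.seed(42)

# Hypothetical GA search space for the LSTM topology
SPACE = {"units": [16, 32, 64, 128], "layers": [1, 2, 3], "window": [5, 10, 15, 20]}

def fitness(genome):
    # Stand-in for validation RMSE of a trained LSTM with this topology;
    # lower is better, and (64 units, 2 layers, window 15) is the optimum here
    return (abs(genome["units"] - 64) / 64
            + abs(genome["layers"] - 2)
            + abs(genome["window"] - 15) / 15)

def random_genome():
    return {k: random.choice(v) for k, v in SPACE.items()}

def mutate(genome):
    # Variation: re-draw one randomly chosen gene
    child = dict(genome)
    gene = random.choice(list(SPACE))
    child[gene] = random.choice(SPACE[gene])
    return child

population = [random_genome() for _ in range(8)]
history = []
for _ in range(20):
    population.sort(key=fitness)            # selection: rank by fitness
    history.append(fitness(population[0]))  # track the best genome so far
    parents = population[:4]                # elitism: keep the fittest half
    population = parents + [mutate(random.choice(parents)) for _ in range(4)]

best = min(population, key=fitness)
print(best, fitness(best))
```

Because the fittest genomes are always retained, the best fitness is non-increasing across generations; real GA variants add crossover and larger populations, but the select-and-mutate loop is the core of the meta-learning step.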

