Deep Learning of Simultaneous Intracranial and Scalp EEG for Prediction, Detection, and Lateralization of Mesial Temporal Lobe Seizures

2021 ◽  
Vol 12 ◽  
Author(s):  
Zan Li ◽  
Madeline Fields ◽  
Fedor Panov ◽  
Saadi Ghatan ◽  
Bülent Yener ◽  
...  

In people with drug resistant epilepsy (DRE), seizures are unpredictable, often occurring with little or no warning. This unpredictability causes anxiety and accounts for much of the morbidity and mortality of seizures. In this work, 102 seizures of mesial temporal lobe onset were analyzed from 19 patients with DRE who had simultaneous intracranial EEG (iEEG) and scalp EEG as part of their surgical evaluation. The first aim of this paper was to develop machine learning models for seizure prediction and detection using (i) iEEG only, (ii) scalp EEG only, and (iii) joint analysis of both iEEG and scalp EEG. The second aim was to test whether machine learning could detect a seizure on scalp EEG when that seizure was not detectable by the human eye (surface negative) but was seen on iEEG. The final aim was to determine whether the deep learning algorithm could correctly lateralize the seizure onset. The seizure detection and prediction problems were addressed jointly by training deep neural networks (DNNs) on 4 classes: non-seizure, pre-seizure, left mesial temporal onset seizure, and right mesial temporal onset seizure. To address these aims, classification accuracy was tested using two DNN architectures against 3 different types of similarity graphs constructed from different time series of EEG data. The convolutional neural network (CNN) with the Waxman similarity graph yielded the highest accuracy across all EEG data (iEEG, scalp EEG, and combined). Specifically, 1-second epochs of EEG were correctly assigned to their seizure, pre-seizure, or non-seizure category over 98% of the time. Importantly, the pre-seizure state was classified correctly in the vast majority of epochs (>97%). Detection from scalp EEG data alone of surface negative seizures, and of the surface negative portion of seizures with delayed scalp onset, was over 97%. In addition, the model accurately lateralized all of the seizures from scalp data, including the surface negative seizures.
This work suggests that highly accurate seizure prediction and detection are feasible using either intracranial or scalp EEG data. Furthermore, surface negative seizures can be accurately predicted, detected, and lateralized with machine learning even when they are not visible to the human eye.
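The Waxman similarity graph used with the best-performing CNN can be sketched roughly as follows. This is a minimal, dependency-free reading in which nodes are EEG channels and edge weights decay exponentially with the Euclidean distance between channel time series; the function name, the parameters alpha and beta, and the deterministic weighting are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def waxman_similarity_graph(epoch, alpha=0.5, beta=1.0):
    """Weighted Waxman-style similarity graph for one EEG epoch.

    epoch: array of shape (n_channels, n_samples); each node is a channel.
    Edge weight between channels u and v is beta * exp(-d(u, v) / (alpha * d_max)),
    where d is the Euclidean distance between the two channel time series.
    """
    # Pairwise Euclidean distances between channel time series.
    diffs = epoch[:, None, :] - epoch[None, :, :]
    d = np.sqrt((diffs ** 2).sum(axis=-1))
    d_max = d.max() if d.max() > 0 else 1.0
    w = beta * np.exp(-d / (alpha * d_max))
    np.fill_diagonal(w, 0.0)  # no self-loops
    return w
```

The adjacency matrix produced for each 1-second epoch could then serve as the input representation for a classifier over the four classes.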

Algorithms ◽  
2021 ◽  
Vol 14 (2) ◽  
pp. 39
Author(s):  
Carlos Lassance ◽  
Vincent Gripon ◽  
Antonio Ortega

Deep Learning (DL) has attracted a lot of attention for its ability to reach state-of-the-art performance in many machine learning tasks. The core principle of DL methods consists of training composite architectures in an end-to-end fashion, where inputs are associated with outputs trained to optimize an objective function. Because of their compositional nature, DL architectures naturally exhibit several intermediate representations of the inputs, which belong to so-called latent spaces. When treated individually, these intermediate representations are most of the time unconstrained during the learning process, as it is unclear which properties should be favored. However, when processing a batch of inputs concurrently, the corresponding set of intermediate representations exhibit relations (what we call a geometry) on which desired properties can be sought. In this work, we show that it is possible to introduce constraints on these latent geometries to address various problems. In more detail, we propose to represent geometries by constructing similarity graphs from the intermediate representations obtained when processing a batch of inputs. By constraining these Latent Geometry Graphs (LGGs), we address the three following problems: (i) reproducing the behavior of a teacher architecture is achieved by mimicking its geometry, (ii) designing efficient embeddings for classification is achieved by targeting specific geometries, and (iii) robustness to deviations on inputs is achieved via enforcing smooth variation of geometry between consecutive latent spaces. Using standard vision benchmarks, we demonstrate the ability of the proposed geometry-based methods in solving the considered problems.
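A minimal sketch of constructing a Latent Geometry Graph from a batch of intermediate representations might look as follows, assuming cosine similarity and a k-nearest-neighbour adjacency; the function name, the choice of similarity, and the value of k are assumptions for illustration, not the paper's exact construction.

```python
import numpy as np

def latent_geometry_graph(features, k=3):
    """Build a Latent Geometry Graph (LGG) for one batch at one layer.

    features: array of shape (batch_size, dim), the intermediate
    representations of a batch of inputs at a given layer.
    Returns a symmetric 0/1 adjacency connecting each sample to its
    k most cosine-similar neighbours within the batch.
    """
    x = features / np.linalg.norm(features, axis=1, keepdims=True)
    sim = x @ x.T                          # pairwise cosine similarities
    np.fill_diagonal(sim, -np.inf)         # exclude self-edges
    n = sim.shape[0]
    adj = np.zeros((n, n))
    nn = np.argsort(-sim, axis=1)[:, :k]   # top-k neighbours per row
    rows = np.repeat(np.arange(n), k)
    adj[rows, nn.ravel()] = 1.0
    return np.maximum(adj, adj.T)          # symmetrize
```

Graphs built this way at consecutive layers could then be compared, for instance to penalize abrupt changes of geometry between latent spaces.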


2017 ◽  
Author(s):  
Najib J. Majaj ◽  
Denis G. Pelli

ABSTRACT
Today many vision-science presentations employ machine learning, especially the version called "deep learning". Many neuroscientists use machine learning to decode neural responses. Many perception scientists try to understand how living organisms recognize objects. To them, deep neural networks offer benchmark accuracies for recognition of learned stimuli. Originally machine learning was inspired by the brain. Today, machine learning is used as a statistical tool to decode brain activity. Tomorrow, deep neural networks might become our best model of brain function. This brief overview of the use of machine learning in biological vision touches on its strengths, weaknesses, milestones, controversies, and current directions. Here, we hope to help vision scientists assess what role machine learning should play in their research.


2017 ◽  
Vol 1 (3) ◽  
pp. 83 ◽  
Author(s):  
Chandrasegar Thirumalai ◽  
Ravisankar Koppuravuri

In this paper, we use deep neural networks to predict bike-sharing usage from previous years' usage data. We choose deep neural networks because they can reach higher accuracy than other machine learning techniques on this task: many hidden layers can be stacked to improve predictive accuracy, and the model can be trained toward the outcome of interest. Deep learning is widely regarded as one of the most effective AI techniques currently available and has achieved remarkable results. Here we apply it to predict the bike-sharing usage of a rental company, so that sound business decisions can be made from previous years' data.
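A minimal sketch of this kind of model, using a small multi-layer network on synthetic stand-in data; the features (hour of day, temperature), the network size, and the library choice are illustrative assumptions, not the authors' setup.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# Synthetic stand-in for historical usage data: hour of day and temperature
# (hypothetical features) against ride counts.
rng = np.random.default_rng(42)
hours = rng.integers(0, 24, size=500)
temps = rng.uniform(-5, 35, size=500)
rides = 50 + 10 * np.sin(hours * np.pi / 12) + 2 * temps + rng.normal(0, 5, 500)

X = np.column_stack([hours, temps])
X_train, X_test, y_train, y_test = train_test_split(X, rides, random_state=0)

# A small multi-layer network; depth and width here are illustrative, not tuned.
model = make_pipeline(
    StandardScaler(),
    MLPRegressor(hidden_layer_sizes=(64, 32), max_iter=2000, random_state=0),
)
model.fit(X_train, y_train)
r2 = model.score(X_test, y_test)  # R^2 on held-out data
```

Adding hidden layers (more entries in `hidden_layer_sizes`) is how extra depth would be introduced, at the cost of more tuning and training time.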


2021 ◽  
Author(s):  
Anwaar Ulhaq

Machine learning has grown in popularity and effectiveness over the last decade. It has become possible to solve complex problems, especially in artificial intelligence, thanks to the effectiveness of deep neural networks. While numerous books and countless papers have been written on deep learning, new researchers want to understand the field's history and current trends and to envision future possibilities. This review paper summarises the recorded work behind these successes and discusses patterns and prospects.


2018 ◽  
Author(s):  
Gary H. Chang ◽  
David T. Felson ◽  
Shangran Qiu ◽  
Terence D. Capellini ◽  
Vijaya B. Kolachalama

ABSTRACT
Background and objective: It remains difficult to characterize pain in knee joints with osteoarthritis solely by radiographic findings. We sought to understand how advanced machine learning methods such as deep neural networks can be used to analyze raw MRI scans and predict bilateral knee pain, independent of other risk factors.
Methods: We developed a deep learning framework to associate information from MRI slices taken from the left and right knees of subjects from the Osteoarthritis Initiative with bilateral knee pain. Model training was performed by first extracting features from two-dimensional (2D) sagittal intermediate-weighted turbo spin echo slices. The extracted features from all the 2D slices were subsequently combined in a fused deep neural network to directly associate them with the output of interest, posed as a binary classification problem.
Results: The deep learning model predicted bilateral knee pain on test data with 70.1% mean accuracy, 51.3% mean sensitivity, and 81.6% mean specificity. Systematic analysis of the predictions on the test data revealed that model performance was consistent across subjects of different Kellgren-Lawrence grades.
Conclusion: The study demonstrates a proof of principle that a machine learning approach can be applied to associate MR images with bilateral knee pain.
Significance and innovation: Knee pain is typically considered an early indicator of osteoarthritis (OA) risk. Emerging evidence suggests that MRI changes are linked to pre-clinical OA, underscoring the need for image-based models to predict knee pain. We leveraged a state-of-the-art machine learning approach to associate raw MR images with bilateral knee pain, independent of other risk factors.
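The fusion step described in the Methods can be sketched in a simplified, dependency-free form. Here a linear map plus ReLU stands in for the paper's per-slice 2D feature extractor, and mean pooling stands in for the fusion network; all function names, weights, and shapes are hypothetical.

```python
import numpy as np

def extract_slice_features(slice_2d, w_feat):
    """Per-slice feature extractor; a linear map plus ReLU stands in for
    the 2D network used in the paper."""
    return np.maximum(slice_2d.ravel() @ w_feat, 0.0)

def fused_pain_probability(left_slices, right_slices, w_feat, w_head):
    """Fuse features from all slices of both knees into one binary score."""
    feats = [extract_slice_features(s, w_feat)
             for s in list(left_slices) + list(right_slices)]
    fused = np.mean(feats, axis=0)         # combine features from all 2D slices
    logit = fused @ w_head                  # linear classification head
    return 1.0 / (1.0 + np.exp(-logit))     # probability of bilateral pain
```

The key structural point is that features from every slice of both knees feed a single classifier, rather than classifying each slice independently.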


2020 ◽  
Author(s):  
Thomas R. Lane ◽  
Daniel H. Foil ◽  
Eni Minerali ◽  
Fabio Urbina ◽  
Kimberley M. Zorn ◽  
...  

Machine learning methods are attracting considerable attention from the pharmaceutical industry for use in drug discovery and applications beyond. In recent studies we have applied multiple machine learning algorithms and modeling metrics, and in some cases compared molecular descriptors, to build models for individual targets or properties on a relatively small scale. Several research groups have used large numbers of datasets from public databases such as ChEMBL in order to evaluate machine learning methods of interest to them. The largest of these types of studies used on the order of 1400 datasets. We have now extracted well over 5000 datasets from ChEMBL for use with the ECFP6 fingerprint and comparison of our proprietary software Assay Central™ with random forest, k-nearest neighbors, support vector classification, naïve Bayesian, AdaBoosted decision trees, and deep neural networks (3 levels). Model performance was assessed using an array of five-fold cross-validation metrics including area-under-the-curve, F1 score, Cohen's kappa, and Matthews correlation coefficient. Based on ranked normalized scores for the metrics or datasets, all methods appeared comparable, while the distance from the top indicated that Assay Central™ and support vector classification were comparable. Unlike prior studies, which have placed considerable emphasis on deep neural networks (deep learning), no advantage was seen in this case, where minimal tuning was performed for any of the methods. If anything, Assay Central™ may have been at a slight advantage, as the activity cutoff for each of the over 5000 datasets, representing over 570,000 unique compounds, was based on Assay Central™ performance, but support vector classification seems to be a strong competitor. We also apply Assay Central™ to prospective predictions for PXR and hERG to further validate these models.
This work currently appears to be the largest comparison of machine learning algorithms to date. Future studies will likely evaluate additional databases, descriptors and algorithms, as well as further refine methods for evaluating and comparing models.


Author(s):  
А.И. Сотников

Time series forecasting has become a very intensive area of research, with the number of studies even increasing in recent years. Deep neural networks have proven effective and achieve high accuracy in many application areas. For these reasons, they are currently among the most widely used machine learning methods for solving problems involving big data.


2020 ◽  
Vol 15 ◽  
Author(s):  
Zichao Chen ◽  
Qi Zhou ◽  
Aziz Khan Turlandi ◽  
Jordan Jill ◽  
Rixin Xiong ◽  
...  

Deep Learning (DL) is a novel type of Machine Learning (ML) model. It is showing increasing promise in medicine, in the study and treatment of diseases and injuries, assisting with data classification, novel disease symptoms, and complicated decision making. Deep learning is the form of machine learning typically implemented via multi-level neural networks. This work discusses the pros and cons of using DL in clinical cardiology, which also apply to medicine in general, while proposing certain directions as the more viable for clinical use. DL models called deep neural networks (DNNs), recurrent neural networks (RNNs), and convolutional neural networks (CNNs) have been applied to arrhythmias, the electrocardiogram, ultrasonic analysis, genomes, and endomyocardial biopsy. Convincingly, the results of the trained models are good, demonstrating the power of more expressive deep learning algorithms for clinical predictive modeling. In the future, more novel deep learning methods are expected to make a difference in the field of clinical medicine.


Acta Numerica ◽  
2021 ◽  
Vol 30 ◽  
pp. 203-248
Author(s):  
Mikhail Belkin

In the past decade the mathematical theory of machine learning has lagged far behind the triumphs of deep neural networks on practical challenges. However, the gap between theory and practice is gradually starting to close. In this paper I will attempt to assemble some pieces of the remarkable and still incomplete mathematical mosaic emerging from the efforts to understand the foundations of deep learning. The two key themes will be interpolation and its sibling over-parametrization. Interpolation corresponds to fitting data, even noisy data, exactly. Over-parametrization enables interpolation and provides flexibility to select a suitable interpolating model. As we will see, just as a physical prism separates colours mixed within a ray of light, the figurative prism of interpolation helps to disentangle generalization and optimization properties within the complex picture of modern machine learning. This article is written in the belief and hope that clearer understanding of these issues will bring us a step closer towards a general theory of deep learning and machine learning.
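The interplay of interpolation and over-parametrization can be illustrated with the minimum-norm least-squares interpolator: when there are more parameters than data points, many weight vectors fit noisy data exactly, and the pseudoinverse selects the smallest of them. This is a standard toy example, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, d = 20, 100                          # over-parametrized: more features than points
X = rng.standard_normal((n, d))
y = X[:, 0] + rng.normal(0, 0.1, n)     # noisy targets

# Among the infinitely many weight vectors that fit the training data exactly,
# the pseudoinverse picks the one with smallest Euclidean norm.
w = np.linalg.pinv(X) @ y
train_residual = np.abs(X @ w - y).max()  # essentially zero: noisy data fit exactly
```

The exact fit shows interpolation; the freedom to choose among all interpolating solutions, resolved here by taking the minimum norm, is the flexibility that over-parametrization provides.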

