Using Dictionary Pair Learning for Seizure Detection

2019 ◽  
Vol 29 (04) ◽  
pp. 1850005 ◽  
Author(s):  
Xin Ma ◽  
Nana Yu ◽  
Weidong Zhou

Automatic seizure detection is extremely important in the monitoring and diagnosis of epilepsy. This paper presents a novel method based on dictionary pair learning (DPL) for seizure detection in long-term intracranial electroencephalogram (EEG) recordings. First, wavelet filtering and differential filtering are applied to the EEG data, and a kernel function is applied to make the signals closer to linearly separable. In DPL, the synthesis dictionary and the analysis dictionary are learned jointly from the original training samples with an alternating minimization method, and sparse coefficients are obtained by linear projection instead of costly l0-norm or l1-norm optimization. Finally, the reconstructed residuals associated with the seizure and nonseizure sub-dictionary pairs are calculated as decision values, and postprocessing is performed to improve the recognition rate and reduce the false detection rate of the system. A total of 530 h of recordings from 20 patients with 81 seizures were used to evaluate the system. The proposed method achieved an average segment-based sensitivity of 93.39%, specificity of 98.51%, and event-based sensitivity of 96.36% with a false detection rate of 0.236/h.
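
For illustration, a minimal sketch of the DPL decision rule described above, assuming a synthesis dictionary and an analysis dictionary have already been learned for each class; all names and dimensions are placeholders rather than the authors' code.

```python
import numpy as np

def dpl_residual(x, D_c, P_c):
    """Reconstruction residual of sample x under one class's dictionary pair."""
    code = P_c @ x            # sparse code obtained by a simple linear projection
    return np.linalg.norm(x - D_c @ code)

def classify_epoch(x, dict_pairs):
    """Assign the epoch to the class whose dictionary pair reconstructs it best."""
    residuals = {c: dpl_residual(x, D, P) for c, (D, P) in dict_pairs.items()}
    return min(residuals, key=residuals.get)

# Toy usage with random matrices standing in for learned dictionaries.
rng = np.random.default_rng(0)
d, k = 64, 16                                    # feature dimension, atoms per class
pairs = {c: (rng.standard_normal((d, k)),        # synthesis dictionary D_c
             rng.standard_normal((k, d)))        # analysis dictionary P_c
         for c in ("seizure", "nonseizure")}
label = classify_epoch(rng.standard_normal(d), pairs)
```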

2015 ◽  
Vol 25 (02) ◽  
pp. 1550003 ◽  
Author(s):  
Shasha Yuan ◽  
Weidong Zhou ◽  
Qi Yuan ◽  
Xueli Li ◽  
Qi Wu ◽  
...  

Automatic seizure detection is of great significance in the monitoring and diagnosis of epilepsy. In this study, a novel method is proposed for automatic seizure detection in intracranial electroencephalogram (iEEG) recordings based on kernel collaborative representation (KCR). First, the EEG recordings are divided into 4 s epochs, and wavelet decomposition with five scales is performed. Detail signals at scales 3, 4 and 5 are then selected to be sparsely coded over the training sets using KCR. In KCR, l2-minimization replaces l1-minimization, the sparse coefficients are computed with regularized least squares (RLS), and a kernel function is utilized to improve the separability between seizure and nonseizure signals. The reconstructed residuals of each EEG epoch associated with the seizure and nonseizure training samples are compared, and each epoch is assigned to the class that minimizes the reconstructed residual. Finally, a multi-decision rule is applied to obtain the final detection decision. In total, 595 h of iEEG recordings from 21 patients with 87 seizures are employed to evaluate the system. An average sensitivity of 94.41%, specificity of 96.97%, and false detection rate of 0.26/h are achieved. The seizure detection system based on KCR yields both a high sensitivity and a low false detection rate for long-term EEG.
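
A hedged sketch of the KCR classification step described above, using an RBF kernel and a regularized least-squares code computed collaboratively over all training samples; the kernel parameter and regularization weight are illustrative, not the paper's values.

```python
import numpy as np
from sklearn.metrics.pairwise import rbf_kernel

def kcr_classify(x, X_train, y_train, gamma=0.1, lam=1e-2):
    K = rbf_kernel(X_train, X_train, gamma=gamma)                     # Gram matrix
    k_x = rbf_kernel(X_train, x.reshape(1, -1), gamma=gamma).ravel()  # kernel vector of x
    k_xx = rbf_kernel(x.reshape(1, -1), x.reshape(1, -1), gamma=gamma)[0, 0]

    # l2-regularized least-squares code over *all* training samples (collaborative)
    alpha = np.linalg.solve(K + lam * np.eye(len(X_train)), k_x)

    best_class, best_res = None, np.inf
    for c in np.unique(y_train):
        idx = np.where(y_train == c)[0]
        a_c = alpha[idx]
        # squared reconstruction residual of x in the kernel feature space,
        # using only the coefficients belonging to class c
        res = k_xx - 2 * a_c @ k_x[idx] + a_c @ K[np.ix_(idx, idx)] @ a_c
        if res < best_res:
            best_class, best_res = c, res
    return best_class
```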


2020 ◽  
Vol 30 (04) ◽  
pp. 2050019 ◽  
Author(s):  
Yang Li ◽  
Zuyi Yu ◽  
Yang Chen ◽  
Chunfeng Yang ◽  
Yue Li ◽  
...  

An automatic seizure detection system can effectively help doctors monitor and diagnose epilepsy, thereby reducing their workload. Many outstanding studies have reported good results on two-class seizure detection problems, but most of them rely on hand-crafted feature extraction. This study proposes an end-to-end automatic seizure detection system based on deep learning, which requires neither heavy preprocessing of the EEG data nor feature engineering. A fully convolutional network with three convolution blocks is first used to learn expressive seizure characteristics from the EEG data. These robust, seizure-relevant EEG features are then fed into a Nested Long Short-Term Memory (NLSTM) model to explore the inherent temporal dependencies in EEG signals. Lastly, the high-level features obtained from the NLSTM model are passed to a softmax layer to output the predicted labels. The proposed method yields an accuracy range of 98.44–100% in 10 different experiments on the Bonn University database. A larger EEG database is then used to evaluate the performance of the proposed method in real-life situations, yielding an average sensitivity of 97.47%, specificity of 96.17%, and false detection rate of 0.487 per hour. On the CHB-MIT scalp EEG database, the proposed model also achieves a segment-level sensitivity of 94.07% with a false detection rate of 0.66 per hour. The excellent results obtained on three different EEG databases demonstrate that the proposed method has good robustness and generalization power under both ideal and real-life conditions.
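
The overall pipeline shape (convolution blocks feeding a recurrent layer and a softmax output) can be sketched as below; a standard LSTM stands in for the Nested LSTM, which is not available in torch.nn, and all layer sizes and the channel count are illustrative.

```python
import torch
import torch.nn as nn

class SeizureNet(nn.Module):
    def __init__(self, in_channels=18, hidden=64, n_classes=2):
        super().__init__()
        def block(c_in, c_out):
            return nn.Sequential(
                nn.Conv1d(c_in, c_out, kernel_size=5, padding=2),
                nn.BatchNorm1d(c_out),
                nn.ReLU(),
                nn.MaxPool1d(2),
            )
        self.features = nn.Sequential(block(in_channels, 32),
                                      block(32, 64),
                                      block(64, 64))
        # stand-in for the Nested LSTM used in the paper
        self.rnn = nn.LSTM(input_size=64, hidden_size=hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)

    def forward(self, x):                 # x: (batch, channels, time)
        f = self.features(x)              # (batch, 64, time / 8)
        f = f.transpose(1, 2)             # (batch, time / 8, 64) for the LSTM
        out, _ = self.rnn(f)
        logits = self.head(out[:, -1])    # last time step -> class scores
        return torch.softmax(logits, dim=1)

scores = SeizureNet()(torch.randn(4, 18, 1024))   # toy batch of EEG segments
```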


2019 ◽  
Vol 29 (10) ◽  
pp. 1950021 ◽  
Author(s):  
Chengfa Sun ◽  
Hui Cui ◽  
Weidong Zhou ◽  
Weiwei Nie ◽  
Xiuying Wang ◽  
...  

Imbalanced data classification is a challenging task in automatic seizure detection from electroencephalogram (EEG) recordings, since the durations of non-seizure periods are much longer than those of seizure activities. An imbalanced learning model is proposed in this paper to improve the identification of seizure events in long-term EEG signals. To better represent the underlying microstructure distributions of the EEG signals while preserving their non-stationary nature, a discrete wavelet transform (DWT) and uniform 1D-LBP feature extraction procedure is introduced. A learning framework is then designed as an ensemble of weakly trained support vector machines (SVMs). Under-sampling is employed to split the imbalanced seizure and non-seizure samples into multiple balanced subsets, each of which is used to train an individual SVM classifier. The weak SVMs are combined into a strong classifier that emphasizes seizure samples while accounting for the imbalanced class distribution of the EEG data. Final seizure detection results are obtained in a multi-level decision fusion process that considers temporal and frequency factors. The model was validated on two long-term and one short-term public EEG databases. It achieved a G-mean of 97.14% with respect to epoch-level assessment, an event-level sensitivity of 96.67%, and a false detection rate of 0.86/h on the long-term intracranial database. An epoch-level G-mean of 95.28% and an event-level false detection rate of 0.81/h were yielded on the long-term scalp database. Comparisons with 14 published methods demonstrate the improved detection performance for imbalanced EEG signals and the generalizability of the proposed model.
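
A small sketch of the under-sampling ensemble idea described above, assuming binary labels with 1 for seizure epochs and a majority non-seizure class; the number of subsets, the SVM settings, and the voting threshold are placeholders rather than the paper's configuration.

```python
import numpy as np
from sklearn.svm import SVC

def train_ensemble(X, y, n_subsets=10, seed=0):
    """Train one weak SVM per balanced under-sampled subset."""
    rng = np.random.default_rng(seed)
    seiz, nonseiz = np.where(y == 1)[0], np.where(y == 0)[0]   # assumes len(nonseiz) >= len(seiz)
    models = []
    for _ in range(n_subsets):
        sub = rng.choice(nonseiz, size=len(seiz), replace=False)   # balance the classes
        idx = np.concatenate([seiz, sub])
        clf = SVC(kernel="rbf", probability=True)
        clf.fit(X[idx], y[idx])
        models.append(clf)
    return models

def predict_ensemble(models, X, threshold=0.5):
    """Average the seizure probabilities of the weak SVMs and threshold the result."""
    p = np.mean([m.predict_proba(X)[:, 1] for m in models], axis=0)
    return (p >= threshold).astype(int)
```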


2016 ◽  
Vol 26 (01) ◽  
pp. 1550035 ◽  
Author(s):  
Junhui Li ◽  
Weidong Zhou ◽  
Shasha Yuan ◽  
Yanli Zhang ◽  
Chengcheng Li ◽  
...  

Automatic seizure detection has played an important role in the monitoring, diagnosis and treatment of epilepsy. In this paper, a patient-specific method is proposed for seizure detection in long-term intracranial electroencephalogram (EEG) recordings. The method is based on sparse representation with online dictionary learning and an elastic net constraint. The online learned dictionary can represent the testing samples sparsely and more accurately, and the elastic net constraint, which combines the l1-norm and l2-norm, not only makes the coefficients sparse but also avoids the over-fitting problem. First, the EEG signals are preprocessed using wavelet filtering and differential filtering, and a kernel function is applied to make the samples closer to linearly separable. Then the seizure and nonseizure dictionaries are learned from the original ictal and interictal training samples, respectively, with an online dictionary optimization algorithm, and combined to compose the training dictionary. After that, the test samples are sparsely coded over the learned dictionary, and the residuals associated with the ictal and interictal sub-dictionaries are calculated, respectively. Finally, the test samples are classified into one of two categories, seizure or nonseizure, by comparing the reconstructed residuals. An average segment-based sensitivity of 95.45%, specificity of 99.08%, and event-based sensitivity of 94.44% with a false detection rate of 0.23/h and an average latency of -5.14 s have been achieved with the proposed method.
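
A loose sketch of the residual-comparison rule described above, with scikit-learn's online (mini-batch) dictionary learning and elastic-net coding as stand-ins; all hyperparameters are placeholders, not the paper's values.

```python
import numpy as np
from sklearn.decomposition import MiniBatchDictionaryLearning
from sklearn.linear_model import ElasticNet

def learn_class_dictionaries(X_ictal, X_interictal, n_atoms=32, seed=0):
    """Learn one dictionary per class with online (mini-batch) dictionary learning."""
    def learn(X):
        return MiniBatchDictionaryLearning(n_components=n_atoms, alpha=1.0,
                                           random_state=seed).fit(X).components_
    return learn(X_ictal), learn(X_interictal)       # atoms are stored as rows

def classify(x, D_ictal, D_inter, l1_ratio=0.5, alpha=0.01):
    D = np.vstack([D_ictal, D_inter])                 # concatenated training dictionary
    enet = ElasticNet(alpha=alpha, l1_ratio=l1_ratio, fit_intercept=False)
    coef = enet.fit(D.T, x).coef_                     # sparse code of x (l1 + l2 penalty)
    c_ict, c_int = coef[:len(D_ictal)], coef[len(D_ictal):]
    r_ict = np.linalg.norm(x - D_ictal.T @ c_ict)     # per-class reconstruction residuals
    r_int = np.linalg.norm(x - D_inter.T @ c_int)
    return "seizure" if r_ict < r_int else "nonseizure"
```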


2019 ◽  
Vol 22 (13) ◽  
pp. 2907-2921 ◽  
Author(s):  
Xinwen Gao ◽  
Ming Jian ◽  
Min Hu ◽  
Mohan Tanniru ◽  
Shuaiqing Li

With the large-scale construction of urban subways, the detection of tunnel defects becomes particularly important. Owing to the complexity of the tunnel environment, it is difficult for traditional tunnel defect detection algorithms to detect such defects quickly and accurately. This article presents a deep learning FCN-RCNN model that can detect multiple tunnel defects quickly and accurately. The algorithm combines a Faster RCNN algorithm, an Adaptive Border ROI boundary layer and a three-layer FCN structure. The Adaptive Border ROI boundary layer is used to reduce data set redundancy and the difficulty of identifying interference during data set creation. The algorithm is compared with a single FCN algorithm without the Adaptive Border ROI for different defect types. The results show that our defect detection algorithm not only addresses interference due to segment patching, pipeline smears and obstruction, but also reduces the false detection rates from 0.371, 0.285 and 0.307 to 0.0502, respectively. Finally, after correction with a cylindrical projection model, the false detection rate is further reduced from 0.0502 to 0.0190 and the identification accuracy of water leakage defects is improved.
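
Purely for orientation, an off-the-shelf Faster R-CNN from torchvision can stand in for the detection stage of such a pipeline; the paper's Adaptive Border ROI layer and FCN branch are not reproduced here, and the class count is a hypothetical example.

```python
import torch
from torchvision.models.detection import fasterrcnn_resnet50_fpn

# Randomly initialized model (no downloads); e.g. 3 hypothetical defect types + background.
model = fasterrcnn_resnet50_fpn(weights=None, weights_backbone=None, num_classes=4)
model.eval()
with torch.no_grad():
    # dummy tunnel-lining image; a real input would be a normalized RGB tensor
    predictions = model([torch.rand(3, 512, 512)])
print(predictions[0]["boxes"].shape, predictions[0]["labels"], predictions[0]["scores"])
```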


2018 ◽  
Vol 2018 ◽  
pp. 1-11 ◽  
Author(s):  
Xun Li ◽  
Yao Liu ◽  
Zhengfan Zhao ◽  
Yue Zhang ◽  
Li He

Vehicle detection is expected to be robust and efficient in various scenes. We propose a multi-vehicle detection method based on YOLO under the Darknet framework, and improve the YOLO-voc structure according to the change of the target scene and traffic flow. The classification training model is obtained based on ImageNet, and the parameters are fine-tuned according to the training results and the vehicle characteristics. Finally, we obtain an effective YOLO-vocRV network for road vehicle detection. To verify the performance of our method, experiments are carried out on different vehicle flow states and compared with the classical YOLO-voc, YOLO 9000, and YOLO v3. The experimental results show that our method achieves a detection rate of 98.6% in the free flow state, 97.8% in the synchronous flow state, and 96.3% in the blocking flow state. In addition, our proposed method has a lower false detection rate than previous works and shows good robustness.


Author(s):  
Yuqing Zhao ◽  
Jinlu Jia ◽  
Di Liu ◽  
Yurong Qian

Aerial image-based target detection suffers from low accuracy in multiscale target detection, slow detection speed, missed targets and falsely detected targets. To address these problems, this paper proposes a detection algorithm based on an improved You Only Look Once (YOLO)v3 network architecture, designed for model efficiency, and applies it to multiscale image-based target detection. First, the K-means clustering algorithm is used to cluster an aerial dataset and optimize the anchor box parameters of the network to improve the effectiveness of target detection. Second, the feature extraction method of the algorithm is improved, and a feature fusion method is used to establish a multiscale (large-, medium- and small-scale) prediction layer, which mitigates the loss of small-target information in deep networks and improves the detection accuracy of the algorithm. Finally, label regularization is applied to the predicted values, the generalized intersection over union (GIoU) is used as the bounding box regression loss function, and the focal loss function is integrated into the bounding box confidence loss function, which not only improves the target detection accuracy but also effectively reduces the false detection rate and missed target rate of the algorithm. An experimental comparison on the RSOD and NWPU VHR-10 aerial datasets shows that the detection performance of high-efficiency YOLO (HE-YOLO) is significantly improved compared with that of YOLOv3, with average detection accuracies increased by 8.92% and 7.79% on the two datasets, respectively. The algorithm not only shows better detection performance for multiscale targets but also reduces the missed target rate and false detection rate, and has good robustness and generalizability.
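
A generic sketch of anchor clustering with K-means under a 1 - IoU distance, in the spirit of the first step described above; the box data and cluster count are illustrative, not the paper's.

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (width, height) boxes and anchors, both centered at the origin."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = (boxes[:, 0] * boxes[:, 1])[:, None] + \
            (anchors[:, 0] * anchors[:, 1])[None, :] - inter
    return inter / union

def kmeans_anchors(boxes, k=9, iters=100, seed=0):
    """K-means on box sizes where 'nearest' means highest IoU (distance = 1 - IoU)."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)]
    for _ in range(iters):
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        new = np.array([boxes[assign == i].mean(axis=0) if np.any(assign == i)
                        else anchors[i] for i in range(k)])
        if np.allclose(new, anchors):
            break
        anchors = new
    return anchors

# toy usage: 500 random (w, h) pairs standing in for ground-truth box sizes
print(kmeans_anchors(np.abs(np.random.default_rng(1).normal(50, 20, (500, 2))), k=6))
```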


2014 ◽  
Vol 971-973 ◽  
pp. 1449-1453
Author(s):  
Zuo Wei Huang ◽  
Shu Guang Wu ◽  
Tao Xin Zhang

Hyperspectral remote sensing is a multi-dimensional information acquisition technology that combines target detection and spectral imaging. To suit the characteristics of hyperspectral imagery, this paper develops an optimized ICA algorithm for change detection that describes the statistical distribution of the data. By processing the resulting abundance maps, changes in different classes of objects can be obtained. The approach is capable of self-adaptation and can be applied to hyperspectral images with different characteristics. Experimental results demonstrate that the ICA-based hyperspectral change detection performs better than other traditional methods, with a high detection rate and a low false detection rate.
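
A rough sketch, under loose assumptions, of how an ICA-based change map could be formed from two co-registered hyperspectral cubes; the component count and threshold are illustrative, and the procedure is a generic stand-in rather than the optimized algorithm of the paper.

```python
import numpy as np
from sklearn.decomposition import FastICA

def ica_change_map(img_t1, img_t2, n_components=5, thresh=3.0):
    # img_t1, img_t2: (rows, cols, bands) hyperspectral cubes of the same scene
    rows, cols, bands = img_t1.shape
    diff = (img_t2 - img_t1).reshape(-1, bands)              # pixels x bands
    sources = FastICA(n_components=n_components,
                      random_state=0).fit_transform(diff)    # pixels x components
    # standardize each component and flag pixels that deviate strongly in any of them
    z = (sources - sources.mean(axis=0)) / sources.std(axis=0)
    change = (np.abs(z) > thresh).any(axis=1)
    return change.reshape(rows, cols)
```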


2020 ◽  
Vol 635 ◽  
pp. A194 ◽  
Author(s):  
David Mary ◽  
Roland Bacon ◽  
Simon Conseil ◽  
Laure Piqueras ◽  
Antony Schutz

Context. One of the major science cases of the Multi Unit Spectroscopic Explorer (MUSE) integral field spectrograph is the detection of Lyman-alpha emitters at high redshifts. The on-going and planned deep field observations will allow for a large sample of these sources. An efficient tool to perform blind detection of faint emitters in MUSE datacubes is a prerequisite of such an endeavor. Aims. Several line detection algorithms exist, but their performance on the deepest MUSE exposures is hard to quantify, in particular with respect to their actual false detection rate, or purity. The aim of this work is to design and validate an algorithm that efficiently detects faint spatial-spectral emission signatures, while allowing for a stable false detection rate over the data cube and providing at the same time an automated and reliable estimation of the purity. Methods. The algorithm implements (i) a nuisance removal part based on a continuum subtraction combining a discrete cosine transform and an iterative principal component analysis, (ii) a detection part based on the local maxima of generalized likelihood ratio test statistics obtained for a set of spatial-spectral profiles of emission line emitters, and (iii) a purity estimation part, where the proportion of true emission lines is estimated from the data itself: the distribution of the local maxima in the “noise only” configuration is estimated from that of the local minima. Results. Results on simulated data cubes providing ground truth show that the method reaches its aims in terms of purity and completeness. When applied to the deep 30 h exposure MUSE datacube of the Hubble Ultra Deep Field, the algorithm allows for the confirmed detection of 133 intermediate-redshift galaxies and 248 Lyα emitters, including 86 sources with no Hubble Space Telescope counterpart. Conclusions. The algorithm fulfills its aims in terms of detection power and reliability. It is consequently implemented as a Python package whose code and documentation are available on GitHub and readthedocs.
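
The purity estimation idea in step (iii) can be sketched as follows, assuming the values of the local maxima and local minima of the test statistic have already been extracted; this is a simplified stand-in for the full procedure.

```python
import numpy as np

def estimated_purity(maxima_values, minima_values, threshold):
    """Estimate purity at a threshold: local minima act as a proxy for the
    noise-only distribution of the local maxima."""
    n_det = np.count_nonzero(maxima_values > threshold)      # candidate detections
    n_false = np.count_nonzero(-minima_values > threshold)   # noise-only proxy counts
    return 1.0 - n_false / n_det if n_det > 0 else 1.0

def threshold_for_purity(maxima_values, minima_values, target=0.9):
    """Smallest candidate threshold reaching the target purity."""
    for t in np.sort(maxima_values):
        if estimated_purity(maxima_values, minima_values, t) >= target:
            return t
    return np.inf
```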


2002 ◽  
Vol 56 (8) ◽  
pp. 1082-1093 ◽  
Author(s):  
Lin Zhang ◽  
Gary W. Small

Pattern recognition methods are developed for the automated interpretation of passive multispectral imaging data collected from an airborne platform. Through the use of an infrared line scanner equipped with 14 spectral bandpass filters, passive infrared images are collected of an ammonia plant within a nitrogen fertilizer facility. Piecewise linear discriminant analysis is used to implement an automated algorithm for the detection of scene pixels that correspond to chemical vapor signatures. A separate classifier is used to detect the presence of hot carbon dioxide (CO2) within the images. In the assembly of training and prediction data for the development of both classifiers, the K-means clustering algorithm is used together with knowledge of the site to assign pixels to the plume/nonplume and CO2/non-CO2 categories. The effects of temperature variation within the imaged scene are removed from the data through the use of an algorithm for separating the contributions of temperature and emissivity to the Planck equation. Averaged across four data runs containing a total of 3.5 million pixels, the resulting discriminants are observed to detect approximately 91% of the plume pixels while achieving a false detection rate of less than 0.01%. The corresponding performance criteria for the CO2 classifier are a successful detection of approximately 94% of the pixels with a CO2 signature and a false detection rate of less than 0.7%. The robustness of the CO2 classifier is further enhanced through the adoption of a probability-based classification rule.
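
A loose sketch of the training-set assembly and classification workflow described above, with ordinary K-means and linear discriminant analysis from scikit-learn standing in for the site-knowledge-guided labeling and the piecewise linear discriminant of the paper; the cluster-to-class mapping is a hypothetical placeholder.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

def build_classifier(pixel_spectra, plume_clusters, n_clusters=8, seed=0):
    # pixel_spectra: (n_pixels, n_bands) multispectral measurements
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(pixel_spectra)
    # map clusters to plume / non-plume labels (here a placeholder mapping;
    # the paper uses knowledge of the site); assumes both classes are present
    labels = np.isin(km.labels_, plume_clusters).astype(int)
    return LinearDiscriminantAnalysis().fit(pixel_spectra, labels)

# usage: clf = build_classifier(spectra, plume_clusters=[2, 5]); clf.predict(new_spectra)
```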

