Inter-database validation of a deep learning approach for automatic sleep scoring

PLoS ONE ◽  
2021 ◽  
Vol 16 (8) ◽  
pp. e0256111
Author(s):  
Diego Alvarez-Estevez ◽  
Roselyne M. Rijsman

Study objectives Development of sleep staging algorithms that generalize across databases is challenging because of increased data variability across different datasets. Sharing data between centers is also problematic because of restrictions arising from patient privacy protection. In this work, we describe a new deep learning approach for automatic sleep staging and assess its generalization capabilities on a wide range of public sleep staging databases. We also examine the suitability of a novel approach that uses an ensemble of individual local models and evaluate its impact on the resulting inter-database generalization performance. Methods A general deep learning network architecture for automatic sleep staging is presented. Different preprocessing and architectural variants are tested. The resulting prediction capabilities are evaluated and compared on a heterogeneous collection of six public sleep staging datasets. Validation is carried out in the context of independent local and external dataset generalization scenarios. Results The best results were achieved with the CNN_LSTM_5 neural network variant. Average prediction performance on independent local testing sets reached a kappa score of 0.80. When individual local models predict data from external datasets, the average kappa score decreases to 0.54. With the proposed ensemble-based approach, average kappa performance in the external dataset prediction scenario increases to 0.62. To our knowledge, this is the largest study so far, by number of datasets, to validate the generalization capabilities of an automatic sleep staging algorithm on external databases. Conclusions Validation results show good general performance of our method compared with expected levels of human agreement and with state-of-the-art automatic sleep staging methods.
The proposed ensemble-based approach enables a flexible and scalable design, allowing dynamic integration of local models into the final ensemble, preserving data locality, and at the same time increasing the generalization capabilities of the resulting system.
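The ensemble-of-local-models idea described above can be illustrated with a minimal sketch. The combination rule (averaging per-class probabilities from each local model and taking the argmax) and the stage labels are assumptions for illustration only, not the paper's exact procedure:

```python
# Sketch: an ensemble of local sleep staging models, each trained on its own
# dataset. For one 30 s epoch, each local model emits per-class probabilities;
# the ensemble averages them and picks the most likely stage. The averaging
# rule is an illustrative assumption, not the paper's documented method.

STAGES = ["W", "N1", "N2", "N3", "REM"]

def ensemble_predict(per_model_probs):
    """per_model_probs: one probability vector (len 5) per local model."""
    n_models = len(per_model_probs)
    avg = [sum(p[i] for p in per_model_probs) / n_models
           for i in range(len(STAGES))]
    return STAGES[max(range(len(STAGES)), key=avg.__getitem__)]

# Two local models disagree (N2 vs N3); averaging resolves the conflict.
probs_a = [0.05, 0.05, 0.60, 0.25, 0.05]
probs_b = [0.05, 0.05, 0.30, 0.55, 0.05]
print(ensemble_predict([probs_a, probs_b]))  # N2
```

A design like this preserves data locality: each center only shares a trained model (or its predictions), never the raw recordings.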

Author(s):  
S. Su ◽  
T. Nawata ◽  
T. Fuse

Abstract. Automatic building change detection has become a topical issue owing to its wide range of applications, such as updating building maps. However, accurate building change detection remains challenging, particularly in urban areas. Thus far, there has been limited research on the use of the outdated building map (the building map before the update, referred to herein as the old-map) to increase the accuracy of building change detection. This paper presents a novel deep-learning-based method for building change detection using bitemporal aerial images containing RGB bands, bitemporal digital surface models (DSMs), and an old-map. The aerial images have one of two spatial resolutions, 12.5 cm or 16 cm, and the cell size of the DSMs is 50 cm × 50 cm. The bitemporal aerial images, the height variations calculated as the differences between the bitemporal DSMs, and the old-map were fed into a network architecture to build an automatic building change detection model. The performance of the model was quantitatively and qualitatively evaluated for an urban area that covered approximately 10 km² and contained over 21,000 buildings. The results indicate that the model detects building changes more accurately than methods that use inputs such as i) bitemporal aerial images only, ii) bitemporal aerial images and bitemporal DSMs, and iii) bitemporal aerial images and an old-map. The proposed method achieved recall rates of 89.3%, 88.8%, and 99.5% for new, demolished, and other buildings, respectively. The results also demonstrate that the old-map is an effective data source for increasing building change detection accuracy.
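The input assembly the abstract describes (bitemporal RGB, a DSM height difference, and the old-map) can be sketched per pixel. The exact channel layout and encoding are assumptions for illustration; the paper's network may stack its inputs differently:

```python
# Sketch: building the per-pixel input feature for change detection from
# bitemporal RGB images, a bitemporal DSM height difference, and an old-map
# mask. Channel order is an illustrative assumption, not the paper's layout.

def build_input(rgb_t1, rgb_t2, dsm_t1, dsm_t2, old_map):
    """All arguments are 2-D grids (lists of rows); RGB grids hold (r, g, b)."""
    h, w = len(dsm_t1), len(dsm_t1[0])
    features = []
    for y in range(h):
        row = []
        for x in range(w):
            height_diff = dsm_t2[y][x] - dsm_t1[y][x]  # bitemporal DSM change
            row.append((*rgb_t1[y][x], *rgb_t2[y][x],
                        height_diff, old_map[y][x]))
        row and features.append(row)
    return features

# 1x1 example: a new building appears (height rises; old-map says "no building").
feat = build_input([[(10, 10, 10)]], [[(200, 190, 180)]], [[0.0]], [[8.5]], [[0]])
print(feat[0][0])  # (10, 10, 10, 200, 190, 180, 8.5, 0)
```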


Author(s):  
Yogita Hande ◽  
Akkalashmi Muddana

The rapid, widespread growth of the internet and the static nature of traditional networks limit the capacity of those networks to cope with organizational business needs. The new network architecture, software defined networking (SDN), emerged to address these challenges and provides distinctive features. However, the programmable and centralized approach of SDN faces new security challenges, which demand innovative security mechanisms such as intrusion detection systems (IDSs). Current IDSs for SDN are typically designed with machine learning approaches; however, deep learning approaches are also being explored to achieve better efficiency and accuracy. This article presents an overview of SDN, its security concerns, and IDSs as a security solution. It also surveys existing security solutions designed to secure SDN and offers a comparative study of various IDS approaches based on deep learning models and machine learning methods. Finally, we describe future directions for SDN security.


Nanoscale ◽  
2019 ◽  
Vol 11 (44) ◽  
pp. 21266-21274 ◽  
Author(s):  
Omid Hemmatyar ◽  
Sajjad Abdollahramezani ◽  
Yashar Kiarashinejad ◽  
Mohammadreza Zandehshahvar ◽  
Ali Adibi

Here, for the first time to our knowledge, a Fano resonance metasurface made of HfO2 is experimentally demonstrated to generate a wide range of colors. We use a novel deep-learning technique to design and optimize the metasurface.



SLEEP ◽  
2020 ◽  
Vol 43 (Supplement_1) ◽  
pp. A171-A171
Author(s):  
S Æ Jónsson ◽  
E Gunnlaugsson ◽  
E Finssonn ◽  
D L Loftsdóttir ◽  
G H Ólafsdóttir ◽  
...  

Abstract Introduction Sleep stage classifications are of central importance when diagnosing various sleep-related diseases. Performing a full PSG recording can be time-consuming and expensive, and often requires an overnight stay at a sleep clinic. Furthermore, the manual sleep staging process is tedious and subject to scorer variability. Here we present an end-to-end deep learning approach to robustly classify sleep stages from Self Applied Somnography (SAS) studies with frontal EEG and EOG signals. This setup allows patients to self-administer EEG and EOG leads in a home sleep study, which reduces cost and is more convenient for patients. However, self-administration of the leads increases the risk of loose electrodes, to which the algorithm must be robust. The model structure was inspired by ResNet (He, Zhang, Ren, Sun, 2015), which has been highly successful in image recognition tasks. The ResTNet comprises the characteristic residual blocks with an added temporal component. Methods The ResTNet classifies sleep stages from the raw signals using convolutional neural network (CNN) layers, residual blocks, and a gated recurrent unit (GRU), avoiding manual feature extraction. This significantly reduces sleep stage prediction time and allows the model to learn more complex relations as the size of the training data increases. The model was developed and validated on over 400 manually scored sleep studies using the novel SAS setup. In developing the model, we used data augmentation techniques to simulate loose electrodes and distorted signals, increasing model robustness to missing signals and low-quality data. Results The study shows that applying the robust ResTNet model to SAS studies gives accuracy > 0.80 and F1-score > 0.80. It outperforms our previous model, which used hand-crafted features, and achieves performance similar to that of a human scorer. Conclusion The ResTNet is fast, gives accurate predictions, and is robust to loose electrodes.
The end-to-end model furthermore promises better performance with more data. Combined with the simplicity of the SAS setup, it is an attractive option for large-scale sleep studies. Support This work was supported by the Icelandic Centre for Research RANNÍS (175256-0611).
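The loose-electrode augmentation mentioned in the Methods can be sketched as follows. The segment length, noise model, and masking strategy are illustrative assumptions; the study's actual augmentation pipeline is not specified in this abstract:

```python
# Sketch: simulate a loose electrode during training by replacing a randomly
# chosen stretch of the signal with low-amplitude noise, so the model learns
# to cope with missing or corrupted segments. Noise model and segment length
# are illustrative assumptions.
import random

def simulate_loose_electrode(signal, seg_len, noise_std=0.01, rng=None):
    """Return a copy of `signal` with one random segment replaced by noise."""
    rng = rng or random.Random()
    out = list(signal)
    start = rng.randrange(0, len(signal) - seg_len + 1)
    for i in range(start, start + seg_len):
        out[i] = rng.gauss(0.0, noise_std)  # electrode pop-off: signal lost
    return out

rng = random.Random(0)  # fixed seed for a reproducible example
clean = [1.0] * 10
augmented = simulate_loose_electrode(clean, seg_len=4, rng=rng)
print(sum(1 for x in augmented if abs(x) < 0.1))  # 4 samples were masked
```

Applied on the fly during training, this kind of augmentation exposes the network to the failure mode it must tolerate at inference time.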


2021 ◽  
Vol 4 (1) ◽  
Author(s):  
Mathias Perslev ◽  
Sune Darkner ◽  
Lykke Kempfner ◽  
Miki Nikolic ◽  
Poul Jørgen Jennum ◽  
...  

Abstract Sleep disorders affect a large portion of the global population and are strong predictors of morbidity and all-cause mortality. Sleep staging segments a period of sleep into a sequence of phases, providing the basis for most clinical decisions in sleep medicine. Manual sleep staging is difficult and time-consuming, as experts must evaluate hours of polysomnography (PSG) recordings with electroencephalography (EEG) and electrooculography (EOG) data for each patient. Here, we present U-Sleep, a publicly available, ready-to-use deep-learning-based system for automated sleep staging (sleep.ai.ku.dk). U-Sleep is a fully convolutional neural network, which was trained and evaluated on PSG recordings from 15,660 participants of 16 clinical studies. It provides accurate segmentations across a wide range of patient cohorts and PSG protocols not considered when building the system. U-Sleep works for arbitrary combinations of typical EEG and EOG channels, and its special deep learning architecture can label sleep stages at shorter intervals than the typical 30 s periods used during training. We show that these labels can provide additional diagnostic information and lead to new ways of analyzing sleep. U-Sleep performs on par with state-of-the-art automatic sleep staging systems on multiple clinical datasets, even when the other systems were built specifically for the particular data. A comparison with consensus scores from a previously unseen clinic shows that U-Sleep performs as accurately as the best of the human experts. U-Sleep can support the sleep staging workflow of medical experts, decreasing healthcare costs, and can provide highly accurate segmentations when human expertise is lacking.
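Because U-Sleep can label at a finer grain than the 30 s scoring epoch, comparing its output to manual scores requires collapsing fine-grained labels back to epochs. Majority voting per epoch is one simple way to do this; it is an illustrative assumption, not the system's documented behaviour:

```python
# Sketch: collapse high-frequency sleep-stage labels (e.g., one per second)
# back into standard 30 s scoring epochs by majority vote, so they can be
# compared with manual scores. The voting rule is an illustrative assumption.
from collections import Counter

def to_epochs(labels, labels_per_epoch):
    """labels: fine-grained stage labels; returns one majority label per epoch."""
    epochs = []
    for i in range(0, len(labels), labels_per_epoch):
        window = labels[i:i + labels_per_epoch]
        epochs.append(Counter(window).most_common(1)[0][0])
    return epochs

# 1 Hz labels (30 per epoch) collapsed to two 30 s epochs.
fine = ["N2"] * 20 + ["N3"] * 10 + ["N3"] * 30
print(to_epochs(fine, 30))  # ['N2', 'N3']
```

The fine-grained labels themselves, before collapsing, are what the abstract suggests may carry additional diagnostic information.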


Energies ◽  
2021 ◽  
Vol 14 (21) ◽  
pp. 7378
Author(s):  
Pedro M. R. Bento ◽  
Jose A. N. Pombo ◽  
Maria R. A. Calado ◽  
Silvio J. P. S. Mariano

Short-term load forecasting is critical for reliable power system operation, and the search for enhanced methodologies has been a constant field of investigation, particularly in an increasingly competitive environment where the market operator and its participants need to better inform their decisions. Hence, it is important to keep advancing forecasting accuracy and consistency. This paper presents a new deep-learning-based ensemble methodology for 24 h ahead load forecasting, in which an automatic framework selects the best Box-Jenkins models (ARIMA forecasters) from a wide range of combinations. The candidate models differ in their parameters and, more importantly, in the batches of historical (training) data they consider, thus benefiting from prediction models focused on both recent and longer load trends. These accurate predictions, capturing mainly the linear components of the load time series, are then fed to an ensemble deep feed-forward neural network. This flexible network architecture not only functions as a combiner but also receives additional historical and auxiliary data to further improve its generalization capabilities. Numerical testing on New England market data validated the proposed ensemble approach with diverse base forecasters, achieving promising results in comparison with other state-of-the-art methods.
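The two-stage idea (select the best base forecasters, then combine their predictions) can be sketched compactly. Selecting by validation MAE and combining by simple averaging are illustrative assumptions; the paper uses a deep feed-forward network as the combiner:

```python
# Sketch: rank candidate base forecasters on a validation window, keep the
# top performers, and combine their forecasts. Selection by MAE and a plain
# average are illustrative stand-ins for the paper's deep-network combiner.

def mae(pred, actual):
    """Mean absolute error between a forecast and the actual series."""
    return sum(abs(p - a) for p, a in zip(pred, actual)) / len(actual)

def select_and_combine(candidate_forecasts, validation_actual, top_k=2):
    ranked = sorted(candidate_forecasts, key=lambda f: mae(f, validation_actual))
    best = ranked[:top_k]  # keep only the strongest base forecasters
    return [sum(vals) / len(best) for vals in zip(*best)]

actual = [100.0, 110.0, 120.0]
candidates = [
    [98.0, 111.0, 119.0],   # model trained on a short, recent batch
    [102.0, 109.0, 121.0],  # model trained on a longer history
    [130.0, 140.0, 150.0],  # poor model, dropped by selection
]
print(select_and_combine(candidates, actual))  # [100.0, 110.0, 120.0]
```

Varying the training-batch length across candidates is what lets the ensemble cover both recent and longer load trends, as the abstract notes.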


Author(s):  
Nicola K. Dinsdale ◽  
Mark Jenkinson ◽  
Ana I. L. Namburete

Abstract Increasingly large MRI neuroimaging datasets are becoming available, including many highly multi-site, multi-scanner datasets. Combining the data from the different scanners is vital for increased statistical power; however, this leads to an increase in variance due to nonbiological factors such as differences in acquisition protocols and hardware, which can mask signals of interest. We propose a deep-learning-based training scheme, inspired by domain adaptation techniques, which uses an iterative update approach to create scanner-invariant features while simultaneously maintaining performance on the main task of interest, thus reducing the influence of scanner on network predictions. We demonstrate the framework for regression, classification and segmentation tasks with two different network architectures. We show that the framework can not only harmonise many-site datasets but also adapt to many data scenarios, including biased datasets and limited training labels. Finally, we show that the framework can be extended to remove other known confounds in addition to scanner. The overall framework is therefore flexible and should be applicable to a wide range of neuroimaging studies.
Highlights
- We demonstrate a flexible deep-learning-based harmonisation framework
- Applied to age prediction and segmentation tasks in a range of datasets
- Scanner information is removed, maintaining performance and improving generalisability
- The framework can be used with any feedforward network architecture
- It successfully removes additional confounds and works with varied distributions


2019 ◽  
Vol 11 (5) ◽  
pp. 523 ◽  
Author(s):  
Charlotte Pelletier ◽  
Geoffrey Webb ◽  
François Petitjean

The latest remote sensing sensors are capable of acquiring high spatial and spectral resolution Satellite Image Time Series (SITS) of the world. These image series are a key component of classification systems that aim at obtaining up-to-date and accurate land cover maps of the Earth's surfaces. More specifically, current SITS combine high temporal, spectral and spatial resolutions, which makes it possible to closely monitor vegetation dynamics. Although traditional classification algorithms, such as Random Forest (RF), have been successfully applied to create land cover maps from SITS, these algorithms do not make the most of the temporal domain. This paper proposes a comprehensive study of Temporal Convolutional Neural Networks (TempCNNs), a deep learning approach which applies convolutions in the temporal dimension in order to automatically learn temporal (and spectral) features. The goal of this paper is to quantitatively and qualitatively evaluate the contribution of TempCNNs for SITS classification, as compared to RF and Recurrent Neural Networks (RNNs), a standard deep learning approach that is particularly suited to temporal data. We carry out experiments on a Formosat-2 scene with 46 images and one million labelled time series. The experimental results show that TempCNNs are more accurate than the current state of the art for SITS classification. We provide some general guidelines on the network architecture, common regularization mechanisms, and hyper-parameter values such as batch size; we also draw out some differences with standard results in computer vision (e.g., about pooling layers). Finally, we assess the visual quality of the land cover maps produced by TempCNNs.
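The core TempCNN operation, a convolution applied along the temporal axis of a pixel's time series, can be sketched with a hand-written kernel. The kernel values and the NDVI-like series below are illustrative assumptions; in a TempCNN the kernels are learned:

```python
# Sketch: a 1-D convolution (cross-correlation, valid mode) along the
# temporal dimension of a pixel time series. In a TempCNN many such kernels
# are learned; here a fixed difference kernel illustrates the operation.

def temporal_conv1d(series, kernel):
    """Slide `kernel` over `series` and return the valid-mode responses."""
    k = len(kernel)
    return [sum(series[t + j] * kernel[j] for j in range(k))
            for t in range(len(series) - k + 1)]

# A difference kernel highlights abrupt temporal changes (e.g., a crop
# green-up or harvest) in an NDVI-like series.
ndvi = [0.0, 0.0, 1.0, 1.0, 1.0, 0.5]
print(temporal_conv1d(ndvi, [-1.0, 1.0]))  # [0.0, 1.0, 0.0, 0.0, -0.5]
```

Stacking several such layers, each with learned kernels, is what lets the network extract temporal (and spectral) features without hand-crafted indices.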


Author(s):  
Ismail El Bazi ◽  
Nabil Laachfoubi

Most Arabic Named Entity Recognition (NER) systems depend heavily on external resources and handmade feature engineering to achieve state-of-the-art results. To overcome such limitations, we propose in this paper a deep learning approach to the Arabic NER task. We introduce a neural network architecture based on a bidirectional Long Short-Term Memory (LSTM) network and Conditional Random Fields (CRF), and experiment with various commonly used hyperparameters to assess their effect on the overall performance of our system. Our model takes two sources of information about words as input, pre-trained word embeddings and character-based representations, eliminating the need for any task-specific knowledge or feature engineering. We obtain state-of-the-art results on the standard ANERcorp corpus with an F1 score of 90.6%.
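The CRF layer in such a BiLSTM-CRF architecture picks the globally best tag sequence with Viterbi decoding. The tiny tag set and hand-set scores below are illustrative assumptions; in the paper the emission scores come from the BiLSTM and the transition scores are learned:

```python
# Sketch: Viterbi decoding as used by a CRF output layer. Emission scores
# (per token, per tag) would come from a BiLSTM; transition scores would be
# learned. Both are hand-set here purely for illustration.

def viterbi(emissions, transitions, tags):
    """emissions: per-token dict tag->score; transitions: (prev, cur)->score."""
    best = {t: (emissions[0][t], [t]) for t in tags}
    for emit in emissions[1:]:
        nxt = {}
        for cur in tags:
            prev = max(tags, key=lambda p: best[p][0] + transitions[(p, cur)])
            score = best[prev][0] + transitions[(prev, cur)] + emit[cur]
            nxt[cur] = (score, best[prev][1] + [cur])
        best = nxt
    return max(best.values(), key=lambda v: v[0])[1]

TAGS = ["O", "B-PER", "I-PER"]
# Transition scores penalize I-PER unless it follows B-PER or I-PER.
trans = {(p, c): -10.0 if (c == "I-PER" and p == "O") else 0.0
         for p in TAGS for c in TAGS}
emissions = [{"O": 0.1, "B-PER": 2.0, "I-PER": 0.5},
             {"O": 0.3, "B-PER": 0.2, "I-PER": 1.5}]
print(viterbi(emissions, trans, TAGS))  # ['B-PER', 'I-PER']
```

The transition scores are what let the CRF enforce valid tag sequences (e.g., no I-PER directly after O), which per-token classification cannot guarantee.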

