Automated detection of artefacts in neonatal EEG with residual neural networks

2021
Author(s): Lachlan Webb, Minna Kauppila, James A Roberts, Sampsa Vanhatalo, Nathan Stevenson

Background and Objective: To develop a computational algorithm that detects and identifies different artefact types in neonatal electroencephalography (EEG) signals. Methods: As part of a larger algorithm, we trained a residual deep neural network on expert human annotations of EEG recordings from 79 term infants recorded in a neonatal intensive care unit (112 h of 18-channel recording). The network was trained using 10-fold cross-validation in Matlab. Artefact types included: device interference, EMG, movement, electrode pop, and non-cortical biological rhythms. Performance was assessed by prediction statistics and further validated on a separate, independent dataset of 13 term infants (143 h of 3-channel recording). EEG pre-processing steps, and post-processing steps such as averaging probability over a temporal window, were also included in the algorithm. Results: The residual deep neural network showed high accuracy (95%) when distinguishing periods of clean, artefact-free EEG from any kind of artefact, with a median accuracy for individual patients of 91% (IQR: 81%-96%). Accuracy in identifying the five different artefact types ranged from 57% to 92%, with electrode pop the hardest to detect and EMG the easiest; this reflected the proportion of each artefact available in the training dataset. Misclassification as clean was low for each artefact type, ranging from 1% to 11%. Detection accuracy was lower on the validation set (87%). We used the algorithm to show that EEG channels located near the vertex were the least susceptible to artefact. Conclusion: Artefacts can be accurately and reliably identified in the neonatal EEG using a deep learning algorithm. Artefact detection algorithms can provide continuous bedside quality assessment and support EEG review by clinicians or analysis algorithms.
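The post-processing step of averaging probability over a temporal window can be sketched as a simple moving average over per-epoch artefact probabilities. This is a minimal illustration; the window length and the 0.5 decision threshold are assumptions, not the authors' exact parameters:

```python
def smooth_probabilities(probs, window=5):
    """Average per-epoch artefact probabilities over a sliding
    temporal window, as a post-processing step after the network."""
    half = window // 2
    smoothed = []
    for i in range(len(probs)):
        lo = max(0, i - half)
        hi = min(len(probs), i + half + 1)
        smoothed.append(sum(probs[lo:hi]) / (hi - lo))
    return smoothed

def classify(probs, threshold=0.5, window=5):
    """Flag an epoch as artefact if its smoothed probability exceeds
    the decision threshold (0.5 here, an assumed value)."""
    return [p > threshold for p in smooth_probabilities(probs, window)]
```

Smoothing of this kind suppresses isolated single-epoch spikes in the network output, so brief prediction noise is less likely to be flagged as artefact.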

2020
Vol 17 (8), pp. 3328-3332
Author(s): S. Gowri, U. Srija, P. A. Shirley Divya, J. Jabez, J. S. Vimali

Classifying and predicting mangrove species is one of the most important applications in our ecosystem. Mangroves are among the most endangered species and play a major role in our ecosystem, chiefly by preventing calamities such as soil erosion, tsunamis, storms, and wind turbulence. Mangroves must be afforested and conserved in order to maintain a healthy ecosystem, and the first step towards this is to study them. To classify mangroves in their habitat, we use a deep neural network algorithm.


2014
Vol 641-642, pp. 1287-1290
Author(s): Lan Zhang, Yu Feng Nie, Zhen Hai Wang

A deep neural network, as part of a deep learning algorithm, is a state-of-the-art approach for finding higher-level representations of input data, and it has been applied successfully to many practical and challenging learning problems. The primary goal of deep learning is to use large amounts of data to help solve a given machine learning task. We propose a methodology for image de-noising defined by this model and train it on a large image database to obtain the experimental output. The results show the robustness and efficiency of our algorithm.


Agriculture
2021
Vol 11 (12), pp. 1265
Author(s): Mohd Najib Ahmad, Abdul Rashid Mohamed Shariff, Ishak Aris, Izhal Abdul Halin

The bagworm is a vicious leaf-eating insect pest that threatens oil palm plantations in Malaysia. The economic impact of defoliation of approximately 10% to 13% due to bagworm attack may cause about 33% to 40% yield loss over 2 years. Monitoring and detecting bagworm populations in oil palm plantations is therefore required as a preliminary step to ensure proper planning of control actions in these areas. Hence, an image processing algorithm for the detection and counting of Metisa plana Walker, a species of Malaysia's local bagworm, using image segmentation has been developed. The color and shape features from the segmented images for real-time object detection showed average detection accuracies of 40% and 34% at camera distances of 30 cm and 50 cm, respectively. After improvements to the training dataset and the marking of detected bagworms with bounding boxes, a deep learning algorithm based on the Faster Regional Convolutional Neural Network (Faster R-CNN) was applied, raising the detection accuracy to 100% at a camera distance of 30 cm in close conditions. The proposed solution is also designed to distinguish between living and dead larvae of the bagworms using motion detection, which achieved approximately 73-100% accuracy at a camera distance of 30 cm in close conditions. Through false color analysis, distinct differences in pixel count based on slope were observed for dead and live pupae at 630 nm and 940 nm, with slopes of 0.38 and 0.28, respectively. The higher pixel count and slope correlated with dead pupae, while the lower pixel count and slope represented living pupae.
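The slope-based separation of dead and live pupae can be sketched as follows, using the reported slopes (0.38 for dead, 0.28 for live) over the 630-940 nm wavelength pair. The midpoint threshold of 0.33 and the function names are illustrative assumptions, not the authors' implementation:

```python
def pixel_count_slope(count_630, count_940):
    """Slope of pixel count across the two wavelengths (630 nm, 940 nm)."""
    return (count_940 - count_630) / (940 - 630)

def classify_pupa(count_630, count_940, threshold=0.33):
    """Higher slope correlates with dead pupae (reported slope 0.38),
    lower slope with living pupae (0.28). The 0.33 midpoint threshold
    is an assumed decision boundary for illustration."""
    return "dead" if pixel_count_slope(count_630, count_940) > threshold else "live"
```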


2019
Vol 28 (12), pp. 1950153
Author(s): Jing Tan, Chong-Bin Chen

We use a deep learning algorithm to learn the Reissner–Nordström (RN) black hole metric by building a deep neural network. A large set of data points is specified on the AdS boundary and propagated to the black hole horizon through the AdS metric and the equation of motion (e.o.m.). We label these data according to the values near the horizon, and together with the initial data they constitute a data set. We then construct a corresponding deep neural network and train it on this data set to obtain the RN black hole metric. Finally, we discuss the effects of learning rate, batch size and initialization on the training process.


Author(s): Weston Upchurch, Alex Deakyne, David A. Ramirez, Paul A. Iaizzo

Abstract: Acute compartment syndrome is a serious condition that requires urgent surgical treatment. While the current emergency treatment is straightforward (relieving intra-compartmental pressure via fasciotomy), the diagnosis is often a difficult one. A deep neural network is presented here that has been trained to detect whether isolated muscle bundles were exposed to hypoxic conditions and became ischemic.


Circulation
2020
Vol 142 (Suppl_4)
Author(s): Shirin Hajeb Mohammadalipour, Alicia Cascella, Matt Valentine, K.H. Chon

The ability of an automatic external defibrillator (AED) to make a reliable shock decision during cardiopulmonary resuscitation (CPR) would improve the survival rate of patients with out-of-hospital cardiac arrest. Since chest compressions induce motion artifacts in the electrocardiogram (ECG), current AEDs instruct the user to stop CPR while an automated rhythm analysis is performed, yet it has been shown that minimizing interruptions in CPR increases the chance of survival. While deep learning approaches have been used successfully for arrhythmia classification, their performance has not been evaluated for creating an AED shock advisory system that can coexist with CPR. The objective of this study was therefore to apply a deep learning algorithm using convolutional layers and residual networks to classify shockable versus non-shockable rhythms in the presence and absence of CPR artifact using only the ECG data. The feasibility of the deep learning method was validated using 8-s segments of ECG with and without CPR. Two separate databases were used: 1) 40 subjects' data without CPR from Physionet, with 1131 shockable and 2741 non-shockable classified recordings, and 2) CPR artifacts acquired from a commercial AED during asystole delivered by 43 different resuscitators. For each 8-second ECG segment, randomly chosen CPR data from the 43 types were added so that 5 non-shockable and 10 shockable CPR-contaminated ECG segments were created. For database 1, we used 30 subjects' data for training and the remaining 10 for testing; for database 2, we used 33 and 10 subjects' data for training and testing, respectively. Using our deep neural network model with four-fold cross-validation, the sensitivity and specificity of the shock versus no-shock decision across both datasets were 95.21% and 86.03%, respectively.
For shockable versus non-shockable classification of ECG without CPR artifact, the sensitivity was 99.04% and the specificity was 95.2%; with CPR artifact, the sensitivity was 94.21% and the specificity was 86.14%. These results meet the AHA sensitivity requirement (>90%).
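The augmentation step that creates CPR-contaminated segments can be sketched as additive mixing of a randomly chosen CPR artifact record into an 8-s ECG segment. Additive mixing, the function names, and the seeded RNG are assumptions for illustration, not the study's exact scheme:

```python
import random

def contaminate(ecg_segment, cpr_artifacts, n_copies, rng=random.Random(0)):
    """Create n_copies CPR-contaminated versions of one ECG segment by
    adding a randomly chosen CPR artifact record sample-by-sample.
    Both inputs are equal-length sample lists at the same sampling rate."""
    out = []
    for _ in range(n_copies):
        cpr = rng.choice(cpr_artifacts)
        out.append([e + a for e, a in zip(ecg_segment, cpr)])
    return out
```

In the study's terms, each shockable segment would be expanded into 10 contaminated copies and each non-shockable segment into 5, drawing from the 43 resuscitator artifact records.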


2021
Vol 13 (9), pp. 1779
Author(s): Xiaoyan Yin, Zhiqun Hu, Jiafeng Zheng, Boyong Li, Yuanyuan Zuo

Radar beam blockage is an important error source that affects the quality of weather radar data. An echo-filling network (EFnet) based on a deep learning algorithm is proposed to correct the echo intensity in occluded areas of the Nanjing S-band new-generation weather radar (CINRAD/SA). The training dataset is constructed from labels, which are the echo intensities at the 0.5° elevation in the unblocked area, and from input features, which are the intensities in the cube comprising multiple elevations and gates corresponding to the location of the bottom labels. Two loss functions are used to compile the network: one is the common mean square error (MSE), and the other is a self-defined loss function that increases the weight of strong echoes. Considering that the radar beam broadens with distance and height, the 0.5° elevation scan is divided into six range bands of 25 km each to train different models. The models are evaluated by three indicators: explained variance (EVar), mean absolute error (MAE), and correlation coefficient (CC). Two cases are demonstrated to compare the effect of the echo-filling model under the different loss functions. The results suggest that EFnet can effectively correct the echo reflectivity and improve data quality in occluded areas, with better results for strong echoes when the self-defined loss function is used.
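A loss that increases the weight of strong echoes can be sketched as a weighted MSE. The weighting rule below (extra weight on samples whose label reflectivity exceeds a threshold) and the 35 dBZ / 3x values are assumptions for illustration, not the paper's exact self-defined loss:

```python
def weighted_mse(y_true, y_pred, strong_dbz=35.0, strong_weight=3.0):
    """MSE with extra weight on strong echoes: samples whose label
    reflectivity is at least strong_dbz contribute strong_weight times
    as much to the loss. Threshold and weight are assumed values."""
    total, wsum = 0.0, 0.0
    for t, p in zip(y_true, y_pred):
        w = strong_weight if t >= strong_dbz else 1.0
        total += w * (t - p) ** 2
        wsum += w
    return total / wsum
```

Compared with plain MSE, errors on strong-echo gates dominate the gradient, which pushes the network to fit intense precipitation more closely.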


Cancers
2021
Vol 13 (4), pp. 652
Author(s): Carlo Augusto Mallio, Andrea Napolitano, Gennaro Castiello, Francesco Maria Giordano, Pasquale D'Alessio, ...

Background: Coronavirus disease 2019 (COVID-19) pneumonia and immune checkpoint inhibitor (ICI) therapy-related pneumonitis share common features. The aim of this study was to determine on chest computed tomography (CT) images whether a deep convolutional neural network algorithm is able to solve the challenge of differential diagnosis between COVID-19 pneumonia and ICI therapy-related pneumonitis. Methods: We enrolled three groups: a pneumonia-free group (n = 30), a COVID-19 group (n = 34), and a group of patients with ICI therapy-related pneumonitis (n = 21). Computed tomography images were analyzed with an artificial intelligence (AI) algorithm based on a deep convolutional neural network structure. Statistical analysis included the Mann–Whitney U test (significance threshold at p < 0.05) and the receiver operating characteristic curve (ROC curve). Results: The algorithm showed low specificity in distinguishing COVID-19 from ICI therapy-related pneumonitis (sensitivity 97.1%, specificity 14.3%, area under the curve (AUC) = 0.62). ICI therapy-related pneumonitis was identified by the AI when compared to pneumonia-free controls (sensitivity = 85.7%, specificity 100%, AUC = 0.97). Conclusions: The deep learning algorithm is not able to distinguish between COVID-19 pneumonia and ICI therapy-related pneumonitis. Awareness must be increased among clinicians about imaging similarities between COVID-19 and ICI therapy-related pneumonitis. ICI therapy-related pneumonitis can be applied as a challenge population for cross-validation to test the robustness of AI models used to analyze interstitial pneumonias of variable etiology.
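The reported sensitivity and specificity follow the standard definitions over a binary confusion matrix; a minimal sketch (treating COVID-19 as the positive class, an assumption about the study's labeling convention):

```python
def sensitivity_specificity(labels, preds):
    """labels/preds: 1 = positive class (e.g. COVID-19 pneumonia),
    0 = negative class (e.g. ICI therapy-related pneumonitis).
    Returns (sensitivity, specificity)."""
    tp = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 1)
    fn = sum(1 for l, p in zip(labels, preds) if l == 1 and p == 0)
    tn = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 0)
    fp = sum(1 for l, p in zip(labels, preds) if l == 0 and p == 1)
    return tp / (tp + fn), tn / (tn + fp)
```

The study's pattern (sensitivity 97.1%, specificity 14.3%) corresponds to an algorithm that labels nearly everything as COVID-19: almost no false negatives, but most ICI pneumonitis cases misclassified as positives.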


2021
Vol 10 (1), pp. 21
Author(s): Omar Nassef, Toktam Mahmoodi, Foivos Michelinakis, Kashif Mahmood, Ahmed Elmokashfi

This paper presents a data-driven framework for performance optimisation of Narrowband IoT user equipment. The proposed framework is an edge micro-service that suggests one-time configurations to user equipment communicating with a base station. Suggested configurations are delivered from a Configuration Advocate to improve energy consumption, delay, throughput, or a combination of these metrics, depending on the user-end device and the application. Reinforcement learning utilising gradient descent and a genetic algorithm is adopted synchronously with machine and deep learning algorithms to predict the environmental states and suggest an optimal configuration. The results highlight the adaptability of the deep neural network in predicting intermediary environmental states, and show the superior performance of the genetic reinforcement learning algorithm in performance optimisation.
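A genetic search over device configurations can be sketched as follows, encoding a configuration as a binary vector scored by a fitness function (e.g. predicted throughput/energy trade-off). All hyper-parameters, the binary encoding, and the toy fitness below are illustrative assumptions, not the paper's settings:

```python
import random

def genetic_search(fitness, n_params, pop_size=20, generations=30,
                   mutation_rate=0.1, rng=random.Random(42)):
    """Minimal genetic algorithm over binary configuration vectors.
    Selection keeps the fittest half each generation; offspring are
    produced by one-point crossover followed by bit-flip mutation."""
    pop = [[rng.randint(0, 1) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]          # elitist selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n_params)    # one-point crossover
            child = a[:cut] + b[cut:]
            children.append([bit ^ (rng.random() < mutation_rate)
                             for bit in child]) # bit-flip mutation
        pop = parents + children
    return max(pop, key=fitness)

# Toy usage: maximise the number of enabled features as a stand-in
# for a learned fitness predicting the chosen performance metric.
best = genetic_search(sum, n_params=8)
```

In the framework described above, the fitness would instead come from the machine/deep learning models predicting environmental states for a candidate configuration.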

