A deep wavelet sparse autoencoder method for online and automatic electrooculographical artifact removal

2020 ◽  
Vol 32 (24) ◽  
pp. 18255-18270
Author(s):  
Hoang-Anh The Nguyen ◽  
Thanh Ha Le ◽  
The Duy Bui
2020 ◽  
Vol 132 (6) ◽  
pp. 1952-1960 ◽  
Author(s):  
Seung-Bo Lee ◽  
Hakseung Kim ◽  
Young-Tak Kim ◽  
Frederick A. Zeiler ◽  
Peter Smielewski ◽  
...  

OBJECTIVE
Monitoring intracranial and arterial blood pressure (ICP and ABP, respectively) provides crucial information regarding the neurological status of patients with traumatic brain injury (TBI). However, these signals are often heavily affected by artifacts, which may significantly reduce the reliability of the clinical determinations derived from them. The goal of this work was to eliminate signal artifacts from continuous ICP and ABP monitoring via deep learning techniques and to assess the changes in the prognostic capacities of clinical parameters after artifact elimination.
METHODS
The first 24 hours of ICP and ABP monitoring in a total of 309 patients with TBI were retrospectively analyzed. An artifact elimination model for ICP and ABP was constructed via a stacked convolutional autoencoder (SCAE) and convolutional neural network (CNN) with 10-fold cross-validation tests. The prevalence and prognostic capacity of ICP- and ABP-related clinical events were compared before and after artifact elimination.
RESULTS
The proposed SCAE-CNN model exhibited reliable accuracy in eliminating ABP and ICP artifacts (net prediction rates of 97% and 94%, respectively). The prevalence of ICP- and ABP-related clinical events (i.e., systemic hypotension, intracranial hypertension, cerebral hypoperfusion, and poor cerebrovascular reactivity) all decreased significantly after artifact removal.
CONCLUSIONS
The SCAE-CNN model can be reliably used to eliminate artifacts, which significantly improves the reliability and efficacy of ICP- and ABP-derived clinical parameters for prognostic determinations after TBI.
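As a minimal illustration of the reconstruction-error principle that underlies autoencoder-based artifact screening (not the authors' SCAE-CNN, which pairs a stacked convolutional autoencoder with a CNN classifier), the NumPy sketch below trains a tiny single-layer autoencoder on clean signal windows and flags a corrupted window whose reconstruction error exceeds a threshold; every size, rate, and threshold here is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_windows(signal, width):
    """Slice a 1-D signal into non-overlapping windows."""
    n = len(signal) // width
    return signal[: n * width].reshape(n, width)

# Synthetic "clean" pressure-like signal: slow oscillation plus mild noise.
t = np.linspace(0, 40 * np.pi, 4000)
clean = np.sin(t) + 0.05 * rng.standard_normal(t.size)

W = 40                      # window width (samples)
X = make_windows(clean, W)  # training windows, all artifact-free

# Tiny single-layer autoencoder trained by plain gradient descent.
H = 8                                   # bottleneck size
We = 0.1 * rng.standard_normal((W, H))  # encoder weights
Wd = 0.1 * rng.standard_normal((H, W))  # decoder weights
lr = 0.01
for _ in range(300):
    Z = np.tanh(X @ We)                 # encode
    Xhat = Z @ Wd                       # decode (linear output)
    err = Xhat - X
    # Backpropagate the mean-squared reconstruction error.
    gWd = Z.T @ err / len(X)
    gZ = err @ Wd.T * (1 - Z ** 2)
    gWe = X.T @ gZ / len(X)
    We -= lr * gWe
    Wd -= lr * gWd

def recon_error(windows):
    """Per-window mean-squared reconstruction error."""
    Z = np.tanh(windows @ We)
    return np.mean((Z @ Wd - windows) ** 2, axis=1)

# Threshold taken from the training-error distribution.
thresh = np.percentile(recon_error(X), 99) * 1.5

# Corrupt one window with a spike artifact and flag it.
test = make_windows(clean.copy(), W)
test[10] += 5.0 * rng.standard_normal(W)   # injected artifact
flags = recon_error(test) > thresh
print(int(flags.sum()), "window(s) flagged as artifact")
```

Clean windows reconstruct well because the autoencoder was trained on them; the injected spike lies off the learned manifold, so its error stands far above the threshold.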


2021 ◽  
Vol 34 (1) ◽  
Author(s):  
Zhe Yang ◽  
Dejan Gjorgjevikj ◽  
Jianyu Long ◽  
Yanyang Zi ◽  
Shaohui Zhang ◽  
...  

Abstract
Supervised fault diagnosis typically assumes that all types of machinery failure are known. In practice, however, unknown types of defect, i.e., novelties, may occur, and their detection is a challenging task. In this paper, a novel method is developed for both fault diagnosis and novelty detection. To this end, a sparse autoencoder-based multi-head deep neural network (DNN) is presented to jointly learn a shared encoding representation for both unsupervised reconstruction and supervised classification of the monitoring data. The detection of novelties is based on the reconstruction error. Moreover, the computational burden is reduced by directly training the multi-head DNN with the rectified linear unit activation function, instead of performing the pre-training and fine-tuning phases required for classical DNNs. The proposed method is applied to a benchmark bearing case study and to experimental data acquired from a delta 3D printer. The results show that its performance is satisfactory in both novelty detection and fault diagnosis, outperforming other state-of-the-art methods. This research thus proposes a fault diagnosis method that can not only diagnose known types of defects but also detect unknown ones.


Animals ◽  
2021 ◽  
Vol 11 (6) ◽  
pp. 1485
Author(s):  
Kaidong Lei ◽  
Chao Zong ◽  
Xiaodong Du ◽  
Guanghui Teng ◽  
Feiqi Feng

This study proposes a method and device for the intelligent mobile monitoring of oestrus on a sow farm, applied in the field of sow production. A bionic boar model that imitates the sounds, smells, and touch of real boars was built to detect the oestrus of sows after weaning. Machine vision technology was used to identify the interactive behaviour between empty sows and bionic boars and to establish deep belief network (DBN), sparse autoencoder (SAE), and support vector machine (SVM) models, and the resulting recognition accuracy rates were 96.12%, 98.25%, and 90.00%, respectively. The interaction times and frequencies between the sow and the bionic boar and the static behaviours of both ears during heat were further analysed. The results show that there is a strong correlation between the duration of contact between the oestrus sow and the bionic boar and the static behaviours of both ears. The average contact duration between the sows in oestrus and the bionic boars was 29.7 s/3 min, and the average duration in which the ears of the oestrus sows remained static was 41.3 s/3 min. The interactions between the sow and the bionic boar were used as the basis for judging the sow’s oestrus states. In contrast with the methods of other studies, the proposed innovative design for recyclable bionic boars can be used to check emotions, and machine vision technology can be used to quickly identify oestrus behaviours. This approach can more accurately obtain the oestrus duration of a sow and provide a scientific reference for a sow’s conception time.
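The two behavioural averages reported above (29.7 s/3 min of boar contact and 41.3 s/3 min of static ears) suggest how a simple rule-of-thumb check might look; the study itself used DBN, SAE, and SVM classifiers on video, so the rule and its threshold fraction below are purely hypothetical:

```python
# Reported behavioural averages for oestrus sows (per 3-minute window).
CONTACT_AVG = 29.7     # s of contact with the bionic boar
EAR_STATIC_AVG = 41.3  # s with both ears static

def likely_oestrus(contact_s, ear_static_s, frac=0.5):
    """Hypothetical rule: flag a sow when both behaviours reach a
    fraction of the reported oestrus averages (frac is an arbitrary
    assumption, not a value from the study)."""
    return (contact_s >= frac * CONTACT_AVG and
            ear_static_s >= frac * EAR_STATIC_AVG)

print(likely_oestrus(30.0, 40.0))  # near the reported averages -> True
print(likely_oestrus(5.0, 10.0))   # far below both averages -> False
```

A learned classifier, as used in the study, would replace this fixed threshold with boundaries fitted to labelled behaviour sequences.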


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Muhammad Aqeel Aslam ◽  
Cuili Xue ◽  
Yunsheng Chen ◽  
Amin Zhang ◽  
Manhua Liu ◽  
...  

Abstract
Deep learning is an emerging tool that is regularly used for disease diagnosis in the medical field, and a new research direction has developed around the detection of early-stage gastric cancer. Computer-aided diagnosis (CAD) systems reduce the mortality rate due to their effectiveness. In this study, we propose a new feature-extraction method that uses a stacked sparse autoencoder to extract discriminative features from unlabeled breath-sample data. A softmax classifier was then integrated into the proposed feature-extraction method to classify gastric cancer from the breath samples. Precisely, we identified fifty peaks in each spectrum to distinguish early gastric cancer (EGC), advanced gastric cancer (AGC), and healthy persons. This CAD system reduces the distance between the input and output by learning the features, and it preserves the structure of the input breath-sample data set. After the completion of unsupervised training, the autoencoders were cascaded with a softmax classifier to develop a deep stacked sparse autoencoder neural network. Finally, the developed neural network was fine-tuned with labeled training data to make the model more reliable and repeatable. The proposed architecture exhibits excellent results, with an overall accuracy of 98.7% for advanced gastric cancer classification and 97.3% for early gastric cancer detection using breath analysis. Moreover, the developed model produces excellent recall, precision, and F-score values, making it suitable for clinical application.
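A heavily simplified NumPy sketch of the pipeline above: one sparse-autoencoder stage (here with an L1 penalty on the codes) trained without labels, followed by a softmax classifier on the learned codes. A stacked version would repeat the unsupervised stage on the codes, and fine-tuning would then update all layers jointly; every dimension and hyperparameter here is an assumption, not the paper's configuration:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy stand-in for spectral "breath" features: 3 classes, 20-D inputs.
def make_class(center, n=150):
    return center + 0.2 * rng.standard_normal((n, center.size))

centers = [rng.normal(0, 1, 20) for _ in range(3)]
X = np.vstack([make_class(c) for c in centers])
y = np.repeat([0, 1, 2], 150)

# --- Stage 1: unsupervised sparse autoencoder (L1 penalty on codes). ---
D, H = 20, 10
We = 0.2 * rng.standard_normal((D, H))   # encoder
Wd = 0.2 * rng.standard_normal((H, D))   # decoder
lr, beta = 0.05, 1e-3                    # beta weights the sparsity penalty
for _ in range(400):
    Z = np.maximum(0, X @ We)            # ReLU codes
    Xhat = Z @ Wd
    err = (Xhat - X) / len(X)
    gWd = Z.T @ err
    # Gradient of reconstruction loss plus L1 sparsity term on the codes.
    gZ = err @ Wd.T + beta * np.sign(Z) / len(X)
    gWe = X.T @ (gZ * (Z > 0))
    We -= lr * gWe
    Wd -= lr * gWd

# --- Stage 2: softmax classifier on the learned sparse codes. ---
Z = np.maximum(0, X @ We)
C = 3
Wc = np.zeros((H, C))
Y = np.eye(C)[y]
for _ in range(400):
    logits = Z @ Wc
    P = np.exp(logits - logits.max(1, keepdims=True))
    P /= P.sum(1, keepdims=True)
    Wc -= lr * Z.T @ (P - Y) / len(Z)

acc = ((np.maximum(0, X @ We) @ Wc).argmax(1) == y).mean()
print("training accuracy:", acc)
```

The unsupervised stage needs no class labels, which is the point of the approach: discriminative structure is learned from unlabeled samples first, and only the final classifier (and fine-tuning) consumes the labeled data.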

