Abstract 268: A Convolutional Neural Network for Real-Time Discrimination of Shock-Indicated Rhythms During CPR

Circulation ◽  
2018 ◽  
Vol 138 (Suppl_2) ◽  
Author(s):  
Tetsuo Hatanaka ◽  
Hiroshi Kaneko ◽  
Aki Nagase ◽  
Seishiro Marukawa

Introduction: An interruption of chest compressions during CPR adversely affects patient outcome. Currently, however, periodic interruptions are unavoidable to assess the ECG rhythm and to deliver shocks for defibrillation when indicated. Evidence suggests a 5-second interruption immediately before a shock may translate into an approximately 15% reduction in the chance of survival. The objective of this study was to build, train and validate a convolutional neural network (artificial intelligence) for detecting shock-indicated rhythms in ECG signals corrupted with chest-compression artifacts during CPR. Methods: Our convolutional neural network consisted of 7 convolutional layers, 3 pooling layers and 3 fully connected layers for binary classification (shock-indicated vs. non-shock-indicated). The input was a spectrogram of 56 frequency bins by 80 time segments, transformed from a 12.16-second ECG signal. From AEDs used for 236 patients with out-of-hospital cardiac arrest, 1,223 annotated ECG strips were extracted. Ventricular fibrillation and wide-QRS ventricular tachycardia with HR >180 beats/min were annotated as shock-indicated, and all other rhythms as non-shock-indicated. The total length of the strips was 8:49:57 (hr:min:sec) for shock-indicated and 8:02:07 for non-shock-indicated rhythms. The strips were converted into 465,102 spectrograms, allowing partial overlaps, and fed into the neural network for training. The validation data set was obtained from a separate group of 225 patients, from which annotated ECG strips (total duration 62:11:28) were extracted, yielding 43,800 spectrograms. Results: After training, both the sensitivity and specificity of detecting shock-indicated rhythms on the training data set were 99.7%-100% (varying with training instances). The sensitivity and specificity on the validation data set were 99.3%-99.7% and 99.3%-99.5%, respectively.
Conclusions: The convolutional neural network accurately and continuously evaluated ECG rhythms during CPR, potentially obviating the need for rhythm checks for defibrillation during CPR.
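A rough sketch of the spectrogram front end this abstract describes. The abstract specifies only the 56-bins-by-80-segments input shape and the 12.16-second strip length; the sampling rate, window size, hop, and cropping strategy below are all assumptions for illustration.

```python
import numpy as np

# Hypothetical parameters: the abstract gives only the 56x80 spectrogram
# shape and the 12.16 s strip length, not the sampling rate or STFT setup.
FS = 250                      # assumed AED sampling rate, Hz
N_SAMPLES = int(12.16 * FS)   # 12.16-second strip -> 3040 samples
WIN, HOP = 128, 36            # assumed STFT window and hop

def ecg_to_spectrogram(ecg, n_freq=56, n_time=80):
    """Short-time FFT of an ECG strip, cropped to the 56 frequency bins
    by 80 time segments that the network takes as input."""
    frames = [ecg[i:i + WIN] for i in range(0, len(ecg) - WIN + 1, HOP)]
    mags = [np.abs(np.fft.rfft(f * np.hanning(WIN))) for f in frames]
    spec = np.stack(mags, axis=1)     # (freq_bins, time_frames)
    return spec[:n_freq, :n_time]     # crop to the network's input shape

ecg = np.random.default_rng(0).standard_normal(N_SAMPLES)  # stand-in signal
spec = ecg_to_spectrogram(ecg)
print(spec.shape)  # (56, 80)
```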

Sensors ◽  
2019 ◽  
Vol 19 (19) ◽  
pp. 4161 ◽  
Author(s):  
Hang ◽  
Zhang ◽  
Chen ◽  
Zhang ◽  
Wang

Plant leaf diseases are closely related to people's daily life. Because of the wide variety of diseases, identifying and classifying them by eye is not only time-consuming and labor-intensive but also error-prone. We therefore propose a deep learning-based method to identify and classify plant leaf diseases. The proposed method takes advantage of a neural network to extract the characteristics of diseased regions and thus classify target disease areas. To address long training convergence times and excessive model parameters, the traditional convolutional neural network was improved by combining an Inception module, a squeeze-and-excitation (SE) module and a global pooling layer. Through the Inception structure, feature maps from the convolutional layers were fused at multiple scales to improve accuracy on the leaf disease dataset. Finally, a global average pooling layer was used instead of the fully connected layer to reduce the number of model parameters. Compared with several traditional convolutional neural networks, our model yielded better performance, achieving an accuracy of 91.7% on the test data set. At the same time, the number of model parameters and the training time were greatly reduced. The experimental classification of plant leaf diseases indicates that our method is feasible and effective.
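A minimal sketch of the squeeze-and-excitation idea the abstract combines with Inception: pool each channel to a single descriptor, pass it through a small bottleneck, and rescale the channels. The weights here are random placeholders, not the trained model.

```python
import numpy as np

def se_block(feature_map, reduction=4):
    """Squeeze-and-excitation: global-average-pool each channel, run the
    descriptors through a two-layer bottleneck, and reweight channels."""
    c, h, w = feature_map.shape
    # Squeeze: global average pooling -> one descriptor per channel
    z = feature_map.mean(axis=(1, 2))                        # (c,)
    # Excitation: bottleneck MLP (random weights, for illustration only)
    rng = np.random.default_rng(0)
    w1 = rng.standard_normal((c // reduction, c)) * 0.1
    w2 = rng.standard_normal((c, c // reduction)) * 0.1
    s = 1.0 / (1.0 + np.exp(-(w2 @ np.maximum(w1 @ z, 0))))  # sigmoid gate
    # Scale: reweight each channel of the feature map
    return feature_map * s[:, None, None]

x = np.ones((8, 4, 4))   # toy 8-channel feature map
y = se_block(x)
```

The global-average-pooling step is the same operation the authors use in place of the fully connected layer at the network's output.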


2021 ◽  
Author(s):  
Yash Chauhan ◽  
Prateek Singh

Coin recognition systems have numerous applications, from vending and slot machines to banking and management firms, which translates into a high volume of research on methods for such classification. In recent years, academic research has shifted towards a computer vision approach to sorting coins due to advances in deep learning. However, most of the documented work uses ‘Transfer Learning’, in which a pre-trained model of a fixed architecture is reused as a starting point for training. While this approach saves a great deal of time and effort, the generic nature of the pre-trained model can become a performance bottleneck on a specialized problem such as coin classification. This study develops a convolutional neural network (CNN) model from scratch and tests it against a widely used general-purpose architecture, GoogLeNet. By comparing the performance of our model with that of GoogLeNet (documented in various previous studies), we show that a simpler, specialized architecture is better suited to the coin classification problem than a more complex general one. The model developed in this study is trained and tested on 720 and 180 images of Indian coins of different denominations, respectively. The final accuracy is 91.62% on the training data and 90.55% on the validation data.
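For readers unfamiliar with what "from scratch" entails, the core operation of any CNN layer is a sliding-window convolution. A naive single-channel version (not the authors' code, just the underlying operation):

```python
import numpy as np

def conv2d(image, kernel):
    """Naive valid-mode 2-D convolution (cross-correlation), the basic
    building block of a CNN layer, for one single-channel image."""
    kh, kw = kernel.shape
    oh = image.shape[0] - kh + 1
    ow = image.shape[1] - kw + 1
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

edge = np.array([[1., 0., -1.]] * 3)      # simple vertical-edge kernel
img = np.tile([0., 0., 1., 1.], (4, 1))   # 4x4 step image
out = conv2d(img, edge)
print(out.shape)  # (2, 2)
```

A trained CNN learns many such kernels per layer; frameworks implement the same operation with vectorized code rather than Python loops.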


2019 ◽  
Vol 2 (1) ◽  
Author(s):  
Jeffrey Micher

We present a method for building a morphological generator from the output of an existing analyzer for Inuktitut, in the absence of a two-way finite-state transducer that would normally provide this functionality. We make use of a sequence-to-sequence neural network which “translates” underlying Inuktitut morpheme sequences into surface character sequences. The neural network uses only the previous and the following morphemes as context. We report a morpheme accuracy of approximately 86%. We are able to increase this accuracy slightly by passing deep morphemes directly to the output for unknown morphemes. We do not see significant improvement when increasing the training data set size, and postulate possible causes for this.
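One plausible way to frame the training data the abstract describes: each underlying morpheme, flanked by its previous and following morphemes as context, maps to its surface character sequence. The boundary tokens, separator, and the example morpheme strings below are invented for illustration, not taken from the paper's actual data format.

```python
def make_training_pairs(morphemes, surfaces):
    """Build (source, target) pairs for a generator that sees only the
    previous and following morphemes as context: source is the context
    window, target is the space-separated surface character sequence."""
    pairs = []
    for i, (m, s) in enumerate(zip(morphemes, surfaces)):
        prev = morphemes[i - 1] if i > 0 else "<s>"
        nxt = morphemes[i + 1] if i < len(morphemes) - 1 else "</s>"
        pairs.append((f"{prev} | {m} | {nxt}", " ".join(s)))
    return pairs

# Hypothetical underlying/surface forms, purely for shape of the data
pairs = make_training_pairs(["taku", "-vunga"], ["taku", "vunga"])
```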


2021 ◽  
Vol 2021 ◽  
pp. 1-8
Author(s):  
Zhaojie Wang ◽  
Qingzhe Lv ◽  
Zhaobo Lu ◽  
Yilei Wang ◽  
Shengjie Yue

The incentive mechanism is the key to the success of the Bitcoin system as a permissionless blockchain. It encourages participants to contribute their computing resources to ensure the correctness and consistency of user transaction records. Selfish mining attacks, however, prove that Bitcoin’s incentive mechanism is not incentive-compatible, contrary to traditional views. Selfish mining may cause a loss of mining power, especially among honest participants, which poses great security challenges to the Bitcoin system. Although there is a series of studies against selfish mining behaviors, these works have certain limitations: either the existing protocol needs to be modified or the detection performance is unsatisfactory. We propose ForkDec, a high-accuracy system for selfish mining detection based on a fully connected neural network, for the purpose of effectively deterring selfish attackers. The network contains a total of 100 neurons (10 hidden layers with 10 neurons per layer), trained on a set of about 200,000 fork samples. The data set used to train the model was generated by a Bitcoin mining simulator that we built beforehand. We also applied ForkDec to the test set to evaluate attack detection and achieved a detection accuracy of 99.03%. The evaluation demonstrates that ForkDec has practical application value and promising research prospects.
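A forward-pass sketch of the stated topology: 10 hidden layers of 10 neurons ending in a binary decision. The abstract does not name the activation functions or the input features, so the ReLU hidden units, softmax output, and 8-dimensional fork-feature input below are assumptions.

```python
import numpy as np

def forward(x, weights, biases):
    """Forward pass of a fully connected detector with 10 hidden layers
    of 10 neurons each (ForkDec's stated size), ending in a two-way
    softmax over {selfish fork, honest fork}."""
    h = x
    for w, b in zip(weights[:-1], biases[:-1]):
        h = np.maximum(w @ h + b, 0)           # ReLU hidden layer (assumed)
    logits = weights[-1] @ h + biases[-1]
    e = np.exp(logits - logits.max())
    return e / e.sum()                         # class probabilities

rng = np.random.default_rng(1)
dims = [8] + [10] * 10 + [2]                   # input -> 10x10 hidden -> binary
ws = [rng.standard_normal((o, i)) * 0.3 for i, o in zip(dims, dims[1:])]
bs = [np.zeros(o) for o in dims[1:]]
p = forward(rng.standard_normal(8), ws, bs)    # untrained, for shape only
```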


2019 ◽  
Vol 9 (16) ◽  
pp. 3355 ◽  
Author(s):  
Min Seop Lee ◽  
Yun Kyu Lee ◽  
Dong Sung Pae ◽  
Myo Taeg Lim ◽  
Dong Won Kim ◽  
...  

Physiological signals contain considerable information regarding emotions. This paper investigated the ability of photoplethysmogram (PPG) signals to recognize emotion, adopting a two-dimensional emotion model based on valence and arousal to represent human feelings. The main purpose was to recognize short-term emotion using a single PPG signal pulse. We used a one-dimensional convolutional neural network (1D CNN) to extract PPG signal features to classify valence and arousal. We split the PPG signal into single 1.1 s pulses and normalized them for input to the neural network based on each subject's personal maximum and minimum values. We chose the Database for Emotion Analysis using Physiological signals (DEAP) for the experiment and tested the 1D CNN as a binary classifier (high or low valence and arousal), achieving short-term (1.1 s) emotion recognition with 75.3% and 76.2% accuracy for valence and arousal, respectively, on the DEAP data.
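The per-subject min-max normalization described above is simple enough to state directly. The 128 Hz sampling rate below is an assumption (chosen to make a 1.1 s pulse concrete); the abstract does not give it.

```python
import numpy as np

def normalize_pulse(pulse, personal_min, personal_max):
    """Scale one PPG pulse into [0, 1] using the subject's personal
    extremes, the per-person normalization described in the abstract."""
    return (pulse - personal_min) / (personal_max - personal_min)

# One 1.1 s pulse at an assumed 128 Hz sampling rate -> 140 samples
fs = 128
t = np.linspace(0, 1.1, int(1.1 * fs))
pulse = np.sin(2 * np.pi * t)                 # toy pulse waveform
x = normalize_pulse(pulse, pulse.min(), pulse.max())
```

Using the subject's own extremes (rather than dataset-wide ones) removes inter-subject amplitude differences before the 1D CNN sees the pulse.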


2021 ◽  
Author(s):  
O.N. Cheremisinova ◽  
V.S. Rostovtsev

In any convolutional neural network (CNN) there are hyperparameters: parameters that are not adjusted during training but are set when the CNN model is built. Their choice affects the quality of the neural network, yet to date there are no uniform rules for setting them. Hyperparameters can be tuned fairly accurately by hand, but there are also automatic optimization methods; these reduce the complexity of tuning a neural network and require no prior experience with hyperparameter optimization. The purpose of this article is to analyze automatic methods for selecting hyperparameters in order to reduce the complexity of tuning a CNN. Optimization methods. Several automatic methods for selecting hyperparameters are considered: grid search, random search and model-based optimization (Bayesian and evolutionary). The most promising are the model-based methods. They are used when no closed-form expression for the objective function is available, but its (possibly noisy) observations can be obtained for selected values. The Bayesian approach seeks a trade-off between exploration (proposing hyperparameters with high uncertainty that may give a noticeable improvement) and exploitation (proposing hyperparameters that are likely to work as well as those already observed, usually values very close to them). Evolutionary optimization is based on the principle of genetic algorithms: a combination of hyperparameter values is treated as an individual in a population, and recognition accuracy on a test sample as the fitness function. Through crossover, mutation and selection, the optimal values of the neural network's hyperparameters are found. The authors propose a hybrid method whose algorithm combines Bayesian and evolutionary optimization.
First the neural network is tuned with the Bayesian method; then the first generation of the evolutionary method is formed from the N best parameter sets, and the evolutionary method continues the tuning. An experimental study of hyperparameter optimization of a convolutional neural network by the Bayesian, evolutionary and hybrid methods was carried out. During Bayesian optimization, 112 different CNN architectures were considered, with root-mean-square errors on the validation set ranging from 1629 to 11503. The CNN with the smallest error was selected; its RMSE on the test data was 55. At the start of evolutionary optimization, 8 different CNN architectures were randomly generated, with root-mean-square errors on the validation data from 2587 to 3684. Over 14 generations, CNNs with new hyperparameter sets were obtained whose validation errors decreased to values from 1424 to 1812. The CNN with the smallest error was selected; its RMSE on the test data was 48. The hybrid method combines the advantages of both approaches and finds an architecture no worse than either: the optimal CNN architecture it produced (the one with the smallest root-mean-square error on the validation data) had an RMSE of 49 on the test data. The results show that the quality of optimization for all three methods is approximately the same. The Bayesian approach explores the entire hyperparameter space; to obtain greater accuracy with it, the optimization time must be increased.
The evolutionary algorithm selects the best combinations of hyperparameters from the initial population, so the initially generated population plays a large role. In addition, due to the peculiarities of the algorithm, this method is prone to falling into local extrema. However, it parallelizes well, so the optimization process can be accelerated. The experiments show that, on problems similar to the one considered and with a relatively small CNN, all the considered optimization methods achieve approximately the same quality of neural network tuning. The presented results make it possible to choose among the considered hyperparameter optimization methods when developing a CNN, based on the specifics of the problem being solved and the available resources.
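The hybrid scheme above can be sketched as follows. For brevity the first phase below uses random sampling as a stand-in for the Bayesian optimizer (an assumption; the article uses actual Bayesian optimization), and only mutation, not crossover, is shown in the evolutionary phase. The point is the hand-off: the N best candidates from phase one seed generation zero of phase two.

```python
import random

def hybrid_search(evaluate, space, n_probe=20, pop=6, n_gen=5, seed=0):
    """Hybrid hyperparameter tuning sketch: a probing phase (random
    sampling here, standing in for Bayesian optimization) scores
    candidates; the `pop` best seed the evolutionary phase, which
    mutates and selects over generations. `evaluate` returns the
    validation error to be minimized."""
    rng = random.Random(seed)
    sample = lambda: {k: rng.choice(v) for k, v in space.items()}
    scored = sorted((sample() for _ in range(n_probe)), key=evaluate)
    population = scored[:pop]                   # N best seed generation 0
    for _ in range(n_gen):
        children = []
        for parent in population:
            child = dict(parent)
            k = rng.choice(list(space))         # mutate one hyperparameter
            child[k] = rng.choice(space[k])
            children.append(child)
        population = sorted(population + children, key=evaluate)[:pop]
    return population[0]

# Toy objective standing in for validation RMSE
space = {"lr": [0.1, 0.01, 0.001], "filters": [8, 16, 32]}
best = hybrid_search(
    lambda h: abs(h["lr"] - 0.01) + abs(h["filters"] - 32), space)
```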


2020 ◽  
Vol 83 (6) ◽  
pp. 602-614
Author(s):  
Hidir Selcuk Nogay ◽  
Hojjat Adeli

<b><i>Introduction:</i></b> The diagnosis of epilepsy follows a lengthy process that depends entirely on the attending physician, and the human factor may cause erroneous diagnoses in the analysis of the EEG signal. In the past 2 decades, many advanced signal processing and machine learning methods have been developed for the detection of epileptic seizures. However, many of these methods require large data sets and complex operations. <b><i>Methods:</i></b> In this study, an end-to-end machine learning model is presented for the detection of epileptic seizures using a pretrained deep two-dimensional convolutional neural network (CNN) and the concept of transfer learning. The EEG signal is converted directly into visual data via a spectrogram and used directly as input. <b><i>Results:</i></b> The authors analyzed the results of training the proposed pretrained AlexNet CNN model. Both binary and ternary classifications were performed without any extra procedure such as feature extraction. By creating the data set from short-term spectrogram images, the authors achieved 100% accuracy for binary classification of epileptic seizure detection and 100% for ternary classification. <b><i>Discussion/Conclusion:</i></b> The proposed automatic identification and classification model can help in the early diagnosis of epilepsy, thus providing the opportunity for effective early treatment.
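The transfer-learning setup the Methods section describes amounts to freezing a pretrained feature extractor and training only a new classification head. A minimal sketch (not the authors' pipeline): the feature vector below stands in for the output of a frozen pretrained CNN such as AlexNet, whose penultimate fully connected layer is 4096-dimensional; the head weights are untrained placeholders.

```python
import numpy as np

def classify_with_new_head(features, head_w):
    """Transfer-learning sketch: `features` stand in for the output of a
    frozen pretrained CNN; only the new head `head_w` would be trained
    on the spectrogram images. Returns softmax class probabilities."""
    logits = features @ head_w.T
    e = np.exp(logits - logits.max())
    return e / e.sum()

rng = np.random.default_rng(0)
feats = rng.standard_normal(4096)   # AlexNet penultimate-layer size
probs_binary = classify_with_new_head(feats, rng.standard_normal((2, 4096)) * 0.01)
probs_ternary = classify_with_new_head(feats, rng.standard_normal((3, 4096)) * 0.01)
```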


2020 ◽  
Vol 500 (2) ◽  
pp. 1633-1644
Author(s):  
Róbert Beck ◽  
István Szapudi ◽  
Heather Flewelling ◽  
Conrad Holmberg ◽  
Eugene Magnier ◽  
...  

The Pan-STARRS1 (PS1) 3π survey is a comprehensive optical imaging survey of three quarters of the sky in the grizy broad-band photometric filters. We present the methodology used in assembling the source classification and photometric redshift (photo-z) catalogue for PS1 3π Data Release 1, titled Pan-STARRS1 Source Types and Redshifts with Machine learning (PS1-STRM). For both main data products, we use neural network architectures, trained on a compilation of public spectroscopic measurements that has been cross-matched with PS1 sources. We quantify the parameter-space coverage of our training data set and flag extrapolation using self-organizing maps. We perform a Monte Carlo sampling of the photometry to estimate photo-z uncertainty. The final catalogue contains 2,902,054,648 objects. On our validation data set, for non-extrapolated sources, we achieve an overall classification accuracy of 98.1 per cent for galaxies, 97.8 per cent for stars, and 96.6 per cent for quasars. Regarding the galaxy photo-z estimation, we attain an overall bias of ⟨Δz_norm⟩ = 0.0005, a standard deviation of σ(Δz_norm) = 0.0322, a median absolute deviation of MAD(Δz_norm) = 0.0161, and an outlier fraction of P(|Δz_norm| > 0.15) = 1.89 per cent. The catalogue will be made available as a high-level science product via the Mikulski Archive for Space Telescopes.
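The Monte Carlo photo-z uncertainty estimate mentioned above can be illustrated in a few lines: perturb each magnitude by its Gaussian photometric error, re-run the predictor, and take the spread of the resulting redshifts. The linear toy predictor and the magnitude values below are invented stand-ins for the trained PS1-STRM network and real grizy photometry.

```python
import numpy as np

def photoz_uncertainty(predict, mags, mag_errs, n_draws=500, seed=0):
    """Monte Carlo photo-z error estimate: draw perturbed photometry
    from Gaussian errors, re-run the (stand-in) predictor, and return
    the mean and spread of the predicted redshifts."""
    rng = np.random.default_rng(seed)
    draws = rng.normal(mags, mag_errs, size=(n_draws, len(mags)))
    zs = np.array([predict(d) for d in draws])
    return zs.mean(), zs.std()

# Toy stand-in predictor: a linear function of five grizy magnitudes
predict = lambda m: 0.02 * m.sum() - 1.5
mags = np.array([20.0, 19.5, 19.2, 19.0, 18.8])
mu, sigma = photoz_uncertainty(predict, mags, np.full(5, 0.05))
```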


2021 ◽  
pp. 1-12
Author(s):  
Qian Wang ◽  
Wenfang Zhao ◽  
Jiadong Ren

An Intrusion Detection System (IDS) can reduce the losses caused by intrusions and protect users’ information security. The effectiveness of an IDS depends on the performance of the algorithm used to identify intrusions, and traditional machine learning algorithms are ill-suited to intrusion data that are high-dimensional, nonlinear and imbalanced. This paper therefore proposes an intrusion detection algorithm based on an image-enhanced convolutional neural network (ID-IE-CNN). First, building on deep learning image processing techniques, oversampling is used to increase the amount of original data and achieve class balance. Second, the one-dimensional data are converted into two-dimensional image data, and convolutional and pooling layers extract the main features of the image to reduce the dimensionality. Third, the Tanh function is introduced as an activation function to fit nonlinear data, a fully connected layer integrates local information, and the generalization ability of the model is improved with Dropout. Finally, a Softmax classifier predicts the intrusion class. Experiments on the KDDCup99 data set show that ID-IE-CNN outperforms competing algorithms in both binary and multi-class classification, verifying its superiority.
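The 1-D-to-2-D conversion step can be sketched directly. The 7x7 layout below is an assumption (the paper does not state its image size here); it is simply the smallest square holding the 41 numeric features a KDDCup99 record has after categorical encoding.

```python
import numpy as np

def record_to_image(record, side=7):
    """Convert one 1-D intrusion record into a 2-D 'image' for the CNN:
    zero-pad to side*side values and reshape to a square."""
    padded = np.zeros(side * side)
    padded[:len(record)] = record
    return padded.reshape(side, side)

# Stand-in for one encoded KDDCup99 record (41 numeric features)
img = record_to_image(np.arange(41, dtype=float))
```

Once records are square "images", standard convolution/pooling layers can be applied exactly as in vision tasks.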


2013 ◽  
Vol 373-375 ◽  
pp. 1212-1219
Author(s):  
Afrias Sarotama ◽  
Benyamin Kusumoputro

A good model is necessary in order to design a controller for a system off-line. This is especially beneficial when implementing new advanced control schemes in an Unmanned Aerial Vehicle (UAV). Considering the safety and benefit of off-line tuning of UAV controllers, this paper identifies a nonlinear dynamic MIMO UAV system from input-output data collected during test flights (36,250 data samples). These input-output flight data are grouped into two sets. The first, a chirp-signal excitation, is used to train the neural network and determine its parameters (weights). Validation is performed with the second set, which is not used for training and represents a circular UAV flight movement. An artificial neural network is trained on the training set and then excited with the input data of the second set. The outputs predicted by our proposed neural network model closely match the desired outputs (roll, pitch and yaw) produced by the real UAV system.
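A chirp sweeps through a band of frequencies, which is why it is a common excitation for system identification: it exercises the dynamics across that whole band in one run. A minimal linear chirp generator (duration, frequency range, and sampling rate below are illustrative assumptions, not the flight-test values):

```python
import numpy as np

def linear_chirp(duration, f0, f1, fs):
    """Linear chirp excitation: instantaneous frequency sweeps from f0
    to f1 Hz over `duration` seconds, sampled at fs Hz."""
    t = np.arange(0, duration, 1 / fs)
    phase = 2 * np.pi * (f0 * t + (f1 - f0) / (2 * duration) * t**2)
    return t, np.sin(phase)

t, u = linear_chirp(duration=10.0, f0=0.1, f1=5.0, fs=50)
```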

