Autoencoder for ECG signal outlier processing in a system of biometric authentication

2019 ◽  
Vol 24 (1-2) ◽  
pp. 108-117
Author(s):  
Khoma V.V. ◽  
Khoma Y.V. ◽  
Khoma P.P. ◽  
Sabodashko D.V. ◽  
...  

A novel method for ECG signal outlier processing based on autoencoder neural networks is presented in the article. Typically, heartbeats with serious waveform distortions are treated as outliers and are skipped from the authentication pipeline. The main idea of the paper is to correct these waveform distortions rather than skip such heartbeats, in order to provide the system with a broader statistical base. During the experiments, the optimal autoencoder architecture was selected. The open PhysioNet ECG-ID database was used to verify the proposed method. The results were compared with previous studies that considered the correction of anomalies based on a statistical approach. On the one hand, the autoencoder shows slightly lower accuracy than the statistical method; on the other hand, it greatly simplifies the construction of biometric identification systems, since it does not require precise tuning of hyperparameters.
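
The correct-instead-of-skip policy described above can be sketched as a small pipeline: compute each beat's reconstruction error, and replace high-error beats with their reconstruction instead of dropping them. A real system would use the trained autoencoder; here a 3-point moving average stands in for it, and the beat data and threshold are hypothetical.

```python
def reconstruct(beat):
    """Stand-in for a trained autoencoder's output: a 3-point moving average."""
    n = len(beat)
    return [sum(beat[max(0, i - 1):min(n, i + 2)]) /
            len(beat[max(0, i - 1):min(n, i + 2)]) for i in range(n)]

def mse(a, b):
    """Mean squared reconstruction error of one beat."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) / len(a)

def process_beats(beats, threshold):
    """Replace distorted beats with their reconstruction instead of dropping them."""
    processed = []
    for beat in beats:
        approx = reconstruct(beat)
        # A high reconstruction error marks an outlier; keep the cleaned version.
        processed.append(approx if mse(beat, approx) > threshold else beat)
    return processed

clean = [0.0, 1.0, 0.0, -0.2, 0.0]
noisy = [0.0, 1.0, 5.0, -0.2, 0.0]   # spike-distorted beat
result = process_beats([clean, noisy], threshold=0.5)
```

Every beat thus contributes to the statistical base: the clean beat passes through unchanged, while the distorted one is smoothed rather than discarded.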

2012 ◽  
Vol 58 (2) ◽  
pp. 177-192 ◽  
Author(s):  
Marek Parfieniuk ◽  
Alexander Petrovsky

Near-Perfect Reconstruction Oversampled Nonuniform Cosine-Modulated Filter Banks Based on Frequency Warping and Subband MergingA novel method for designing near-perfect reconstruction oversampled nonuniform cosine-modulated filter banks is proposed, which combines frequency warping and subband merging, and thus offers more flexibility than known techniques. On the one hand, desirable frequency partitionings can be better approximated. On the other hand, at the price of only a small loss in partitioning accuracy, both warping strength and number of channels before merging can be adjusted so as to minimize the computational complexity of a system. In particular, the coefficient of the function behind warping can be constrained to be a negative integer power of two, so that multiplications related to allpass filtering can be replaced with more efficient binary shifts. The main idea is accompanied by some contributions to the theory of warped filter banks. Namely, group delay equalization is thoroughly investigated, and it is shown how to avoid significant aliasing by channel oversampling. Our research revolves around filter banks for perceptual processing of sound, which are required to approximate the psychoacoustic scales well and need not guarantee perfect reconstruction.


Energies ◽  
2022 ◽  
Vol 15 (2) ◽  
pp. 588
Author(s):  
Felipe Leite Coelho da Silva ◽  
Kleyton da Costa ◽  
Paulo Canas Rodrigues ◽  
Rodrigo Salas ◽  
Javier Linkolk López-Gonzales

Forecasting the industry’s electricity consumption is essential for energy planning in a given country or region. Thus, this study aims to apply time-series forecasting models (statistical approach and artificial neural network approach) to the industrial electricity consumption in the Brazilian system. For the statistical approach, the Holt–Winters, SARIMA, Dynamic Linear Model, and TBATS (Trigonometric Box–Cox transform, ARMA errors, Trend, and Seasonal components) models were considered. For the approach of artificial neural networks, the NNAR (neural network autoregression) and MLP (multilayer perceptron) models were considered. The results indicate that the MLP model was the one that obtained the best forecasting performance for the electricity consumption of the Brazilian industry under analysis.
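
As a flavor of the statistical baselines named above, here is a minimal sketch of Holt's linear-trend exponential smoothing (the non-seasonal core of the Holt–Winters family). The smoothing parameters and series are illustrative, not the paper's configuration.

```python
def holt_forecast(series, alpha=0.5, beta=0.5, horizon=3):
    """Holt's linear-trend exponential smoothing (no seasonal component)."""
    level, trend = series[0], series[1] - series[0]
    for y in series[1:]:
        prev_level = level
        level = alpha * y + (1 - alpha) * (level + trend)
        trend = beta * (level - prev_level) + (1 - beta) * trend
    # h-step-ahead forecasts extrapolate the last level and trend.
    return [level + h * trend for h in range(1, horizon + 1)]

# A perfectly linear series is extrapolated exactly.
forecast = holt_forecast([10.0, 12.0, 14.0, 16.0], horizon=2)
```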


2020 ◽  
Vol 10 (9) ◽  
pp. 3304 ◽  
Author(s):  
Eko Ihsanto ◽  
Kalamullah Ramli ◽  
Dodi Sudiana ◽  
Teddy Surya Gunawan

The electrocardiogram (ECG) is relatively easy to acquire and has been used for reliable biometric authentication. Despite growing interest in ECG authentication, there are still two main problems that need to be tackled, i.e., accuracy and processing speed. Therefore, this paper proposes a fast and accurate ECG authentication method utilizing only two stages, i.e., ECG beat detection and classification. By minimizing time-consuming ECG signal pre-processing and feature extraction, our proposed two-stage algorithm can authenticate the ECG signal in around 660 μs. Hamilton’s method was used for ECG beat detection, while the Residual Depthwise Separable Convolutional Neural Network (RDSCNN) algorithm was used for classification. It was found that between six and eight ECG beats were required for authentication on different databases. Results showed that our proposed algorithm achieved 100% accuracy when evaluated with 48 patients in the MIT-BIH database and 90 people in the ECG-ID database. These results showed that our proposed algorithm outperformed other state-of-the-art methods.
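
The abstract states that six to eight beats are needed per decision, but not how per-beat classifier outputs become one authentication decision. The majority vote below is therefore an assumed aggregation rule for illustration only; the subject identifiers (`p01`, `p07`) are hypothetical.

```python
from collections import Counter

def authenticate(beat_predictions, claimed_id, min_beats=6):
    """Aggregate per-beat classifier outputs into a single decision.
    Majority voting is an assumed rule, not the paper's specified one."""
    if len(beat_predictions) < min_beats:
        return False  # too few beats for a reliable decision
    winner, votes = Counter(beat_predictions).most_common(1)[0]
    return winner == claimed_id and votes > len(beat_predictions) // 2

ok = authenticate(["p01"] * 5 + ["p07"], claimed_id="p01")        # enough beats, majority p01
rejected = authenticate(["p01", "p07", "p07"], claimed_id="p01")  # too few beats
```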


2021 ◽  
Vol 2021 ◽  
pp. 1-7
Author(s):  
Kedong Zhang

The music style classification technology can add style tags to music based on its content. It is critical for tasks such as the efficient organization, retrieval, and recommendation of music resources. Traditional music style classification methods use a wide range of acoustic characteristics. The design of these characteristics requires musical knowledge, and the characteristics suited to different classification tasks are not always consistent. The rapid development of neural networks and big data technology has provided a new way to better solve the problem of music style classification. This paper proposes a novel method based on music feature extraction and deep neural networks to address the problem of low accuracy in traditional methods. The music style classification algorithm extracts two types of features as classification characteristics for music styles: timbre and melody features. Because a classification method based on a convolutional neural network alone ignores the audio’s temporal structure, we propose a music classification module that combines one-dimensional convolution with a bidirectional recurrent neural network. To better represent the music style properties, different weights are applied to the outputs. Comparison and ablation experiments were conducted on the GTZAN dataset. The test results outperformed a number of other well-known methods, and the rating performance was competitive.
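
The abstract mentions applying different weights to the network outputs. A minimal sketch of such weighted score fusion follows, assuming two branch outputs (convolutional and recurrent) per genre; the weights, scores, and genre list are hypothetical.

```python
def fuse_scores(cnn_scores, rnn_scores, w_cnn=0.4, w_rnn=0.6):
    """Weighted fusion of per-genre scores from two branches (weights assumed)."""
    return [w_cnn * c + w_rnn * r for c, r in zip(cnn_scores, rnn_scores)]

def predict_genre(genres, cnn_scores, rnn_scores):
    """Pick the genre with the highest fused score."""
    fused = fuse_scores(cnn_scores, rnn_scores)
    return genres[max(range(len(fused)), key=fused.__getitem__)]

genres = ["blues", "jazz", "rock"]
genre = predict_genre(genres, [0.2, 0.5, 0.3], [0.1, 0.2, 0.7])
```

With these toy scores the recurrent branch's stronger "rock" evidence dominates the fused decision.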


1979 ◽  
Author(s):  
W. Nieuwenhuizen ◽  
I. A. M. van Ruijven-Vermeer ◽  
F. Haverkate ◽  
G. Timan

A novel method will be described for the preparation and purification of fibrin(ogen) degradation products in high yields. The high yields are due to two factors. On the one hand, an improved preparation method in which the size heterogeneity of the degradation products D is strongly reduced by plasmin digestion at well-controlled calcium concentrations: at calcium concentrations of 2 mM, exclusively D fragments of M.W. = 93,000 (D cate) were formed; in the presence of 10 mM EGTA, only fragments of M.W. = 80,000 (D EGTA) were formed, as described. On the other hand, a new purification method, which includes Sephadex G-200 filtration to purify the D:E complexes and separation of the D and E fragments by a 16 hrs preparative isoelectric focussing. The latter step gives a complete separation of D fragments (at pH = 6.5) and E fragments (at pH = 4.5) without any overlap, thus allowing a nearly 100% recovery in this step. The overall recoveries are around 75% of the theoretical values. These recoveries are superior to those of existing procedures. Moreover, the conditions of this purification procedure are very mild and probably do not affect the native configuration of the products. The amino-terminal amino acids of human D cate, D EGTA and D-dimer are identical, i.e. val, asx and ser; in the rat, gly, asx and ser were found. E1% for rat D cate = 17.8, for rat D EGTA = 16.2 and for rat D-dimer = 18.3; for the corresponding human fragments, these values were all 20.0 ± 0.2.


Author(s):  
J Ph Guillet ◽  
E Pilon ◽  
Y Shimizu ◽  
M S Zidi

Abstract This article is the first of a series of three presenting an alternative method of computing the one-loop scalar integrals. This novel method enjoys a couple of interesting features as compared with the method closely following ’t Hooft and Veltman adopted previously. It directly proceeds in terms of the quantities driving algebraic reduction methods. It applies to the three-point functions and, in a similar way, to the four-point functions. It also extends to complex masses without much complication. Lastly, it extends to kinematics more general than that of the physical, e.g., collider processes relevant at one loop. This last feature may be useful when considering the application of this method beyond one loop using generalized one-loop integrals as building blocks.
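
For orientation, the one-loop three-point scalar integral discussed here is conventionally written in dimensional regularization as (normalization and the $+i\varepsilon$ prescription vary between references):

```latex
I_3(p_1, p_2; m_1, m_2, m_3) =
  \int \frac{d^n k}{i\pi^{n/2}}\,
  \frac{1}{\bigl(k^2 - m_1^2\bigr)\bigl((k+p_1)^2 - m_2^2\bigr)\bigl((k+p_1+p_2)^2 - m_3^2\bigr)}
```

The four-point function adds one more propagator factor of the same form; the complex-mass case replaces each $m_i^2$ by a complex pole mass.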


2011 ◽  
Vol 464 ◽  
pp. 38-42 ◽  
Author(s):  
Ping Ye ◽  
Gui Rong Weng

This paper proposed a novel method for leaf classification and recognition. In the method, the moment invariants and fractal dimension were regarded as the characteristic parameters of the plant leaf. In order to extract representative characteristic parameters, the leaf images were pretreated, including RGB-to-gray conversion, image binarization and leafstalk removal. The extracted leaf characteristic parameters were further utilized as training sets to train the neural networks. The proposed method was shown to achieve a recognition rate of about 92% for most of the testing leaf samples.
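
One of the two feature types named above, the fractal dimension, is commonly estimated by box counting: cover the binary leaf silhouette with boxes of decreasing size and fit the slope of log N(s) against log(1/s). A minimal stdlib sketch (the box sizes and test image are illustrative, not the paper's setup):

```python
import math

def box_counting_dimension(img, sizes=(1, 2, 4, 8)):
    """Estimate the fractal dimension of a binary image (list of 0/1 rows)
    by box counting: the slope of log N(s) versus log(1/s)."""
    h, w = len(img), len(img[0])
    xs, ys = [], []
    for s in sizes:
        # Count boxes of side s that contain at least one foreground pixel.
        boxes = sum(
            any(img[y][x]
                for y in range(by, min(by + s, h))
                for x in range(bx, min(bx + s, w)))
            for by in range(0, h, s) for bx in range(0, w, s))
        xs.append(math.log(1.0 / s))
        ys.append(math.log(boxes))
    # Least-squares slope of ys against xs.
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

# Sanity check: a filled square has N(s) = (16/s)^2, so the slope is 2.
square = [[1] * 16 for _ in range(16)]
dim = box_counting_dimension(square)
```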


2021 ◽  
Vol 11 (8) ◽  
pp. 3563
Author(s):  
Martin Klimo ◽  
Peter Lukáč ◽  
Peter Tarábek

One-hot encoding is the prevalent method used in neural networks to represent multi-class categorical data. Its success stems from its ease of use and its interpretability as a probability distribution when accompanied by a softmax activation function. However, one-hot encoding leads to very high-dimensional vector representations when the categorical data’s cardinality is high. From the coding-theory perspective, the Hamming distance between any two one-hot codewords is two, which provides no error-detection or error-correction capability. Binary coding provides more possibilities for encoding categorical data into the output codes, which mitigates the limitations of one-hot encoding mentioned above. We propose a novel method based on Zadeh fuzzy logic to train binary output codes holistically. We study linear block codes for their ability to separate class information from the checksum part of the codeword, showing that they can not only detect recognition errors by calculating a non-zero syndrome, but also evaluate the truth-value of the decision. Experimental results show that the proposed approach achieves results similar to one-hot encoding with a softmax function in terms of accuracy, reliability, and out-of-distribution performance. This suggests a good foundation for future applications, mainly classification tasks with a high number of classes.
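
The two coding-theory facts above are easy to verify directly: distinct one-hot codewords always sit at Hamming distance two, while a linear block code detects errors through a non-zero syndrome. The sketch below uses the (7,4) Hamming code as a concrete linear block code; it illustrates the mechanism, not the paper's fuzzy-logic training.

```python
def hamming_distance(a, b):
    """Number of positions in which two codewords differ."""
    return sum(x != y for x, y in zip(a, b))

# Any two distinct one-hot codewords differ in exactly two positions,
# so a single bit flip cannot even be detected.
one_hot_3 = [1, 0, 0]
one_hot_7 = [0, 1, 0]

# Parity-check matrix H of the (7,4) Hamming code (columns are the
# binary representations of 1..7, least significant bit in the first row).
H = [[1, 0, 1, 0, 1, 0, 1],
     [0, 1, 1, 0, 0, 1, 1],
     [0, 0, 0, 1, 1, 1, 1]]

def syndrome(word):
    """Zero syndrome: valid codeword; non-zero syndrome: error detected."""
    return [sum(h * b for h, b in zip(row, word)) % 2 for row in H]

codeword = [0, 0, 0, 0, 0, 0, 0]   # the all-zero word is always a codeword
corrupted = codeword[:]
corrupted[4] = 1                   # flip one bit (position 5, 1-based)
```

For this H, the non-zero syndrome even reads off the flipped position in binary, which is the error-correcting capability that one-hot encoding lacks.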


2021 ◽  
Vol 29 ◽  
pp. 475-486
Author(s):  
Bohdan Petryshak ◽  
Illia Kachko ◽  
Mykola Maksymenko ◽  
Oles Dobosevych

BACKGROUND: Premature ventricular contraction (PVC) is among the most frequently occurring types of arrhythmias. Existing approaches for automated PVC identification suffer from a range of disadvantages related to hand-crafted features and benchmarking on datasets with a tiny sample of PVC beats. OBJECTIVE: The main objective is to address the drawbacks described above in the proposed framework, which takes a raw ECG signal as an input and localizes R peaks of the PVC beats. METHODS: Our method consists of two neural networks. First, an encoder-decoder architecture trained on a PVC-rich dataset localizes the R peak of both Normal and anomalous heartbeats. Given the R peak positions, our CardioIncNet model then delineates healthy versus PVC beats. RESULTS: We have performed an extensive evaluation of our pipeline with both single- and cross-dataset paradigms on three public datasets. Our approach yields F1-measures above 0.99 and 0.979 in the single- and cross-dataset paradigms for the R peak localization task, and F1 scores above 0.96 and 0.85 for the PVC beat classification task. CONCLUSIONS: We have shown a method that provides robust performance beyond beats of Normal nature and clearly outperforms classical algorithms in both single- and cross-dataset evaluation. We provide a GitHub repository for the reproduction of the results.
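
All results above are reported as F1-measures. For readers unfamiliar with the metric, a minimal sketch of its computation on toy beat labels (not the paper's data):

```python
def f1_score(y_true, y_pred, positive="PVC"):
    """F1 = harmonic mean of precision and recall for the positive class."""
    tp = sum(t == positive and p == positive for t, p in zip(y_true, y_pred))
    fp = sum(t != positive and p == positive for t, p in zip(y_true, y_pred))
    fn = sum(t == positive and p != positive for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)

y_true = ["PVC", "Normal", "PVC", "Normal"]
y_pred = ["PVC", "Normal", "Normal", "Normal"]
f1 = f1_score(y_true, y_pred)   # precision 1.0, recall 0.5 -> F1 = 2/3
```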

