An innovative method for cardiovascular disease detection based on nonlinear geometric features and feature reduction combination

2021 ◽  
Vol 15 (1) ◽  
pp. 45-57
Author(s):  
Abdolkarim Saeedi ◽  
Mohammad Karimi Moridani ◽  
Alireza Azizi

Cardiovascular disease is arguably the leading cause of death in the world. Heart functionality can be measured in various ways. Heart sounds are often inspected in these examinations, as they can reveal a variety of heart-related diseases. This study tackles the lack of reliable models and the high training times reported on a publicly available dataset. The heart sound set is provided by Physionet and consists of 3153 recordings, from each of which a fixed five-second segment was used to evaluate the developed method. In this work, we propose a novel method based on a feature reduction combination using a Genetic Algorithm (GA) and Principal Component Analysis (PCA). The authors identify eight dominant features for heart sound classification: the mean duration of the systole interval, the standard deviation of the diastole interval, the absolute amplitude ratios of diastole to S2, S1 to systole, and S1 to diastole, zero crossings, Centroid-to-Centroid distance (CCdis), and the mean power in the 95–295 Hz range. The reduced features are then used with two straightforward classification algorithms: weighted k-NN, operating in the lower-dimensional feature space, and a Linear SVM, which uses a linear combination of all features to create a robust model. The approach achieves up to 98.15% accuracy, the best reported result for heart sound classification on this widely used dataset. According to the experiments in this study, the developed method merits further exploration for real-world heart sound assessment.
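The core pipeline described above, PCA-based feature reduction followed by a distance-weighted k-NN, can be sketched in a few lines of NumPy. The data, dimensions, and labels below are synthetic stand-ins for the eight heart-sound features, not the paper's data or its GA selection step:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the eight heart-sound features
# (systole/diastole statistics, amplitude ratios, zero crossings, ...).
X = rng.normal(size=(200, 8))
X[:, 0] *= 3.0                        # make one feature dominate the variance
y = (X[:, 0] > 0).astype(int)         # hypothetical normal/abnormal labels

def pca(X, k):
    """Project X onto its top-k principal components."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

def weighted_knn_predict(X_train, y_train, X_test, k=5):
    """Distance-weighted k-NN: closer neighbours cast larger votes."""
    preds = []
    for x in X_test:
        d = np.linalg.norm(X_train - x, axis=1)
        idx = np.argsort(d)[:k]
        w = 1.0 / (d[idx] + 1e-9)
        votes = np.bincount(y_train[idx], weights=w, minlength=2)
        preds.append(int(np.argmax(votes)))
    return np.array(preds)

Z = pca(X, k=4)                       # reduced feature space
pred = weighted_knn_predict(Z[:150], y[:150], Z[150:], k=5)
acc = float((pred == y[150:]).mean())
```

Because the informative feature carries most of the variance, PCA keeps it among the leading components and the weighted k-NN classifies the held-out points well above chance.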

Open Physics ◽  
2019 ◽  
Vol 17 (1) ◽  
pp. 489-496 ◽  
Author(s):  
Agnieszka Wosiak

Abstract Due to the growing problem of heart disease, computer-aided improvement of its diagnosis is of great importance. One of the most common heart diseases is cardiac arrhythmia. It is usually diagnosed by measuring heart activity with an electrocardiograph (ECG) and collecting the data as multidimensional medical datasets. However, their storage, analysis, and knowledge extraction become highly complex issues. Feature reduction not only saves storage and computing resources, but primarily makes the process of data interpretation more comprehensible. In the paper the new igPCA (in-group Principal Component Analysis) method for feature reduction is proposed. We assume that the set of attributes can be split into subgroups of similar characteristics, each of which is then subjected to principal component analysis. The presented method transforms the feature space into a lower dimension and gives insight into the intrinsic structure of the data. The method has been verified by experiments on a dataset of ECG recordings. The results have been evaluated with respect to the number of retained features and the classification accuracy of arrhythmia types. The experiments showed the advantage of the presented method over the base PCA approach.
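The in-group idea, running PCA separately inside each subgroup of similar attributes and concatenating the leading components, can be sketched as follows. The grouping and sizes here are illustrative assumptions, not the paper's actual ECG attribute groups:

```python
import numpy as np

def group_pca(X, groups, k_per_group=1):
    """in-group PCA sketch: run PCA separately inside each attribute
    subgroup and concatenate the leading components of each group."""
    parts = []
    for cols in groups:
        Xg = X[:, cols] - X[:, cols].mean(axis=0)
        _, _, Vt = np.linalg.svd(Xg, full_matrices=False)
        parts.append(Xg @ Vt[:k_per_group].T)
    return np.hstack(parts)

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 6))
# Hypothetical grouping of correlated ECG-derived attributes.
groups = [[0, 1, 2], [3, 4, 5]]
Z = group_pca(X, groups, k_per_group=1)   # 6 features -> 2
```

Unlike plain PCA, each retained component is tied to one attribute subgroup, which is what gives the method its interpretability.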


2019 ◽  
Vol 70 (4) ◽  
pp. 259-272
Author(s):  
Mohammad Adiban ◽  
Bagher BabaAli ◽  
Saeedreza Shehnepoor

Abstract Cardiovascular Disease (CVD) is considered one of the principal causes of death in the world. Over recent years, this field of study has attracted researchers' attention to the investigation of heart sound patterns for disease diagnosis. In this study, an approach is proposed for normal/abnormal heart sound classification on the Physionet Challenge 2016 dataset. For the first time, a fixed-length feature vector, called an i-vector, is extracted from each heart sound using Mel-Frequency Cepstral Coefficient (MFCC) features. Afterwards, a Principal Component Analysis (PCA) transform and a Variational Autoencoder (VAE) are applied to the i-vector to achieve dimension reduction. Eventually, the reduced vector is fed to Gaussian Mixture Models (GMMs) and a Support Vector Machine (SVM) for classification. Experimental results demonstrate that the proposed method achieves a performance improvement of 16% in Modified Accuracy (MAcc) over the baseline system on the Physionet 2016 dataset.
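The back half of such a pipeline, PCA reduction of fixed-length vectors followed by a simple classifier, can be sketched with NumPy. The vectors below are random stand-ins for i-vectors (the dimensions are assumptions), and cosine scoring against class means is substituted for the paper's GMM/SVM back end, since cosine scoring is a common i-vector back end:

```python
import numpy as np

rng = np.random.default_rng(2)

# Stand-ins for 400-dim i-vectors extracted from MFCCs of heart sounds
# (dimensions and data are illustrative, not from the paper).
normal   = rng.normal(size=(60, 400)) + 0.5
abnormal = rng.normal(size=(60, 400)) - 0.5
X = np.vstack([normal, abnormal])
y = np.array([0] * 60 + [1] * 60)

def pca_fit(X, k):
    """Return the mean and top-k principal directions of X."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

mean, components = pca_fit(X, k=20)
Z = (X - mean) @ components.T          # reduced i-vectors

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

# Score each reduced vector against the two class means.
m0 = Z[y == 0].mean(axis=0)
m1 = Z[y == 1].mean(axis=0)
pred = np.array([0 if cosine(z, m0) > cosine(z, m1) else 1 for z in Z])
acc = float((pred == y).mean())
```

On this easily separable toy data the reduced vectors classify almost perfectly; the point is only the shape of the pipeline, not the numbers.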


2018 ◽  
Vol 246 ◽  
pp. 03046
Author(s):  
Yongfeng Qi ◽  
Xujie Yang

Hyperspectral image classification is an important problem that has been actively pursued in recent years, with applications in many aspects of life. Hyperspectral images (HSIs) offer only a limited number of labeled high-dimensional training samples, which limits the performance of classification methods that rely on feature extraction or feature reduction. In this paper, we propose a supervised method based on the Principal Component Analysis network (PCANet) and a linear SVM for HSI classification. We use PCANet to learn the characteristic features, and verify the influence of its key parameters on performance by varying them across experiments. We carry out extensive experiments on the Indian Pines dataset. The results demonstrate that our method significantly outperforms PCA+KNN methods, reaching a recognition rate of 94.29%. Finally, we compare the results of the same algorithm on different datasets.
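The first stage of PCANet learns its convolution filters as the principal components of mean-removed image patches. A minimal sketch of that stage follows, using a tiny random image as a stand-in for a hyperspectral band (real HSIs have on the order of 200 bands; all sizes here are assumptions):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy stand-in for one band of a hyperspectral image.
img = rng.normal(size=(16, 16))

def pca_filters(img, patch=5, n_filters=4):
    """First PCANet stage: collect mean-removed patches and use the
    leading principal directions as convolution filters."""
    h, w = img.shape
    patches = []
    for i in range(h - patch + 1):
        for j in range(w - patch + 1):
            p = img[i:i + patch, j:j + patch].ravel()
            patches.append(p - p.mean())
    P = np.array(patches)
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    return Vt[:n_filters].reshape(n_filters, patch, patch)

def convolve_valid(img, f):
    """Naive 'valid'-mode 2-D correlation with one filter."""
    ph, pw = f.shape
    out = np.empty((img.shape[0] - ph + 1, img.shape[1] - pw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(img[i:i + ph, j:j + pw] * f)
    return out

filters = pca_filters(img, patch=5, n_filters=4)
maps = np.stack([convolve_valid(img, f) for f in filters])
```

The full PCANet stacks a second such stage plus binary hashing and block histograms before the linear SVM; only the filter-learning step is shown here.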


2021 ◽  
Author(s):  
George Zhou ◽  
Yunchan Chen ◽  
Candace Chien

Abstract Background: The application of machine learning to cardiac auscultation has the potential to improve the accuracy and efficiency of both routine and point-of-care screenings. The use of Convolutional Neural Networks (CNNs) on heart sound spectrograms in particular has defined state-of-the-art performance. However, the relative paucity of patient data remains a significant barrier to creating models that can adapt to the wide range of between-subject variability. To that end, we examined a CNN model's performance on automated heart sound classification before and after various forms of data augmentation, aiming to identify the most effective augmentation methods for cardiac spectrogram analysis.

Results: We built a standard CNN model to classify cardiac sound recordings as either normal or abnormal. The baseline control model achieved an ROC AUC of 0.945±0.016. Among the data augmentation techniques explored, horizontal flipping of the spectrogram image improved model performance the most, with an ROC AUC of 0.957±0.009. Principal component analysis (PCA) color augmentation and perturbations of saturation-value (SV) on the hue-saturation-value (HSV) color scale achieved ROC AUCs of 0.949±0.014 and 0.946±0.019, respectively. Time and frequency masking resulted in an ROC AUC of 0.948±0.012. Pitch shifting, time stretching and compressing, noise injection, vertical flipping, and applying random color filters all negatively impacted model performance.

Conclusion: Data augmentation can improve classification accuracy by expanding and diversifying the dataset, which protects against overfitting to random variance. However, data augmentation is necessarily domain-specific. For example, methods like noise injection have found success in other areas of automated sound classification, but in the context of cardiac sound analysis, noise injection can mimic the presence of murmurs and worsen model performance. Thus, care should be taken to use clinically appropriate forms of data augmentation and avoid degrading model performance.
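Two of the augmentations evaluated above, horizontal (time-axis) flipping and time/frequency masking, are simple array operations on the spectrogram. A minimal sketch on a random stand-in spectrogram (sizes are assumptions, not the study's input shape):

```python
import numpy as np

rng = np.random.default_rng(4)

# Toy mel-spectrogram stand-in: (frequency bins, time frames).
spec = rng.random((64, 128))

# Horizontal flip: reverse the time axis of the spectrogram image,
# the augmentation that helped most in the study above.
flipped = spec[:, ::-1]

# Time/frequency masking: zero out a random band along one axis.
def mask(spec, axis, width, rng):
    out = spec.copy()
    start = int(rng.integers(0, spec.shape[axis] - width))
    if axis == 0:
        out[start:start + width, :] = 0.0   # frequency mask
    else:
        out[:, start:start + width] = 0.0   # time mask
    return out

masked = mask(spec, axis=1, width=16, rng=rng)
```

Note that audio-domain augmentations (pitch shift, noise injection) act on the waveform before the spectrogram is computed, whereas these two act on the spectrogram image itself.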


2021 ◽  
Vol 13 (3) ◽  
pp. 526
Author(s):  
Shengliang Pu ◽  
Yuanfeng Wu ◽  
Xu Sun ◽  
Xiaotong Sun

The nascent field of graph representation learning has shown superiority in handling graph data. Compared to conventional convolutional neural networks, graph-based deep learning has the advantages of illustrating class boundaries and modeling feature relationships. For hyperspectral image (HSI) classification, the priority problem is how to convert hyperspectral data from regular grids into irregular graph domains. In this regard, we present a novel method that performs localized graph convolutional filtering on HSIs based on spectral graph theory. First, we conducted principal component analysis (PCA) preprocessing to create localized hyperspectral data cubes with unsupervised feature reduction. These feature cubes, combined with localized adjacency matrices, were fed into a popular graph convolution network in a standard supervised learning paradigm. Finally, we succeeded in analyzing diversified land covers by considering local graph structure with graph convolutional filtering. Experiments on real hyperspectral datasets demonstrated that the presented method offers promising classification performance compared with other popular competitors.
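The three steps above, PCA spectral reduction, a localized adjacency matrix, and graph convolutional filtering, can be sketched on a toy pixel grid. The sizes, the 4-neighbour adjacency, and the random (untrained) layer weights are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy HSI patch: 4x4 pixels, 30 spectral bands (illustrative sizes).
h, w, bands = 4, 4, 30
cube = rng.normal(size=(h * w, bands))

# 1) PCA preprocessing: unsupervised spectral feature reduction.
Xc = cube - cube.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
X = Xc @ Vt[:8].T                      # 8 spectral features per pixel

# 2) Localized adjacency: connect 4-neighbour pixels on the grid.
A = np.zeros((h * w, h * w))
for i in range(h):
    for j in range(w):
        for di, dj in ((0, 1), (1, 0)):
            ni, nj = i + di, j + dj
            if ni < h and nj < w:
                A[i * w + j, ni * w + nj] = A[ni * w + nj, i * w + j] = 1.0

# 3) One graph-convolution layer: H = relu(D^-1/2 (A+I) D^-1/2 X W).
A_hat = A + np.eye(h * w)
d = A_hat.sum(axis=1)
A_norm = A_hat / np.sqrt(np.outer(d, d))   # symmetric normalization
W = 0.1 * rng.normal(size=(8, 16))         # random, untrained weights
H = np.maximum(A_norm @ X @ W, 0.0)        # node features after one layer
```

Each output row mixes a pixel's reduced spectrum with those of its grid neighbours, which is the "localized graph convolutional filtering" the abstract refers to.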


2020 ◽  
pp. 1-11
Author(s):  
Mayamin Hamid Raha ◽  
Tonmoay Deb ◽  
Mahieyin Rahmun ◽  
Tim Chen

Face recognition is among the most widely used image analysis applications, and reduction of dimensionality is an essential requirement for it. The curse of dimensionality arises because, as dimensionality increases, sample density decreases exponentially. Dimensionality reduction is the process of lowering the dimensionality of the feature space by obtaining a set of principal features. The purpose of this manuscript is to present a comparative study of Principal Component Analysis (PCA) and Linear Discriminant Analysis (LDA), two of the most popular appearance-based projection methods for face recognition. PCA creates a low-dimensional data representation that captures as much data variance as possible, while LDA finds the vectors that best discriminate between classes in the underlying space. The main idea of PCA is to transform the high-dimensional input space into the subspace that displays the maximum variance. Traditional LDA features are obtained by maximizing between-class separation while minimizing within-class distance.
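The contrast between the two projections can be shown on synthetic two-class data (a stand-in for face features; the data is illustrative). PCA picks the direction of maximum total variance regardless of labels, while Fisher LDA picks the direction that best separates the classes, w ∝ Sw⁻¹(m1 − m0) with pooled within-class scatter Sw:

```python
import numpy as np

rng = np.random.default_rng(6)

# Two synthetic classes: large spread along x, class means split along y.
c0 = rng.normal([0.0, 0.0], [3.0, 0.3], size=(100, 2))
c1 = rng.normal([1.0, 1.0], [3.0, 0.3], size=(100, 2))
X = np.vstack([c0, c1])

# PCA: direction of maximum total variance, labels ignored.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
pca_dir = Vt[0]

# Fisher LDA: w ∝ Sw^-1 (m1 - m0), pooled within-class scatter Sw.
m0, m1 = c0.mean(axis=0), c1.mean(axis=0)
Sw = np.cov(c0.T) * (len(c0) - 1) + np.cov(c1.T) * (len(c1) - 1)
lda_dir = np.linalg.solve(Sw, m1 - m0)
lda_dir /= np.linalg.norm(lda_dir)
```

Here PCA aligns with the high-variance x-axis, which mixes the classes, while LDA aligns with the low-variance y-axis, along which the classes actually separate, illustrating why the two methods can disagree for recognition tasks.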


Author(s):  
S. Shanawaz Basha ◽  
N. Musrat Sultana

Biometrics refers to the automatic recognition of individuals based on their physiological and/or behavioral characteristics, such as faces, fingerprints, iris, and gait. In this paper, we focus on the application of a fingerprint recognition system. Spectral minutiae fingerprint recognition is a method to represent a minutiae set as a fixed-length feature vector, which is invariant to translation, and in which rotation and scaling become translations, so that they can be easily compensated for. Based on the spectral minutiae features, this paper introduces two feature reduction algorithms, Column Principal Component Analysis and Line Discrete Fourier Transform, which can efficiently compress the template size with a reduction rate of 94%. With reduced features, we can also achieve a fast minutiae-based matching algorithm. This paper presents the performance of the spectral minutiae fingerprint recognition system. The fast operation renders the system suitable for large-scale fingerprint identification, significantly reducing matching time in settings such as police patrolling and airports. The spectral minutiae representation tends to significantly reduce the false acceptance rate with only a marginal increase in the false rejection rate.
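The Column-PCA idea, treating the columns of every spectral template as samples, learning a shared low-dimensional column basis, and storing only the projection coefficients, can be sketched as follows. The template sizes and random data are stand-ins (the real representation is the Fourier spectrum of a minutiae set), chosen so the compression ratio comes out near the quoted 94%:

```python
import numpy as np

rng = np.random.default_rng(7)

# Stand-in spectral-minutiae templates: 128x256 magnitude spectra.
templates = rng.random((20, 128, 256))

def column_pca_fit(templates, k=8):
    """Column-PCA sketch: treat every column of every template as a
    sample and learn a shared k-dimensional column basis."""
    cols = templates.transpose(0, 2, 1).reshape(-1, templates.shape[1])
    mean = cols.mean(axis=0)
    _, _, Vt = np.linalg.svd(cols - mean, full_matrices=False)
    return mean, Vt[:k]

def compress(template, mean, basis):
    """Store only k coefficients per column instead of 128 values."""
    return (template.T - mean) @ basis.T       # shape (256, k)

mean, basis = column_pca_fit(templates, k=8)
small = compress(templates[0], mean, basis)
rate = 1.0 - small.size / templates[0].size    # fraction of storage saved
```

With 8 of 128 column dimensions kept, each template shrinks from 128x256 values to 256x8 coefficients, a 93.75% reduction, in line with the rate the abstract reports.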


Author(s):  
Roilhi Frajo Ibarra-Hernández ◽  
Nancy Bertin ◽  
Miguel Angel Alonso-Arévalo ◽  
Hugo Armando Guillén-Ramírez
