Short-segment Heart Sound Classification Using an Ensemble of Deep Convolutional Neural Networks

Author(s): Fuad Noman, Chee-Ming Ting, Sh-Hussain Salleh, Hernando Ombao
2019, Vol 9 (8), pp. 1692-1704
Author(s): Wei Chen, Qiang Sun, Jue Wang, Huiqun Wu, Hui Zhou, ...

Most current automated phonocardiogram (PCG) classification methods rely on PCG segmentation: the signal is first segmented into its fundamental heart sounds, and features are then extracted from the segments for computer-aided auscultation or heart sound classification. However, accurate segmentation of the fundamental heart sounds depends greatly on the quality of the recordings, and methods that rely heavily on a segmentation algorithm carry a considerable additional computational burden. To address these two issues, we developed a novel approach that classifies normal and abnormal heart sounds from un-segmented PCG signals. A deep Convolutional Neural Network (DCNN) is proposed for recognizing normal and abnormal cardiac conditions. In the proposed method, one-dimensional heart sound signals are first converted into two-dimensional feature maps with three channels, each representing Mel-frequency spectral coefficient (MFSC) features: static, delta, and delta-delta. These artificial images are then fed to the proposed DCNN for training and evaluation, and a majority-vote strategy is applied to obtain the final category of each PCG signal. Sensitivity (Se), Specificity (Sp), and Mean accuracy (MAcc) are used as the evaluation metrics. Results: Experiments demonstrated that our approach achieves a significant improvement, with Se, Sp, and MAcc of 92.73%, 96.90%, and 94.81%, respectively. The proposed method improves MAcc by 5.63% compared with the best result in the CinC Challenge 2016, and shows better robustness when applied to long heart sound recordings. The proposed DCNN-based method achieves the best accuracy in recognizing normal and abnormal heart sounds without any segmentation preprocessing, and significantly improves classification performance over current state-of-the-art algorithms.
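The abstract's feature pipeline (static MFSC plus delta and delta-delta channels, followed by a majority vote over segment-level predictions) can be sketched in numpy. This is a minimal illustration, not the authors' implementation: the static MFSC matrix is assumed to be already computed (frames × coefficients), and the delta formula used here is the standard regression-based one common in speech processing.

```python
import numpy as np

def delta(feat, width=2):
    """First-order regression deltas along the time axis.

    feat: (frames, coeffs) array of static MFSC features.
    Uses d[t] = sum_{n=1..N} n * (feat[t+n] - feat[t-n]) / (2 * sum_{n=1..N} n^2)
    with edge padding at the boundaries.
    """
    denom = 2 * sum(n * n for n in range(1, width + 1))
    padded = np.pad(feat, ((width, width), (0, 0)), mode="edge")
    n_frames = feat.shape[0]
    d = np.zeros_like(feat, dtype=float)
    for n in range(1, width + 1):
        d += n * (padded[width + n : width + n + n_frames]
                  - padded[width - n : width - n + n_frames])
    return d / denom

def three_channel_mfsc(static):
    """Stack static, delta, and delta-delta MFSC into a 3-channel 'image'."""
    d1 = delta(static)
    d2 = delta(d1)
    return np.stack([static, d1, d2], axis=-1)  # (frames, coeffs, 3)

def majority_vote(segment_preds):
    """Final recording label: the most frequent per-segment prediction."""
    vals, counts = np.unique(np.asarray(segment_preds), return_counts=True)
    return vals[np.argmax(counts)]
```

The resulting (frames, coeffs, 3) arrays play the role of RGB images for the DCNN; `majority_vote` aggregates the per-segment predictions into a single recording-level label.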


2019, Vol 25 (3), pp. 71-76
Author(s): Grega Vrbancic, Iztok Jr. Fister, Vili Podgorelec

The analysis of non-stationary signals commonly includes a signal segmentation step, dividing such signals into smaller time series that can be considered stationary and are thus easier to process. Most methods for signal segmentation combine complex filtering, transformation, and feature extraction techniques with various kinds of classifiers, which, especially in the field of biomedical signals, are prone to poor performance when dealing with signals obtained in highly variable environments. To address these problems, we designed a new method for the segmentation of heart sound signals using deep convolutional neural networks, which works in a straightforward, automatic manner and does not require any complex pre-processing. The proposed method was tested on a set of heartbeat sound clips collected by non-experts with mobile devices in highly variable environments with excessive background noise. The obtained results show that the proposed method outperforms other methods that rely on domain knowledge for the analysis of the signals. Based on these encouraging experimental results, we believe the proposed method provides a solid basis for the further development of automatic segmentation of highly variable signals using deep neural networks.
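The abstract does not detail the preprocessing, but a segmentation pipeline like this one typically begins by slicing the raw recording into fixed-length, overlapping frames that a CNN then labels. The sketch below shows that assumed framing step only; the frame length and hop size are illustrative, not values from the paper.

```python
import numpy as np

def frame_signal(signal, frame_len, hop):
    """Split a 1-D signal into overlapping fixed-length frames.

    Returns an array of shape (n_frames, frame_len); trailing samples
    that do not fill a whole frame are dropped.
    """
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    return np.stack([signal[i * hop : i * hop + frame_len]
                     for i in range(n_frames)])
```

Each frame can then be fed to the network independently, so no hand-crafted filtering or feature extraction is needed before classification.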


2021
Author(s): George Zhou, Yunchan Chen, Candace Chien

Abstract

Background: The application of machine learning to cardiac auscultation has the potential to improve the accuracy and efficiency of both routine and point-of-care screenings. The use of Convolutional Neural Networks (CNNs) on heart sound spectrograms in particular has defined state-of-the-art performance. However, the relative paucity of patient data remains a significant barrier to creating models that can adapt to the wide range of between-subject variability. To that end, we examined a CNN model's performance on automated heart sound classification before and after various forms of data augmentation, aiming to identify the optimal augmentation methods for cardiac spectrogram analysis.

Results: We built a standard CNN model to classify cardiac sound recordings as either normal or abnormal. The baseline control model achieved an ROC AUC of 0.945±0.016. Among the data augmentation techniques explored, horizontal flipping of the spectrogram image improved model performance the most, with an ROC AUC of 0.957±0.009. Principal component analysis (PCA) color augmentation and perturbations of the saturation-value (SV) channels of the hue-saturation-value (HSV) color scale achieved ROC AUCs of 0.949±0.014 and 0.946±0.019, respectively. Time and frequency masking resulted in an ROC AUC of 0.948±0.012. Pitch shifting, time stretching and compressing, noise injection, vertical flipping, and applying random color filters all negatively impacted model performance.

Conclusion: Data augmentation can improve classification accuracy by expanding and diversifying the dataset, which protects against overfitting to random variance. However, data augmentation is necessarily domain-specific. For example, methods like noise injection have found success in other areas of automated sound classification, but in the context of cardiac sound analysis, noise injection can mimic the presence of murmurs and worsen model performance. Thus, care should be taken to choose clinically appropriate forms of data augmentation.
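The two augmentations the abstract reports as helpful, horizontal flipping and time/frequency masking, are simple array operations on a spectrogram. The sketch below assumes a (freq × time) magnitude spectrogram and SpecAugment-style masking; the mask widths are illustrative, not the paper's settings.

```python
import numpy as np

rng = np.random.default_rng(0)

def horizontal_flip(spec):
    """Reverse the spectrogram along the time axis (axis 1 of freq x time)."""
    return spec[:, ::-1]

def time_mask(spec, max_width=10):
    """Zero out a random contiguous block of time frames."""
    spec = spec.copy()
    n_frames = spec.shape[1]
    width = int(rng.integers(1, max_width + 1))
    start = int(rng.integers(0, max(1, n_frames - width + 1)))
    spec[:, start:start + width] = 0.0
    return spec

def freq_mask(spec, max_width=8):
    """Zero out a random contiguous block of frequency bins."""
    spec = spec.copy()
    n_bins = spec.shape[0]
    width = int(rng.integers(1, max_width + 1))
    start = int(rng.integers(0, max(1, n_bins - width + 1)))
    spec[start:start + width, :] = 0.0
    return spec
```

Each augmented spectrogram is added to the training set alongside the original; per the abstract's conclusion, transforms such as noise injection or vertical flipping should be avoided here because they can mimic or distort clinically meaningful patterns.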

