Classifying Heart Sound Recordings using Deep Convolutional Neural Networks and Mel-Frequency Cepstral Coefficients

Author(s):  
Jonathan Rubin ◽  
Rui Abreu ◽  
Anurag Ganguli ◽  
Saigopal Nelaturi ◽  
Ion Matei ◽  
...  
2019 ◽  
Vol 9 (8) ◽  
pp. 1692-1704
Author(s):  
Wei Chen ◽  
Qiang Sun ◽  
Jue Wang ◽  
Huiqun Wu ◽  
Hui Zhou ◽  
...  

Most current automated phonocardiogram (PCG) classification methods rely on PCG segmentation. It is common to segment the PCG signal and then extract discriminative features for computer-aided auscultation or heart sound classification. However, accurate segmentation of the fundamental heart sounds depends greatly on the quality of the heart sound signals. In addition, methods that rely heavily on a segmentation algorithm considerably increase the computational burden. To address these two issues, we have developed a novel approach that classifies normal and abnormal heart sounds from unsegmented PCG signals. A deep convolutional neural network (DCNN) method is proposed for recognizing normal and abnormal cardiac conditions. In the proposed method, one-dimensional heart sound signals are first converted into two-dimensional, three-channel feature maps, where the channels represent the static, delta and delta-delta Mel-frequency spectral coefficient (MFSC) features. These artificial images are then fed to the proposed DCNN to train and evaluate normal and abnormal heart sound signals. A majority-vote strategy is used to obtain the final category of each PCG recording. Sensitivity (Se), specificity (Sp) and mean accuracy (MAcc) are used as the evaluation metrics. Experiments demonstrated that our approach achieved a significant improvement, with a Se, Sp and MAcc of 92.73%, 96.90% and 94.81%, respectively. The proposed method improves the MAcc by 5.63% compared with the best result in the CinC Challenge 2016. In addition, it shows better robustness when applied to long heart sound recordings. The proposed DCNN-based method achieves the best accuracy in recognizing normal and abnormal heart sounds without a segmentation preprocessing step, and it significantly improves the classification performance compared with current state-of-the-art algorithms.
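A rough, illustrative sketch (not the authors' code) of the feature pipeline described above: a raw PCG signal is converted into a three-channel MFSC map (static, delta, delta-delta), and a majority vote combines per-segment predictions. It assumes librosa and NumPy; the sampling rate, mel-band count and hop length are placeholder values.

```python
import numpy as np
import librosa

def mfsc_feature_map(pcg, sr=2000, n_mels=64, n_fft=256, hop_length=64):
    """Convert a 1-D PCG signal into an (n_mels, frames, 3) feature map."""
    # MFSC = log mel spectrogram (MFCC without the final DCT step).
    mel = librosa.feature.melspectrogram(y=pcg, sr=sr, n_fft=n_fft,
                                         hop_length=hop_length, n_mels=n_mels)
    static = librosa.power_to_db(mel)
    delta = librosa.feature.delta(static, order=1)    # first temporal derivative
    delta2 = librosa.feature.delta(static, order=2)   # second temporal derivative
    # Stack the three views as channels of an image-like CNN input.
    return np.stack([static, delta, delta2], axis=-1)

def majority_vote(segment_labels):
    """Pick the most frequent class among per-segment CNN predictions."""
    values, counts = np.unique(np.asarray(segment_labels), return_counts=True)
    return values[np.argmax(counts)]
```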


2019 ◽  
Vol 25 (3) ◽  
pp. 71-76 ◽  
Author(s):  
Grega Vrbancic ◽  
Iztok Jr. Fister ◽  
Vili Podgorelec

The analysis of non-stationary signals commonly includes a signal segmentation step, which divides such signals into smaller time series that can be considered stationary and are thus easier to process. Most methods for signal segmentation combine complex filtering, transformation and feature-extraction techniques with various kinds of classifiers, which, especially in the field of biomedical signals, tend to perform poorly on signals obtained in highly variable environments. To address these problems, we designed a new method for the segmentation of heart sound signals using deep convolutional neural networks, which works in a straightforward, automatic manner and does not require any complex pre-processing. The proposed method was tested on a set of heartbeat sound clips collected by non-experts with mobile devices in highly variable environments with excessive background noise. The obtained results show that the proposed method outperforms other methods that take advantage of domain knowledge for the analysis of the signals. Based on these encouraging experimental results, we believe the proposed method can serve as a solid basis for the further development of automatic segmentation of highly variable signals using deep neural networks.
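The abstract does not give the network architecture, so the following is only a minimal, hypothetical sketch of the general idea: a 1-D convolutional network in PyTorch that maps raw, unfiltered heartbeat audio directly to per-frame segmentation labels without hand-crafted preprocessing. The layer sizes, strides and two-class labeling are assumptions for illustration.

```python
import torch
import torch.nn as nn

class HeartSoundSegmenter(nn.Module):
    """Framewise labeling of raw heartbeat audio (hypothetical architecture)."""
    def __init__(self, n_classes=2):  # e.g. heart sound vs. background (assumed)
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=15, stride=2, padding=7), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=15, stride=2, padding=7), nn.ReLU(),
            nn.Conv1d(32, 64, kernel_size=15, stride=2, padding=7), nn.ReLU(),
        )
        self.head = nn.Conv1d(64, n_classes, kernel_size=1)  # per-frame logits

    def forward(self, x):                    # x: (batch, 1, samples)
        return self.head(self.features(x))   # (batch, n_classes, samples / 8)

# Usage: feed raw audio and take the argmax over classes for each frame.
logits = HeartSoundSegmenter()(torch.randn(4, 1, 8000))
frame_labels = logits.argmax(dim=1)          # shape: (4, 1000)
```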


2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is very important in practice because real images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research on image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used together for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated when the degradation parameters of degraded images are unknown. The experimental results show that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
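A minimal sketch of the two-input idea described above, assuming PyTorch: the degraded image passes through a small convolutional backbone, the scalar degradation parameter (e.g. a noise level) is embedded separately, and the two are concatenated before the classifier. The architecture and layer sizes are illustrative, not the paper's network.

```python
import torch
import torch.nn as nn

class DegradationAwareClassifier(nn.Module):
    """Classifier with two inputs: degraded image and degradation parameter."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Embed the scalar degradation parameter before fusing it with image features.
        self.param_embed = nn.Sequential(nn.Linear(1, 16), nn.ReLU())
        self.classifier = nn.Linear(64 + 16, n_classes)

    def forward(self, image, degradation_param):
        feats = self.backbone(image)             # (batch, 64)
        p = self.param_embed(degradation_param)  # (batch, 16)
        return self.classifier(torch.cat([feats, p], dim=1))

# If the true parameter is unknown, a separate estimator network could predict it
# from the degraded image before classification, as the abstract describes.
logits = DegradationAwareClassifier()(torch.randn(2, 3, 64, 64), torch.rand(2, 1))
```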


2019 ◽  
Vol 277 ◽  
pp. 02024 ◽  
Author(s):  
Lincan Li ◽  
Tong Jia ◽  
Tianqi Meng ◽  
Yizhe Liu

In this paper, an accurate two-stage deep learning method is proposed to detect vulnerable plaques in cardiovascular images. Firstly, a fully convolutional network (FCN) named U-Net is used to segment the original intravascular optical coherence tomography (IVOCT) cardiovascular images. We experiment with different threshold values to find the best threshold for removing noise and background from the original images. Secondly, a modified Faster R-CNN is adopted for precise detection. The modified Faster R-CNN utilizes six anchor scales (12², 16², 32², 64², 128², 256²) instead of the conventional one-scale or three-scale approaches. We first present three problems in cardiovascular vulnerable plaque diagnosis and then demonstrate how our method solves them. The proposed method applies deep convolutional neural networks to the whole diagnostic procedure. Test results show that the recall rate, precision rate, IoU (intersection-over-union) and total score are 0.94, 0.885, 0.913 and 0.913, respectively, higher than those of the first-place team in the CCCV2017 Cardiovascular OCT Vulnerable Plaque Detection Challenge. The AP of the designed Faster R-CNN is 83.4%, higher than conventional approaches that use one-scale or three-scale anchors. These results demonstrate the superior performance of the proposed method and the power of deep learning approaches in diagnosing cardiovascular vulnerable plaques.
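The second stage's anchor configuration can be sketched with torchvision's Faster R-CNN implementation, as below. The six anchor scales follow the values quoted in the abstract; the backbone, RoI pooler settings and two-class setup (background vs. vulnerable plaque) are assumptions for illustration, not the authors' exact model.

```python
import torchvision
from torchvision.models.detection import FasterRCNN
from torchvision.models.detection.rpn import AnchorGenerator
from torchvision.ops import MultiScaleRoIAlign

# Backbone is illustrative; the paper's backbone is not reproduced here.
backbone = torchvision.models.mobilenet_v2(weights=None).features
backbone.out_channels = 1280  # FasterRCNN needs the feature-map channel depth

# Six anchor scales (12, 16, 32, 64, 128, 256 pixels) instead of the usual three,
# with standard aspect ratios, so that small plaques are covered by proposals.
anchor_generator = AnchorGenerator(
    sizes=((12, 16, 32, 64, 128, 256),),
    aspect_ratios=((0.5, 1.0, 2.0),),
)
roi_pooler = MultiScaleRoIAlign(featmap_names=["0"], output_size=7, sampling_ratio=2)

# Two classes assumed: background and vulnerable plaque.
model = FasterRCNN(backbone, num_classes=2,
                   rpn_anchor_generator=anchor_generator,
                   box_roi_pool=roi_pooler)
```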

