Phonocardiogram Classification Using Deep Convolutional Neural Networks with Majority Vote Strategy

2019 ◽  
Vol 9 (8) ◽  
pp. 1692-1704
Author(s):  
Wei Chen ◽  
Qiang Sun ◽  
Jue Wang ◽  
Huiqun Wu ◽  
Hui Zhou ◽  
...  

Most current automated phonocardiogram (PCG) classification methods rely on PCG segmentation: the segmented PCG signals are typically used to extract efficient features for computer-aided auscultation or heart sound classification. However, accurate segmentation of the fundamental heart sounds depends heavily on the quality of the heart sound signals. In addition, methods that rely heavily on a segmentation algorithm considerably increase the computational burden. To address these two issues, we have developed a novel approach that classifies normal and abnormal cardiac conditions from un-segmented PCG signals. A deep Convolutional Neural Network (DCNN) method is proposed for recognizing normal and abnormal heart sounds. In the proposed method, one-dimensional heart sound signals are first converted into two-dimensional feature maps with three channels, each representing Mel-frequency spectral coefficient (MFSC) features: static, delta, and delta-delta. These artificial images are then fed to the proposed DCNN for training and evaluation on normal and abnormal heart sound signals. A majority vote strategy is then applied to obtain the final category of each PCG signal. Sensitivity (Se), Specificity (Sp), and Mean accuracy (MAcc) are used as the evaluation metrics. Experiments demonstrated that our approach achieved a significant improvement, with Se, Sp, and MAcc of 92.73%, 96.90%, and 94.81%, respectively. The proposed method improves the MAcc by 5.63% compared with the best result in the CinC Challenge 2016. In addition, it is more robust when applied to long heart sound recordings. The proposed DCNN-based method achieves the best accuracy in recognizing normal and abnormal heart sounds without a segmentation preprocessing step, and it significantly improves classification performance compared with the current state-of-the-art algorithms.
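The two mechanical pieces of the pipeline above, stacking static/delta/delta-delta features into a three-channel map and the majority vote over per-segment predictions, can be sketched in a few lines. This is a minimal numpy illustration, not the authors' implementation; the function names and the use of `np.gradient` as the delta operator are assumptions.

```python
import numpy as np

def stack_mfsc_channels(static):
    # static: (n_mels, n_frames) log mel-spectral coefficients.
    # Approximate delta / delta-delta with frame-wise gradients and
    # stack them as three image channels, as described in the abstract.
    delta = np.gradient(static, axis=1)
    delta2 = np.gradient(delta, axis=1)
    return np.stack([static, delta, delta2], axis=-1)  # (n_mels, n_frames, 3)

def majority_vote(segment_labels):
    # segment_labels: per-segment predictions, 0 (normal) / 1 (abnormal).
    # The recording's final label is the most frequent segment label.
    votes = np.bincount(np.asarray(segment_labels), minlength=2)
    return int(np.argmax(votes))
```

In practice each `(n_mels, n_frames, 3)` map would be fed to the DCNN like an RGB image, and `majority_vote` would aggregate the network's outputs across the segments of one recording.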

Author(s):  
Chunlei Liu ◽  
Wenrui Ding ◽  
Xin Xia ◽  
Yuan Hu ◽  
Baochang Zhang ◽  
...  

Binarized convolutional neural networks (BCNNs) are widely used to improve the memory and computation efficiency of deep convolutional neural networks (DCNNs) for applications on mobile devices and AI chips. However, current BCNNs cannot fully exploit their corresponding full-precision models, causing a significant performance gap between them. In this paper, we propose rectified binary convolutional networks (RBCNs), towards optimized BCNNs, by combining full-precision kernels and feature maps to rectify the binarization process in a unified framework. In particular, we use a GAN to train the 1-bit binary network with the guidance of its corresponding full-precision model, which significantly improves the performance of BCNNs. The rectified convolutional layers are generic and flexible and can be easily incorporated into existing DCNNs such as WideResNets and ResNets. Extensive experiments demonstrate the superior performance of the proposed RBCNs over state-of-the-art BCNNs. In particular, our method shows strong generalization on the object tracking task.
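To make the "1-bit" idea concrete: a common baseline binarization (XNOR-Net-style, not the RBCN rectification itself) approximates each full-precision kernel as a scaled sign pattern, w ≈ α·b with b ∈ {-1, +1}. A minimal numpy sketch, with illustrative names:

```python
import numpy as np

def binarize_kernel(w):
    # 1-bit approximation of a kernel: keep only the sign pattern,
    # scaled by the mean absolute value so the magnitude is roughly preserved.
    alpha = np.mean(np.abs(w))            # per-kernel scaling factor
    b = np.where(w >= 0, 1.0, -1.0)       # binary weights in {-1, +1}
    return alpha, b
```

The performance gap the paper targets is exactly the information lost in this projection; RBCN's contribution is to rectify the binarized network using the full-precision model (via a GAN) during training.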


2021 ◽  
Vol 11 (7) ◽  
pp. 3155
Author(s):  
Guo-Shiang Lin ◽  
Kuan-Ting Lai ◽  
Jian-Ming Syu ◽  
Jen-Yung Lin ◽  
Sin-Kuo Chai

In this paper, an efficient instance segmentation scheme based on deep convolutional neural networks is proposed to deal with unconstrained psoriasis images for computer-aided diagnosis. To achieve instance segmentation, the You Only Look At CoefficienTs (YOLACT) network, composed of a backbone, a feature pyramid network (FPN), Protonet, and a prediction head, is used to process psoriasis images. The backbone network extracts feature maps from an image, and the FPN generates multiscale feature maps for effectively classifying and localizing objects of multiple sizes. The prediction head predicts the classification information, bounding box information, and mask coefficients of objects. Prototypes generated by Protonet are combined with the mask coefficients to estimate pixel-level shapes for objects. To achieve instance segmentation for unconstrained psoriasis images, YOLACT++ with a pretrained model is retrained via transfer learning. To evaluate the performance of the proposed scheme, unconstrained psoriasis images with different severity levels were collected for testing. In subjective testing, psoriasis regions and normal skin areas were located and classified well. The four performance indices of the proposed scheme were higher than 93% after cross-validation. Regarding object localization, the Mean Average Precision (mAP) rates of the proposed scheme were at least 85.9% after cross-validation. As for efficiency, the frames-per-second (FPS) rate of the proposed scheme reached 15. In addition, the F1 score and the execution speed of the proposed scheme were higher than those of the Mask Region-Based Convolutional Neural Network (R-CNN)-based method. These results show that the proposed scheme based on YOLACT++ can not only detect psoriasis regions but also distinguish psoriasis pixels from background and normal skin pixels well. Furthermore, the proposed instance segmentation scheme outperforms the Mask R-CNN-based method for unconstrained psoriasis images.
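The YOLACT-style step of combining Protonet prototypes with per-instance mask coefficients reduces to a linear combination followed by a sigmoid and a threshold. A minimal numpy sketch of that assembly step (names and the 0.5 threshold are illustrative, not taken from the paper):

```python
import numpy as np

def assemble_mask(prototypes, coeffs, threshold=0.5):
    # prototypes: (H, W, k) prototype masks from Protonet
    # coeffs: (k,) mask coefficients for one instance from the prediction head
    logits = prototypes @ coeffs             # linear combination -> (H, W)
    mask = 1.0 / (1.0 + np.exp(-logits))     # sigmoid to [0, 1]
    return (mask > threshold).astype(np.uint8)
```

Each detected instance (here, one psoriasis lesion) gets its own coefficient vector, so a single shared set of prototypes yields a distinct pixel-level mask per instance.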


2019 ◽  
Vol 8 (1) ◽  
pp. 49 ◽  
Author(s):  
Wei Liu ◽  
Dayu Cheng ◽  
Pengcheng Yin ◽  
Mengyuan Yang ◽  
Erzhu Li ◽  
...  

With the development of remote sensing technology and the advent of high-resolution images, obtaining data has become increasingly convenient. However, the acquisition of small manhole cover information still suffers from the low efficiency of manual surveying and a high rate of missed detections. Recently, deep learning models, especially deep convolutional neural networks (DCNNs), have proven to be effective at object detection. However, several challenges limit the application of DCNNs to manhole cover detection in remote sensing imagery: (1) manhole cover objects often appear at different scales in remotely sensed images, and a DCNN's fixed receptive field cannot match the scale variability of such objects; (2) manhole cover objects in large-scale remotely sensed images are relatively small and densely packed, and DCNNs have poor localization performance when applied to such objects. To address these problems, we propose an effective method for detecting manhole cover objects in remotely sensed images. First, we redesign the feature extractor by adopting the visual geometry group (VGG) network, which increases the variety of receptive field sizes. Detection is then performed by two sub-networks: a multi-scale output network (MON) that generates manhole-cover-like edges from several intermediate layers whose receptive fields match different object scales, and a multi-level convolution matching network (M-CMN) that detects objects from fused feature maps, combining several feature maps so that small, densely packed manhole cover objects produce a stronger response. The results show that our method is more accurate than existing methods at detecting manhole covers in remotely sensed images.
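The fusion idea behind M-CMN, combining a fine, shallow feature map with a coarser, deeper one so small objects keep a strong response, can be sketched minimally in numpy. This is an illustration of multi-level feature fusion in general, not the paper's exact architecture; nearest-neighbour upsampling and channel concatenation are assumed design choices:

```python
import numpy as np

def fuse_feature_maps(fine, coarse):
    # fine:   (H, W, C) feature map from a shallow layer (high resolution)
    # coarse: (H//2, W//2, C) feature map from a deeper layer (more semantics)
    # Upsample the coarse map to the fine resolution (nearest neighbour),
    # then concatenate along the channel axis.
    up = coarse.repeat(2, axis=0).repeat(2, axis=1)
    return np.concatenate([fine, up], axis=-1)   # (H, W, 2C)
```

A real detector would apply further convolutions to the fused map; the point is that small, densely packed objects are represented at full resolution while still benefiting from deeper-layer context.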


2019 ◽  
Vol 25 (3) ◽  
pp. 71-76 ◽  
Author(s):  
Grega Vrbancic ◽  
Iztok Jr. Fister ◽  
Vili Podgorelec

The analysis of non-stationary signals commonly includes a signal segmentation process that divides such signals into smaller time series, which are considered stationary and thus easier to process. Most commonly, methods for signal segmentation utilize complex filtering, transformation, and feature extraction techniques together with various kinds of classifiers which, especially in the field of biomedical signals, do not perform very well and are generally prone to poor performance when dealing with signals obtained in highly variable environments. To address these problems, we designed a new method for the segmentation of heart sound signals using deep convolutional neural networks, which works in a straightforward, automatic manner and does not require any complex pre-processing. The proposed method was tested on a set of heartbeat sound clips collected by non-experts with mobile devices in highly variable environments with excessive background noise. The obtained results show that the proposed method outperforms other methods that take advantage of domain knowledge for the analysis of the signals. Based on the encouraging experimental results, we believe that the proposed method can serve as a solid basis for the further development of automatic segmentation of highly variable signals using deep neural networks.
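The segmentation setting described above starts from splitting a long non-stationary signal into short, approximately stationary windows that a CNN can classify one by one. A minimal numpy framing sketch (the window and hop sizes are illustrative assumptions, not the paper's values):

```python
import numpy as np

def frame_signal(x, win, hop):
    # Split a 1-D signal into overlapping fixed-length windows,
    # each a candidate quasi-stationary segment for a CNN classifier.
    n = 1 + max(0, (len(x) - win) // hop)
    return np.stack([x[i * hop : i * hop + win] for i in range(n)])
```

Each row of the result would be fed to the network, and the per-window outputs stitched back together to yield the segmentation of the full recording.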


2020 ◽  
Vol 2020 (10) ◽  
pp. 28-1-28-7 ◽  
Author(s):  
Kazuki Endo ◽  
Masayuki Tanaka ◽  
Masatoshi Okutomi

Classification of degraded images is very important in practice because images are usually degraded by compression, noise, blurring, etc. Nevertheless, most research in image classification focuses only on clean images without any degradation. Some papers have already proposed deep convolutional neural networks composed of an image restoration network and a classification network to classify degraded images. This paper proposes an alternative approach in which a degraded image and an additional degradation parameter are used together for classification. The proposed classification network has two inputs: the degraded image and the degradation parameter. An estimation network for the degradation parameter is also incorporated for cases where the degradation parameters of degraded images are unknown. The experimental results showed that the proposed method outperforms a straightforward approach in which the classification network is trained with degraded images only.
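One simple way to realize a two-input classifier like the one described, appending the scalar degradation parameter to the image features before the final classifier, can be sketched as follows. This is a generic illustration with made-up names, not the paper's architecture:

```python
import numpy as np

def classify_with_degradation(image_feats, degradation_param, w, b):
    # image_feats: (d,) feature vector from an image backbone
    # degradation_param: scalar (e.g. a noise level or JPEG quality factor)
    # w: (n_classes, d + 1) linear classifier weights; b: (n_classes,) bias
    x = np.concatenate([image_feats, [degradation_param]])
    logits = w @ x + b
    return int(np.argmax(logits))
```

Conditioning the classifier on the degradation parameter lets the same weights adapt their decision to how strongly the input was corrupted; when the parameter is unknown, an estimation network would predict it from the degraded image first.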

