cardiac sound
Recently Published Documents


TOTAL DOCUMENTS

56
(FIVE YEARS 4)

H-INDEX

8
(FIVE YEARS 0)

2021 ◽  
Author(s):  
George Zhou ◽  
Yunchan Chen ◽  
Candace Chien

Abstract
Background: The application of machine learning to cardiac auscultation has the potential to improve the accuracy and efficiency of both routine and point-of-care screenings. The use of convolutional neural networks (CNNs) on heart sound spectrograms in particular has defined state-of-the-art performance. However, the relative paucity of patient data remains a significant barrier to creating models that can adapt to the wide range of between-subject variability. To that end, we examined a CNN model’s performance on automated heart sound classification, before and after various forms of data augmentation, and aimed to identify the optimal augmentation methods for cardiac spectrogram analysis.

Results: We built a standard CNN model to classify cardiac sound recordings as either normal or abnormal. The baseline control model achieved an ROC AUC of 0.945±0.016. Among the data augmentation techniques explored, horizontal flipping of the spectrogram image improved model performance the most, with an ROC AUC of 0.957±0.009. Principal component analysis (PCA) color augmentation and perturbations of the saturation-value (SV) channels of the hue-saturation-value (HSV) color scale achieved ROC AUCs of 0.949±0.014 and 0.946±0.019, respectively. Time and frequency masking resulted in an ROC AUC of 0.948±0.012. Pitch shifting, time stretching and compressing, noise injection, vertical flipping, and applying random color filters all negatively impacted model performance.

Conclusion: Data augmentation can improve classification accuracy by expanding and diversifying the dataset, which protects against overfitting to random variance. However, data augmentation is necessarily domain specific. For example, methods like noise injection have found success in other areas of automated sound classification, but in the context of cardiac sound analysis, noise injection can mimic the presence of murmurs and worsen model performance. Thus, care should be taken to ensure clinically appropriate forms of data augmentation to avoid negatively impacting model performance.
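The augmentations this study found safe can be sketched in a few lines. A minimal illustration, assuming spectrograms are stored as 2-D NumPy arrays (frequency bins by time frames); the function name and mask-size limits below are hypothetical choices, not the study's implementation:

```python
import numpy as np

def augment_spectrogram(spec, rng, max_mask_frac=0.1):
    """Apply the augmentations reported as helpful for heart-sound
    spectrograms: horizontal (time-axis) flipping plus time and
    frequency masking. `spec` is frequency bins x time frames."""
    out = spec[:, ::-1].copy()          # horizontal flip: reverse the time axis

    f_bins, t_frames = out.shape
    # Frequency masking: zero a random band of frequency bins.
    f_width = rng.integers(1, max(2, int(f_bins * max_mask_frac)))
    f_start = rng.integers(0, f_bins - f_width)
    out[f_start:f_start + f_width, :] = 0.0

    # Time masking: zero a random span of time frames.
    t_width = rng.integers(1, max(2, int(t_frames * max_mask_frac)))
    t_start = rng.integers(0, t_frames - t_width)
    out[:, t_start:t_start + t_width] = 0.0
    return out

rng = np.random.default_rng(0)
spec = rng.random((128, 256))           # stand-in for a log spectrogram
aug = augment_spectrogram(spec, rng)
```

Flipping reverses only the time axis, which keeps the spectrogram physiologically plausible for a quasi-periodic heart cycle, while the masks hide random bands so the model cannot latch onto any single time or frequency region.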



2021 ◽  
Vol 3 ◽  
Author(s):  
Madhubabu Anumukonda ◽  
Prasadraju Lakkamraju ◽  
Shubhajit Roy Chowdhury

The study focuses on the extraction of cardiac sound components using a multi-channel micro-electromechanical system (MEMS) microphone-based phonocardiography system. The proposed multi-channel phonocardiography system classifies the cardiac sound components using artificial neural networks (ANNs), with synaptic weights calculated using the inverse delayed (ID) function model of the neuron. The proposed ANN model was simulated in MATLAB and implemented on a field-programmable gate array (FPGA). The system was evaluated on both abnormal and normal samples collected from 30 patients. Experimental results revealed a sensitivity of 99.1% and an accuracy of 0.9.
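For context, the reported figures follow the standard confusion-matrix definitions. A minimal sketch; the counts below are hypothetical, chosen only so the outputs land near the reported values, and are not the study's data:

```python
def sensitivity_and_accuracy(tp, fn, tn, fp):
    """Sensitivity = TP / (TP + FN); accuracy = (TP + TN) / all samples."""
    sensitivity = tp / (tp + fn)
    accuracy = (tp + tn) / (tp + fn + tn + fp)
    return sensitivity, accuracy

# Hypothetical counts: 111 abnormal and 89 normal samples in total.
sens, acc = sensitivity_and_accuracy(tp=110, fn=1, tn=70, fp=19)
# sens ≈ 0.991, acc = 0.90
```

Note that sensitivity alone can be high even when many normal samples are misclassified, which is why the two metrics are reported together.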



2021 ◽  
Vol 69 ◽  
pp. 102836
Author(s):  
Tian Wang ◽  
Meihui Gong ◽  
Xiaoyu Yu ◽  
Guangdong Lan ◽  
Yunbo Shi


2021 ◽  
Vol 27 (1) ◽  
pp. 63-72
Author(s):  
Yettou Nour El Houda Baakek ◽  
Imane Debbal ◽  
Hidayat Boudis ◽  
Sidi Mohammed El Amine Debbal

Abstract This paper presents a study of the impact of clicks and murmurs on the cardiac sounds S1 and S2, and a measure of severity degree through the degree of synchronization between frequencies, using bispectral analysis. The algorithm is applied to three groups of phonocardiogram (PCG) signals: group A comprises PCG signals with a morphology similar to that of the normal PCG signal, without click or murmur; group B comprises PCG signals with a click (reduced murmur); and group C comprises PCG signals with murmurs. The proposed algorithm permits us to evaluate and quantify the relationship between the two sounds S1 and S2 on the one hand, and between clicks and murmurs on the other. The obtained results show that clicks and murmurs can affect both heart sounds, and vice versa. This study shows that the heart works in perfect harmony and that the frequencies of the sounds S1 and S2, clicks, and murmurs are not generated accidentally; they are generated by the same generator system. It might also suggest that one of the obtained frequencies causes the others. The proposed algorithm also permits us to determine the degree of synchronization: it shows high values in group C, indicating high severity degrees; low values for group B; and zero in group A. The algorithm is compared to short-time Fourier transform (STFT) and continuous wavelet transform (CWT) analysis. Although the STFT can correctly provide the time information, it cannot distinguish between the internal components of the sounds S1 and S2; these are successfully determined by the CWT, which, in turn, cannot find the relationship between them. The algorithm was also evaluated against the energetic ratio; the obtained results are very satisfactory and show very good discrimination between the three groups. We can conclude that the three algorithms (STFT, CWT, and bispectral analysis) are complementary, facilitating a good approach to better understanding cardiac sounds.
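A minimal sketch of the direct (FFT-based) bispectrum estimate that this kind of analysis rests on; an off-axis peak indicates quadratic phase coupling, i.e. frequency components produced by one generating system. The segment length and the synthetic test frequencies are illustrative assumptions, not the paper's parameters:

```python
import numpy as np

def bispectrum(x, nfft=128):
    """Direct bispectrum estimate, averaged over non-overlapping segments:
    B(f1, f2) = E[ X(f1) * X(f2) * conj(X(f1 + f2)) ]."""
    segs = len(x) // nfft
    half = nfft // 2
    B = np.zeros((half, half), dtype=complex)
    for k in range(segs):
        X = np.fft.fft(x[k * nfft:(k + 1) * nfft])
        for f1 in range(half):
            for f2 in range(half - f1):          # f1 + f2 stays in range
                B[f1, f2] += X[f1] * X[f2] * np.conj(X[f1 + f2])
    return np.abs(B) / segs

# Synthetic test signal: tones at bins 8, 12, and their sum 20, with
# aligned phases, mimicking components from one nonlinear generator.
n = np.arange(4096)
x = (np.cos(2 * np.pi * 8 * n / 128)
     + np.cos(2 * np.pi * 12 * n / 128)
     + np.cos(2 * np.pi * 20 * n / 128))
B = bispectrum(x)
```

Because the component at bin 20 is the phase-aligned sum of those at bins 8 and 12, the bispectrum magnitude peaks at (8, 12); three independent tones with random phases would average that peak away, which is the basis for the synchronization-degree measure.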



2020 ◽  
Vol 36 (2) ◽  
pp. 427-458
Author(s):  
El‐Sayed A. El‐Dahshan ◽  
Mahmoud M. Bassiouni


Author(s):  
Norezmi Jamal ◽  
Nabilah Ibrahim ◽  
MNAH Sha’abani ◽  
Zulkifli Taat

This paper presents a preliminary study on the detection and identification of cardiac sound components, including the first sound (S1), the second sound (S2), and murmurs. Detection and identification of cardiac sounds is an important process in an automated cardiac sound analysis system, allowing people with cardiovascular disorders to be diagnosed automatically and the existence of murmurs to be determined. Sixteen recorded cardiac sounds (eight normal cardiac sounds, four abnormal cardiac sounds with systolic murmur, and four abnormal cardiac sounds with diastolic murmur) from the PASCAL Classifying Heart Sounds Challenge database were examined. This work is significant in studying the time- and time-frequency-based detection of the characteristics of cardiac sound components. In the time-based analysis, the envelope of the signal energy was used for peak detection of S1, S2, and murmurs, and for analysis of cardiac cycle, systole, and diastole durations, while the time-frequency-based analysis was used to determine the S1, S2, and murmur frequency ranges. The findings yield an overall accuracy of envelope-based detection of 60.85% for normal cardiac sound signals and 57.24% for abnormal cardiac sound signals.
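The time-domain pipeline described here (energy envelope, then peak picking for S1/S2) can be sketched as follows. This assumes SciPy is available; the 30% energy threshold and 0.2 s minimum spacing are illustrative guesses, not the paper's parameters:

```python
import numpy as np
from scipy.signal import hilbert, find_peaks

def detect_heart_sounds(pcg, fs, min_gap_s=0.2):
    """Envelope-based peak detection: take the signal-energy (Hilbert)
    envelope, then pick prominent peaks as candidate S1/S2 locations."""
    envelope = np.abs(hilbert(pcg)) ** 2               # instantaneous energy
    envelope = envelope / envelope.max()               # normalise to [0, 1]
    peaks, _ = find_peaks(envelope, height=0.3,
                          distance=int(min_gap_s * fs))
    return peaks, envelope

# Synthetic PCG stand-in: short tone bursts mimicking S1, S2, next S1.
fs = 1000
t = np.arange(0, 1.0, 1 / fs)
pcg = np.zeros_like(t)
for centre in (0.1, 0.45, 0.9):
    burst = np.exp(-((t - centre) ** 2) / (2 * 0.01 ** 2))
    pcg += burst * np.sin(2 * np.pi * 50 * t)

peaks, env = detect_heart_sounds(pcg, fs)
```

On a real PCG recording the envelope would typically be smoothed or low-pass filtered first, and S1/S2 labels would then be assigned using the asymmetry between systolic and diastolic intervals.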



Author(s):  
Abhishek Kaushal ◽  
Anjali Yadav ◽  
Malay Kishore Dutta ◽  
Carlos Travieso-González ◽  
Luis Esteban-Hernández


Author(s):  
Keita Nishio ◽  
Takashi Kaburagi ◽  
Satoshi Kumagai ◽  
Toshiyuki Matsumoto ◽  
Yosuke Kurihara


2019 ◽  
Vol 11 (4) ◽  
pp. 245-256
Author(s):  
Zeynep Nesrin Coskun ◽  
Tufan Adıguzel ◽  
Guven Catak

The aim of the study was to validate a prototype of a game-based educational tool for improving auscultation skills. The tool was presented to 12 medical school students studying at a foundation university. The data collection instruments of the study were a cardiac sound identification form, an educational tool evaluation form, and an auscultation survey form. Key findings of the study were: (1) each medical student improved their identification skills, and retention was possible; (2) the heart sound identified incorrectly most often before using the tool became the one identified correctly most often afterwards; (3) medical students favored the tool because it is flexible, offers a quicker way of learning and getting feedback, and can be used anytime, anywhere, without interrupting daily life; (4) students repeated the content because they felt skillful and competitive, on an epic real-world mission of tackling problems and saving lives, and reported that they would not have repeated it otherwise; (5) the tool created enthusiasm and motivation for further learning; (6) the tool was effective for users with possibly restricted acoustic ability, which could imply the findings might also apply to improving listening skills and the musical ear. Keywords: stethoscope skills, heart auscultation training, mobile learning, game-based learning, retention.


