Analysis of Heart-Sound Characteristics during Motion Based on a Graphic Representation

Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 181
Author(s):  
Chen-Jun She ◽  
Xie-Feng Cheng ◽  
Kai Wang

In this paper, the graphic representation method is used to study multiple characteristics of heart sounds from a resting state to a state of motion, based on single- and four-channel heart-sound signals. Based on the concept of integration, we explore a representation method for heart sound and blood pressure during motion. We developed a single- and four-channel heart-sound collector and propose new concepts such as the sound-direction vector of a heart sound, the motion-response curve of a heart sound, the difference value, and the state-change-trend diagram. Based on acoustic principles, the reasons for the differences between multiple-channel heart-sound signals are analyzed. Through a comparative analysis of four-channel motion and resting heart sounds, from a resting state to a state of motion, the maximum and minimum similarity distances in the corresponding state-change-trend graphs were found to be 0.0038 and 0.0006, respectively. In addition, we identify several characteristic parameters that are either sensitive to motion (heart-sound amplitude, blood pressure, systolic duration, and diastolic duration) or insensitive to it (sound-direction vector, state-change-trend diagram, and difference value), providing a new technique for the diverse analysis of heart sounds in motion.

2017 ◽  
Vol 25 (03) ◽  
pp. 1750014 ◽  
Author(s):  
Lingguang Chen ◽  
Sean F. Wu ◽  
Yong Xu ◽  
William D. Lyman ◽  
Gaurav Kapur

The current standard technique for blood pressure determination uses a cuff and stethoscope, which is not suited to infants or children, and even for adults this approach yields only 60% accuracy with respect to intra-arterial blood pressure measurements. Moreover, it does not allow continuous monitoring of blood pressure over 24-h periods and longer. In this paper, a new methodology is developed that enables the systolic and diastolic blood pressures to be calculated continuously and non-invasively from the heartbeats measured on a person's chest. To this end, the first and second heart sounds, known as S1 and S2, must be separated from the directly measured heart-sound signals. Next, the individual characteristics of S1 and S2 must be identified and correlated with the systolic and diastolic blood pressures. It is emphasized that the material properties of the human body are highly inhomogeneous, changing from one organ to another, so the speed at which heart-sound signals propagate inside the body cannot be determined precisely. Moreover, the exact locations from which the heart sounds originate are unknown a priori and must be estimated. As such, the computer model developed here is semi-empirical. Even so, validation results have demonstrated that this semi-empirical model can produce relatively robust and accurate calculations of the systolic and diastolic blood pressures with high statistical merit.


Author(s):  
Madhwendra Nath ◽  
Subodh Srivastava ◽  
Niharika Kulshrestha ◽  
Dilbag Singh

Adults born after the 1970s are more prone to cardiovascular diseases, and the death rate due to heart-related diseases is quite high, so there is a need to detect heart diseases early so that they can be treated properly. Valvular heart disease, that is, stenosis and regurgitation of the heart valves, is a major cause of heart failure, and it can be diagnosed at an early stage by detecting and analyzing the heart-sound (HS) signal. In this work, an attempt has been made to detect and localize the major heart sounds, S1 and S2. The work consists of three parts. First, phonocardiogram (PCG) and electrocardiogram (ECG) signals were self-acquired through a self-assembled data-acquisition set-up; the PCG signal was acquired with an electronic stethoscope from all four auscultation areas on the human chest, that is, aortic, pulmonic, tricuspid, and mitral. Second, the major heart sounds, S1 and S2, were detected using the third-order normalized average Shannon energy envelope (3rd-order NASE) algorithm, and auto-thresholding was used to localize the time gates of S1 and S2 and the R-peaks of the simultaneously recorded ECG signal. In the third part, the successful detection rate of S1 and S2 from the self-acquired PCG signals was computed and compared. A total of 280 samples were taken from subjects aged 15-30 years, 70 from each auscultation area of the chest, with simultaneous ECG recording. Detection and localization of S1 and S2 was 74% successful for self-acquired heart sounds recorded from the pulmonic position of the chest; the success rate could be much higher if a standard heart-sound database were used with the same analysis method. The remaining three auscultation areas (aortic, tricuspid, and mitral) showed lower S1 and S2 detection rates for self-acquired PCG signals. This work therefore suggests that the pulmonic position is the most suitable auscultation area for acquiring PCG signals for accurate detection, localization, and analysis of S1 and S2.
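The Shannon-energy envelope at the core of the detection step can be sketched in a few lines. This is a generic illustration rather than the authors' implementation: the frame length and the non-overlapping framing are assumptions, and since classic NASE uses x², the "third-order" variant presumably substitutes |x|³, so the exponent is left as a parameter.

```python
import math

def nase_envelope(signal, frame_len=20, order=3):
    """Normalized average Shannon energy envelope of a heart-sound signal.

    Classic Shannon energy uses x**2; the 'third-order' variant presumably
    substitutes |x|**3, so the exponent is a parameter here.
    """
    peak = max(abs(s) for s in signal) or 1.0
    x = [abs(s) / peak for s in signal]                    # amplitude-normalize
    env = []
    for i in range(0, len(x) - frame_len + 1, frame_len):  # non-overlapping frames
        frame = x[i:i + frame_len]
        e = -sum(v**order * math.log(v**order) for v in frame if v > 0.0) / frame_len
        env.append(e)
    mean = sum(env) / len(env)
    std = math.sqrt(sum((e - mean) ** 2 for e in env) / len(env)) or 1.0
    return [(e - mean) / std for e in env]                 # zero-mean, unit variance
```

Candidate S1/S2 time gates are then the envelope segments that rise above an automatically chosen threshold, which is the role of the auto-thresholding step in the abstract.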


2008 ◽  
Vol 2 (2) ◽  
Author(s):  
Glenn Nordehn ◽  
Spencer Strunic ◽  
Tom Soldner ◽  
Nicholas Karlisch ◽  
Ian Kramer ◽  
...  

Introduction: Cardiac auscultation accuracy is poor: 20% to 40%. Audio-only training with 500 heart-sound cycles over a short time period has significantly improved auscultation scores. Hypothesis: adding visual information to an audio-only format significantly (p<.05) improves short- and long-term accuracy. Methods: Twenty-two 1st- and 2nd-year medical students took an audio-only pre-test. Seven students, comprising our audio-only training cohort, heard audio only of 500 heart-sound repetitions; 15 students, comprising our visual-with-audio cohort, heard the heart sounds while simultaneously watching video spectrograms of them. Immediately after training, both cohorts took audio-only post-tests; the visual-with-audio cohort also took a visual-with-audio post-test, which provided audio with simultaneous video spectrograms. All tests were repeated at six months. Results: All tests given immediately after training showed significant improvement, with no significant difference between the cohorts. Six months later, neither cohort maintained significant improvement on the audio-only post-tests, but the visual-with-audio cohort did maintain significant improvement (p<.05) on the visual-with-audio post-test. Conclusions: Retention of heart-sound recognition from audio alone is not maintained whether training is audio-only or visual with audio; providing visual with audio in both training and testing allows retention of auscultation accuracy. Devices providing visual information during auscultation could prove beneficial.


2007 ◽  
Vol 07 (02) ◽  
pp. 199-214 ◽  
Author(s):  
S. M. DEBBAL ◽  
F. BEREKSI-REGUIG

This work investigates heartbeat cardiac sounds through time–frequency analysis using the wavelet transform. Heart sounds can be utilized more efficiently by medical doctors when they are displayed visually rather than heard through a conventional stethoscope, and they provide clinicians with valuable diagnostic and prognostic information. Although heart-sound analysis by auscultation is convenient as a clinical tool, heart-sound signals are so complex and nonstationary that they are very difficult to analyze in the time or frequency domain alone. We have therefore studied the extraction of features from heart sounds in the time–frequency (TF) domain for heart-sound recognition. The application of the wavelet transform (WT) to heart sounds is described, and the performances of the discrete wavelet transform (DWT) and the wavelet packet transform (WP) are discussed. After these transformations, normal and abnormal heart sounds can be compared to verify the clinical usefulness of the extraction methods for heart-sound recognition.
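The decomposition described here can be illustrated with the simplest mother wavelet, the Haar wavelet; the paper does not state which wavelet family it uses (heart-sound studies often use Daubechies wavelets), so this sketch is illustrative only.

```python
def haar_dwt(signal):
    """One level of the discrete wavelet transform with the Haar wavelet:
    approximation = smoothed trend, detail = local differences."""
    s = 2.0 ** 0.5
    approx = [(signal[i] + signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    detail = [(signal[i] - signal[i + 1]) / s for i in range(0, len(signal) - 1, 2)]
    return approx, detail

def wavelet_decompose(signal, levels=3):
    """Multi-level DWT: recursively split only the approximation band."""
    coeffs = []
    a = list(signal)
    for _ in range(levels):
        a, d = haar_dwt(a)
        coeffs.append(d)       # detail coefficients, finest scale first
    coeffs.append(a)           # final coarse approximation
    return coeffs
```

The distinction the abstract draws between DWT and WP is visible in `wavelet_decompose`: a wavelet *packet* transform would recursively split the detail bands as well, not just the approximation.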


2020 ◽  
Vol 10 (14) ◽  
pp. 4791 ◽  
Author(s):  
Pedro Narváez ◽  
Steven Gutierrez ◽  
Winston S. Percybrooks

A system for the automatic classification of cardiac sounds can be of great help for doctors in the diagnosis of cardiac diseases. Generally speaking, the main stages of such systems are (i) the pre-processing of the heart sound signal, (ii) the segmentation of the cardiac cycles, (iii) feature extraction and (iv) classification. In this paper, we propose methods for each of these stages. The modified empirical wavelet transform (EWT) and the normalized Shannon average energy are used in pre-processing and automatic segmentation to identify the systolic and diastolic intervals in a heart sound recording; then, six power characteristics are extracted (three for the systole and three for the diastole)—the motivation behind using power features is to achieve a low computational cost to facilitate eventual real-time implementations. Finally, different models of machine learning (support vector machine (SVM), k-nearest neighbor (KNN), random forest and multilayer perceptron) are used to determine the classifier with the best performance. The automatic segmentation method was tested with the heart sounds from the Pascal Challenge database. The results indicated an error (computed as the sum of the differences between manual segmentation labels from the database and the segmentation labels obtained by the proposed algorithm) of 843,440.8 for dataset A and 17,074.1 for dataset B, which are better values than those reported with the state-of-the-art methods. For automatic classification, 805 sample recordings from different databases were used. The best accuracy result was 99.26% using the KNN classifier, with a specificity of 100% and a sensitivity of 98.57%. These results compare favorably with similar works using the state-of-the-art methods.
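The classification stage that performed best here (KNN) reduces to very little code once the six power features are in hand. The following is a generic k-nearest-neighbour sketch with illustrative data, not the study's pipeline; the feature vectors, labels, and k value are assumptions.

```python
import math
from collections import Counter

def knn_classify(train, labels, query, k=3):
    """k-nearest-neighbour vote: rank training vectors by Euclidean
    distance to the query and return the majority label among the k closest."""
    ranked = sorted(zip(train, labels), key=lambda p: math.dist(p[0], query))
    votes = Counter(label for _, label in ranked[:k])
    return votes.most_common(1)[0][0]

# Illustrative 2-D feature vectors standing in for the six power features:
train = [(0, 0), (0, 1), (1, 0), (5, 5), (5, 6), (6, 5)]
labels = ["normal"] * 3 + ["abnormal"] * 3
```

One reason KNN pairs well with low-dimensional power features is that it needs no training phase at all, which fits the paper's stated goal of low computational cost for eventual real-time use.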


2019 ◽  
Vol 3 (Supplement_1) ◽  
pp. S112-S113
Author(s):  
Kathy D Wright ◽  
Klatt Maryanna ◽  
Ingrid Adams ◽  
Cady Block ◽  
Todd Monroe ◽  
...  

Abstract The resting state network (RSN) is a target of interest in neurodegenerative research, with evidence linking the functional connectivity of its constituent nodes to mild cognitive impairment (MCI) and dementia. Given the emerging link between Alzheimer's disease and related dementias (ADRD) and hypertension (HTN), non-pharmacological interventions that improve RSN connectivity and blood pressure are needed. The purpose of this pilot study protocol is to deliver a novel intervention, combining mindfulness with the Dietary Approaches to Stop Hypertension (DASH), to improve RSN connectivity and blood pressure in African American (AA) older adults with MCI and HTN. Thirty-six AAs aged 65 and older will be randomized to mindfulness plus DASH, an attention control (non-health-related education), or a control group. The Mindfulness in Motion (MIM) plus DASH intervention is delivered in 8 weekly group sessions of 6-10 participants. MIM includes mindful movements from chair or standing, breathing exercises, and guided meditation. The DASH intervention uses a critical-thinking approach involving problem solving, goal setting, reflection, and the development of self-efficacy. Both components are culturally tailored for older African Americans. Cognitive examination, diet and mindfulness-practice surveys, blood pressure, and functional magnetic resonance imaging (RSN) data are collected at baseline and 3 months. To date, 48 AAs have been screened and 17 enrolled (13 women, 4 men); of the 17 enrolled, 7 were eligible for neuroimaging. Findings from this pilot study may provide preliminary evidence that MIM plus DASH improves RSN connectivity and blood pressure in this population at risk for ADRD.


2017 ◽  
Vol 79 (7) ◽  
Author(s):  
I. Nur Fariza ◽  
Sh-Hussain Salleh ◽  
Fuad Noman ◽  
Hadri Hussain

Human identification and verification systems have been widely used over the past few decades. However, drawbacks of such systems are inevitable, as forgery has grown more sophisticated alongside technological advancement. This study therefore investigates the possibility of using heart sounds as a biometric. The main aim is to find the optimal auscultation point, among the aortic, pulmonic, tricuspid, and mitral areas, whose heart-sound pattern is most suitable for personal identification. Heart sounds were recorded from 92 participants using a Welch Allyn Meditron electronic stethoscope, and Meditron Analyzer software captured the heart-sound and ECG signals simultaneously for a duration of 1 minute. The system combines Mel-frequency cepstral coefficients (MFCC) with a hidden Markov model (HMM). The highest recognition rate, 98.7%, was obtained at the aortic area with an HMM of 1 state and 32 mixtures, and the lowest equal error rate (EER) achieved, 0.9%, was also at the aortic area. In contrast, the best average HMM performance across locations was 99.1% accuracy at the mitral area, with an EER of 17.7% at the tricuspid area.
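The MFCC front end of such a system can be sketched compactly for a single frame. The sample rate, filter and coefficient counts, and the naive DFT below are illustrative simplifications, not the parameters of the study; a real implementation would use an FFT and a windowed, pre-emphasized frame.

```python
import math

def hz_to_mel(f):
    return 2595.0 * math.log10(1.0 + f / 700.0)

def mel_to_hz(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mfcc_frame(frame, sample_rate=4000, n_filters=8, n_coeffs=5):
    """MFCC sketch for one frame: naive-DFT power spectrum ->
    triangular mel filterbank -> log -> DCT-II."""
    n = len(frame)
    half = n // 2 + 1
    power = []
    for k in range(half):                      # naive DFT, fine for short frames
        re = sum(frame[t] * math.cos(2.0 * math.pi * k * t / n) for t in range(n))
        im = sum(frame[t] * math.sin(2.0 * math.pi * k * t / n) for t in range(n))
        power.append((re * re + im * im) / n)
    top_mel = hz_to_mel(sample_rate / 2.0)     # filter edges equally spaced in mel
    bins = [int(mel_to_hz(i * top_mel / (n_filters + 1)) * n / sample_rate)
            for i in range(n_filters + 2)]
    log_e = []
    for j in range(1, n_filters + 1):          # triangular filters
        lo, mid, hi = bins[j - 1], bins[j], bins[j + 1]
        e = sum(power[k] * (k - lo) / max(mid - lo, 1) for k in range(lo, mid))
        e += sum(power[k] * (hi - k) / max(hi - mid, 1)
                 for k in range(mid, min(hi + 1, half)))
        log_e.append(math.log(e + 1e-12))
    # DCT-II decorrelates the log filterbank energies into cepstral coefficients
    return [sum(log_e[j] * math.cos(math.pi * i * (j + 0.5) / n_filters)
                for j in range(n_filters)) for i in range(n_coeffs)]
```

Per-frame coefficient vectors like these are what an HMM then models as its observation sequence, one state mixture per quasi-stationary segment of the heart cycle.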


Entropy ◽  
2020 ◽  
Vol 22 (2) ◽  
pp. 238 ◽  
Author(s):  
Xiefeng Cheng ◽  
Pengfei Wang ◽  
Chenjun She

In this paper, a new method for the biometric characterization of heart sounds based on multimodal multiscale dispersion entropy is proposed. First, the heart sound is periodically segmented, and each single-cycle heart sound is decomposed into a group of intrinsic mode functions (IMFs) by improved complete ensemble empirical mode decomposition with adaptive noise (ICEEMDAN). These IMFs are then segmented into a series of frames, which are used to calculate the refined composite multiscale dispersion entropy (RCMDE) as the characteristic representation of the heart sound. In simulation experiment I, carried out on the open heart-sound databases Michigan, Washington, and Littmann, this feature representation was combined with heart-sound segmentation based on logistic regression (LR) and hidden semi-Markov models (HSMM), and feature selection was performed through the Fisher ratio (FR). Finally, the Euclidean distance (ED) and the nearest-neighbor principle were used for matching and identification, and the recognition accuracy was 96.08%. To improve the practical value of the method, experiment II applied it to a database of 80 heart sounds constructed from 40 volunteers, to examine the effect of single-cycle heart sounds with different starting positions on performance. The experimental results show that a single-cycle heart sound starting at the onset of the first heart sound (S1) gives the highest recognition rate, 97.5%. In summary, the proposed method is effective for heart-sound biometric recognition.
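The dispersion-entropy building block of RCMDE can be sketched as follows for a single scale. The refined composite multiscale version additionally coarse-grains the signal at several scales and averages pattern probabilities across shifted coarse-grainings, which is omitted here; the embedding dimension and class count below are common defaults, not the paper's settings.

```python
import math
from collections import Counter

def dispersion_entropy(signal, m=2, c=3):
    """Dispersion entropy: map samples to c classes through the normal CDF,
    count length-m dispersion patterns, and take the Shannon entropy of the
    pattern distribution, normalized by log(c**m) to lie in [0, 1]."""
    n = len(signal)
    mu = sum(signal) / n
    sigma = math.sqrt(sum((v - mu) ** 2 for v in signal) / n) or 1.0
    classes = []
    for v in signal:                       # normal-CDF mapping to classes 1..c
        y = 0.5 * (1.0 + math.erf((v - mu) / (sigma * math.sqrt(2.0))))
        classes.append(min(c, max(1, round(c * y + 0.5))))
    patterns = Counter(tuple(classes[i:i + m]) for i in range(n - m + 1))
    total = n - m + 1
    h = -sum((k / total) * math.log(k / total) for k in patterns.values())
    return h / math.log(c ** m)
```

A constant signal yields a single repeated pattern and hence entropy 0, while a signal spreading its patterns evenly approaches 1; it is this sensitivity to pattern regularity that makes the measure usable as a per-frame feature.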

