Heart Sound Recognition Technology Based on Deep Learning

Author(s):
Ximing Huai, Siriaraya Panote, Dongeun Choi, Noriaki Kuwahara
2008, Vol 2 (2)
Author(s):
Glenn Nordehn, Spencer Strunic, Tom Soldner, Nicholas Karlisch, Ian Kramer, ...

Introduction: Cardiac auscultation accuracy is poor, ranging from 20% to 40%. Audio-only training with 500 heart-sound cycles over a short time period has been shown to significantly improve auscultation scores. Hypothesis: adding visual information to an audio-only format significantly (p<.05) improves short- and long-term accuracy. Methods: Twenty-two first- and second-year medical students took an audio-only pre-test. Seven students, comprising the audio-only training cohort, heard 500 repetitions of heart sounds in audio-only format. Fifteen students, comprising the paired visual-with-audio cohort, heard the same heart sounds while simultaneously watching video spectrograms of them. Immediately after training, both cohorts took audio-only post-tests; the visual-with-audio cohort also took a visual-with-audio post-test, which provided audio with simultaneous video spectrograms. All tests were repeated six months later. Results: All tests given immediately after training showed significant improvement, with no significant difference between the cohorts. Six months later, neither cohort maintained significant improvement on the audio-only post-tests, but the visual-with-audio cohort did maintain significant improvement (p<.05) on the visual-with-audio post-test. Conclusions: Retention of heart-sound recognition from audio alone is not maintained whether training is audio-only or visual with audio. Providing visual information with audio in both training and testing allows retention of auscultation accuracy. Devices providing visual information during auscultation could therefore prove beneficial.
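For readers curious how a spectrogram display of the kind used in the visual-with-audio training can be produced, here is a minimal Python sketch. It is an illustration only, not the study's actual material: the file name heart_sound.wav is hypothetical, and the window and frequency-band settings are assumptions chosen because heart sounds concentrate below roughly 600 Hz.

```python
# Minimal sketch: render a spectrogram of a heart-sound recording,
# similar in spirit to the visual training material described above.
# Assumptions: a WAV file "heart_sound.wav" (hypothetical name);
# FFT/window parameters are illustrative, not taken from the study.
import numpy as np
import matplotlib.pyplot as plt
from scipy.io import wavfile
from scipy.signal import spectrogram

fs, audio = wavfile.read("heart_sound.wav")   # sampling rate, samples
if audio.ndim > 1:                            # collapse stereo to mono
    audio = audio.mean(axis=1)
audio = audio.astype(np.float64)

# Heart sounds are low-frequency, so a fairly long analysis window
# gives usable frequency resolution.
f, t, Sxx = spectrogram(audio, fs=fs, nperseg=1024, noverlap=768)

plt.pcolormesh(t, f, 10 * np.log10(Sxx + 1e-12), shading="gouraud")
plt.ylim(0, 800)                              # focus on the heart-sound band
plt.xlabel("Time (s)")
plt.ylabel("Frequency (Hz)")
plt.title("Heart sound spectrogram")
plt.show()
```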


2019, Vol 49, pp. 173-180
Author(s):
Yu Tsao, Tzu-Hao Lin, Fei Chen, Yun-Fan Chang, Chui-Hsuan Cheng, ...

2018, Vol 8 (5), pp. 959-968
Author(s):
Lili Chen, Junlan Ren, Yaru Hao, Xue Hu

Sensors, 2021, Vol 21 (15), pp. 5231
Author(s):
Guoming Li, Yijie Xiong, Qian Du, Zhengxiang Shi, Richard S. Gates

Determining the ingestive behaviors of dairy cows is critical to evaluating their productivity and health status. The objectives of this research were to (1) characterize the relationship between forage species/heights and the sound characteristics of three ingestive behaviors (bites, chews, and chew-bites); (2) comparatively evaluate three deep learning models and optimization strategies for classifying the three behaviors; and (3) examine the ability of deep learning modeling to classify the three ingestive behaviors under various forage characteristics. The results show that the amplitude and duration of the bite, chew, and chew-bite sounds were mostly larger for tall forages (tall fescue and alfalfa) than for short ones. The long short-term memory (LSTM) network trained on a filtered dataset with balanced durations but an imbalanced number of audio files performed better than the alternatives. The best classification performance exceeded 0.93, and the gap between the best and poorest performance across forage species and heights was 0.4–0.5. In conclusion, the deep learning technique could classify dairy cow ingestive behaviors from acoustic signals but was unable to differentiate between them under some forage characteristics. Thus, while the developed tool is useful for supporting precision dairy cow management, it requires further improvement. A minimal sketch of this kind of LSTM classifier follows.
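The sketch below shows the general shape of an LSTM classifier for the three ingestive behaviors. It is not the authors' model: the choice of input features (assumed here to be 40 per audio frame, e.g. log-mel bands), the layer sizes, and all names are assumptions made for illustration.

```python
# Illustrative sketch of an LSTM classifier for the three ingestive
# behaviors (bite, chew, chew-bite). This is NOT the paper's model:
# feature choice, layer sizes, and hyperparameters are assumptions.
import torch
import torch.nn as nn

NUM_CLASSES = 3       # bite, chew, chew-bite
NUM_FEATURES = 40     # e.g., 40 log-mel bands per audio frame (assumed)

class IngestiveBehaviorLSTM(nn.Module):
    def __init__(self, hidden_size=128, num_layers=2):
        super().__init__()
        self.lstm = nn.LSTM(NUM_FEATURES, hidden_size,
                            num_layers=num_layers, batch_first=True)
        self.classifier = nn.Linear(hidden_size, NUM_CLASSES)

    def forward(self, x):                       # x: (batch, time, features)
        out, _ = self.lstm(x)                   # out: (batch, time, hidden)
        return self.classifier(out[:, -1, :])   # classify from last time step

model = IngestiveBehaviorLSTM()
dummy = torch.randn(8, 100, NUM_FEATURES)       # 8 clips, 100 frames each
logits = model(dummy)                           # (8, 3) class scores
print(logits.shape)
```

Classifying from the final hidden state is the simplest design for fixed-length sound events; variable-length events would typically use packed sequences or pooling over time instead.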

