Development and technical validation of a smartphone‐based pediatric cough detection algorithm

2021 ◽  
Author(s):  
Matthijs D Kruizinga ◽  
Ahnjili Zhuparris ◽  
Eva Dessing ◽  
Fas J Krol ◽  
Arwen J Sprij ◽  
...  
Sensors ◽  
2019 ◽  
Vol 19 (6) ◽  
pp. 1357
Author(s):  
Simon Scheurer ◽  
Janina Koch ◽  
Martin Kucera ◽  
Håkon Bryn ◽  
Marcel Bärtschi ◽  
...  

Falls are the primary cause of accidents among elderly people. An important factor in fall severity is the amount of time a person lies on the ground. To minimize consequences through a short reaction time, the motion sensor “AIDE-MOI” was developed. “AIDE-MOI” senses acceleration data and analyzes whether an event is a fall. The threshold-based fall detection algorithm was originally developed using motion data of young subjects collected in a lab setup. The aim of this study was to improve and validate the existing fall detection algorithm. In the two-phase study, twenty subjects (age 86.25 ± 6.66 years) with a high risk of falling (Morse score > 65 points) were recruited to record motion data in real time using the AIDE-MOI sensor. The data collected in the first phase (59 days) were used to optimize the existing algorithm. The optimized second-generation algorithm was evaluated in the second phase (66 days). The data collected in the two phases, which captured 31 real falls, were split into one-minute chunks and labelled as “fall” or “non-fall”. The sensitivity of the threshold-based algorithm improved significantly from 27.3% to 80.0%, and its specificity from 99.9957% (0.43 false alarms per week and subject) to 99.9978% (0.17 false alarms per week and subject).
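The abstract describes the detector only at a high level, so the following is a minimal sketch of a generic threshold-based fall detector over 3-axis acceleration data; the thresholds, sampling rate, and the free-fall-then-impact heuristic are illustrative assumptions, not the AIDE-MOI parameters.

```python
import numpy as np

# Illustrative thresholds, not the AIDE-MOI values (which the abstract does not give).
FREE_FALL_G = 0.4      # magnitude drop suggesting a free-fall phase, in g
IMPACT_G = 2.5         # magnitude spike suggesting ground impact, in g
MAX_GAP_SAMPLES = 50   # impact must follow the free-fall sample within this many samples

def detect_fall(acc_xyz: np.ndarray) -> bool:
    """Return True if a free-fall phase is followed shortly by an impact spike.

    acc_xyz: array of shape (n_samples, 3), acceleration in g.
    """
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    free_fall_idx = np.flatnonzero(magnitude < FREE_FALL_G)
    impact_idx = np.flatnonzero(magnitude > IMPACT_G)
    for ff in free_fall_idx:
        # Look for an impact shortly after each candidate free-fall sample.
        if np.any((impact_idx > ff) & (impact_idx <= ff + MAX_GAP_SAMPLES)):
            return True
    return False

def label_minute_chunks(acc_xyz: np.ndarray, sample_rate_hz: int = 50) -> list:
    """Split a recording into one-minute chunks and label each as 'fall' or 'non-fall',
    mirroring the chunk-based labelling described in the study."""
    chunk_len = 60 * sample_rate_hz
    return [
        "fall" if detect_fall(acc_xyz[start:start + chunk_len]) else "non-fall"
        for start in range(0, len(acc_xyz), chunk_len)
    ]
```

Tuning the two thresholds against the phase-one recordings would correspond to the optimization step reported for the second-generation algorithm; the chunk labelling matches the one-minute evaluation units described in the abstract.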


2021 ◽  
Vol 9 ◽  
Author(s):  
Ahnjili ZhuParris ◽  
Matthijs D. Kruizinga ◽  
Max van Gent ◽  
Eva Dessing ◽  
Vasileios Exadaktylos ◽  
...  

Introduction: The duration and frequency of an infant's crying can be indicative of its health. Manual tracking and labeling of crying is laborious, subjective, and sometimes inaccurate. The aim of this study was to develop and technically validate a smartphone-based algorithm able to automatically detect crying. Methods: For the development of the algorithm, a training dataset containing 897 5-s clips of crying infants and 1,263 clips of non-crying infants and common domestic sounds was assembled from various online sources. OpenSMILE software was used to extract 1,591 audio features per audio clip. A random forest classification algorithm was fitted to distinguish crying from non-crying in each audio clip. For the validation of the algorithm, an independent dataset consisting of real-life recordings of 15 infants was used. A 29-min audio clip was analyzed repeatedly and under differing circumstances to determine the intra- and inter-device repeatability and the robustness of the algorithm. Results: The algorithm obtained an accuracy of 94% in the training dataset and 99% in the validation dataset. The sensitivity in the validation dataset was 83%, with a specificity of 99% and positive and negative predictive values of 75% and 100%, respectively. Reliability of the algorithm appeared to be robust within and across devices, and performance was robust to the distance from the sound source and to barriers between the sound source and the microphone. Conclusion: The algorithm was accurate in detecting cry duration and was robust to various changes in ambient settings.
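The abstract names an openSMILE feature-extraction step followed by a random forest classifier; the sketch below shows that pipeline in outline. The opensmile Python wrapper, the ComParE_2016 feature set, the file names, and the hyperparameters are assumptions, since the exact configuration producing 1,591 features and the training setup are not given in the abstract.

```python
import numpy as np
import opensmile                                   # pip install opensmile
from sklearn.ensemble import RandomForestClassifier

# The abstract does not state which openSMILE configuration yields the 1,591 features;
# the ComParE_2016 functionals set is used here purely as a stand-in.
smile = opensmile.Smile(
    feature_set=opensmile.FeatureSet.ComParE_2016,
    feature_level=opensmile.FeatureLevel.Functionals,
)

def extract_features(clip_paths):
    """Return one acoustic feature vector per (roughly 5-second) audio clip."""
    return np.vstack([smile.process_file(p).to_numpy().ravel() for p in clip_paths])

# Hypothetical file lists standing in for the 897 crying and 1,263 non-crying clips.
cry_clips = ["cry_001.wav", "cry_002.wav"]
other_clips = ["vacuum_001.wav", "speech_001.wav"]

X = extract_features(cry_clips + other_clips)
y = np.array([1] * len(cry_clips) + [0] * len(other_clips))     # 1 = crying, 0 = not crying

clf = RandomForestClassifier(n_estimators=500, random_state=0)  # assumed hyperparameters
clf.fit(X, y)

# Classify a new 5-second clip, e.g. a segment cut from a validation recording.
print(clf.predict(extract_features(["unknown_clip.wav"])))
```

In practice, a long validation recording such as the 29-min clip would be windowed into consecutive 5-second segments and each segment classified separately, yielding an estimate of cry duration.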


2019 ◽  
Vol 28 (3) ◽  
pp. 1257-1267 ◽  
Author(s):  
Priya Kucheria ◽  
McKay Moore Sohlberg ◽  
Jason Prideaux ◽  
Stephen Fickas

Purpose: An important predictor of postsecondary academic success is an individual's reading comprehension skills. Postsecondary readers apply a wide range of behavioral strategies to process text for learning purposes. Currently, no tools exist to detect a reader's use of strategies. The primary aim of this study was to develop Read, Understand, Learn, & Excel, an automated tool designed to detect reading strategy use, and to explore its accuracy in detecting strategies when students read digital, expository text. Method: An iterative design was used to develop the computer algorithm for detecting 9 reading strategies. Twelve undergraduate students read 2 expository texts that were equated for length and complexity. A human observer documented the strategies employed by each reader, whereas the computer used digital sequences to detect the same strategies. Data were then coded and analyzed to determine agreement between the 2 sources of strategy detection (i.e., the computer and the observer). Results: Agreement between the computer- and human-coded strategies was 75% or higher for 6 out of the 9 strategies. Only 3 out of the 9 strategies (previewing content, evaluating the amount of remaining text, and periodic review and/or iterative summarizing) had less than 60% agreement. Conclusion: Read, Understand, Learn, & Excel provides proof of concept that a reader's approach to engaging with academic text can be objectively and automatically captured. Clinical implications and suggestions to improve the sensitivity of the code are discussed. Supplemental Material: https://doi.org/10.23641/asha.8204786
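The agreement analysis lends itself to a small worked example. The sketch below computes per-strategy percent agreement between the human observer's coding and the automated detection; the strategy labels and reader logs are hypothetical placeholders, not the published coding scheme.

```python
# Hypothetical event logs: each coder records which strategies were observed per reader.
# The nine strategy names below are illustrative, not the study's actual taxonomy.
STRATEGIES = [
    "highlighting", "note_taking", "rereading", "previewing_content",
    "evaluating_remaining_text", "periodic_review", "look_back",
    "self_questioning", "summarizing",
]

def percent_agreement(human, computer, strategies=STRATEGIES):
    """Per-strategy percent agreement between two binary codings.

    human, computer: dicts mapping reader_id -> set of strategies detected.
    """
    readers = set(human) | set(computer)
    agreement = {}
    for s in strategies:
        matches = sum(
            (s in human.get(r, set())) == (s in computer.get(r, set()))
            for r in readers
        )
        agreement[s] = 100.0 * matches / len(readers)
    return agreement

human_codes = {"r01": {"rereading", "note_taking"}, "r02": {"previewing_content"}}
computer_codes = {"r01": {"rereading"}, "r02": {"previewing_content"}}
print(percent_agreement(human_codes, computer_codes))
```

With 12 readers and 2 texts, each strategy's agreement would be computed over 24 reader-text observations; the 75% and 60% figures reported in the abstract are thresholds on exactly this kind of per-strategy score.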


2013 ◽  
Vol E96.B (3) ◽  
pp. 910-913 ◽  
Author(s):  
Kilhwan KIM ◽  
Jangyong PARK ◽  
Jihun KOO ◽  
Yongsuk KIM ◽  
Jaeseok KIM

2012 ◽  
Vol E95-B (2) ◽  
pp. 676-679 ◽  
Author(s):  
Guolong CUI ◽  
Lingjiang KONG ◽  
Xiaobo YANG ◽  
Jianyu YANG
Keyword(s):  

Author(s):  
Won-Jae SHIN ◽  
Ki-Won KWON ◽  
Yong-Je WOO ◽  
Hyoungsoo LIM ◽  
Hyoung-Kyu SONG ◽  
...  
Keyword(s):  
