Towards the Interpretation of Sound Measurements from Smartphones Collected with Mobile Crowdsensing in the Healthcare Domain: An Experiment with Android Devices

Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 170
Author(s):  
Robin Kraft ◽  
Manfred Reichert ◽  
Rüdiger Pryss

The ubiquity of mobile devices fosters the combined use of ecological momentary assessments (EMA) and mobile crowdsensing (MCS) in the field of healthcare. This combination not only allows researchers to collect ecologically valid data, but also to use smartphone sensors to capture the context in which these data are collected. The TrackYourTinnitus (TYT) platform uses EMA to track users’ individual subjective tinnitus perception and MCS to capture an objective environmental sound level while the EMA questionnaire is filled in. However, the sound level data cannot be compared directly across the different smartphones used by TYT users, since uncalibrated raw values are stored. This work describes an approach towards making these values comparable. In settings like the one described, evaluating sensor measurements from many different smartphones is becoming increasingly common. The presented approach can therefore be considered a more general solution: it not only shows how the TYT sound level data were made interpretable, but may also guide other researchers who need to interpret sensor data in similar settings. Altogether, the approach shows that measuring sound levels with mobile devices is feasible in healthcare scenarios, although many challenges must be addressed to ensure that the measured values are interpretable.
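As a hedged illustration of the kind of mapping such calibration requires (not the authors' actual procedure), the sketch below converts uncalibrated raw RMS amplitudes to decibel values and applies a per-device offset; the device names and offset values are invented for illustration, with offsets assumed to come from comparison against a reference sound level meter.

```python
import math

# Hypothetical per-device calibration offsets in dB, obtained by comparing
# each smartphone model against a reference sound level meter (invented values).
DEVICE_OFFSETS_DB = {
    "vendor_a_model_1": 8.2,
    "vendor_b_model_2": -3.5,
}

def raw_rms_to_db(raw_rms: float, full_scale: float = 32768.0) -> float:
    """Convert a raw RMS amplitude (16-bit PCM scale assumed) to dB full scale."""
    return 20.0 * math.log10(max(raw_rms, 1e-9) / full_scale)

def calibrated_level(raw_rms: float, device: str) -> float:
    """Apply the device-specific offset so levels are comparable across phones."""
    offset = DEVICE_OFFSETS_DB.get(device, 0.0)  # fall back to 0 dB if unknown
    return raw_rms_to_db(raw_rms) + offset
```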

2021 ◽  
Vol 263 (5) ◽  
pp. 1645-1651
Author(s):  
Jared Paine ◽  
Lily M. Wang

Sound level and occupancy data have been logged in five restaurants by the research team at the University of Nebraska-Lincoln. Sound levels and occupancy were documented at 10-second intervals over periods of two to four hours during active business hours. Noise levels were logged with dosimeters distributed throughout each restaurant, and occupancy was obtained from images recorded by infrared cameras. Previous analyses of these data have focused on average sound levels and statistical metrics, such as L10 and L90 values. This presentation focuses on each restaurant's Acoustical Capacity and Quality of Verbal Communication, as introduced by Rindel (2012). Acoustical Capacity is a metric describing the maximum number of persons for reasonable communication in a space, calculated from the unoccupied reverberation time and the volume of the space. Quality of Verbal Communication is a metric describing the ease with which persons in the space can communicate at a single point in time, depending on the reverberation time, the volume of the space, and the number of occupants in the space.
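For orientation, a commonly cited simplification of Rindel's acoustical capacity is N_max ≈ V / (20·T), with V the room volume in m³ and T the unoccupied reverberation time in seconds; the sketch below uses this rule of thumb, which may differ in detail from the exact formulation in Rindel (2012).

```python
def acoustical_capacity(volume_m3: float, unoccupied_rt_s: float) -> int:
    """Commonly cited approximation of Rindel's acoustical capacity:
    the maximum number of persons for reasonable verbal communication,
    N_max ~ V / (20 * T). See Rindel (2012) for the exact formulation."""
    return int(volume_m3 / (20.0 * unoccupied_rt_s))

# Example: a 600 m^3 dining room with an unoccupied reverberation time of
# 0.8 s has an acoustical capacity of roughly 37 persons under this rule.
print(acoustical_capacity(600.0, 0.8))  # -> 37
```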


2018 ◽  
Vol 157 ◽  
pp. 02042
Author(s):  
Leszek Radziszewski ◽  
Michał Kekez ◽  
Alžbeta Sapietová

The aim of the paper was to reconstruct missing data by applying a model that describes the variability of the sound level over the whole period from 2013 to 2016. Computational intelligence methods, such as fuzzy systems or regression trees, can be used to build such a model. The latter approach was applied here: we built the model with the Cubist regression tree software, using equivalent sound levels recorded in 2013. For reconstructing sound level data over a short period of time (several days), both time-series values and day_of_week values should be used in the training dataset. For reconstructing sound level data over a long period of time (several months), day_of_week values should be used in the training dataset.
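Cubist is a rule-based model not shown here; as a minimal stand-in sketch of the same idea, the Python snippet below trains a scikit-learn regression tree on day-of-week and time-of-day features and fills gaps with its predictions. The file name and column names are assumptions for illustration.

```python
import pandas as pd
from sklearn.tree import DecisionTreeRegressor

# Assumed layout: hourly equivalent sound levels (Leq, dB) in "leq_2013.csv",
# with a "timestamp" column and a "leq_db" column (names are illustrative).
df = pd.read_csv("leq_2013.csv", parse_dates=["timestamp"], index_col="timestamp")
df["day_of_week"] = df.index.dayofweek
df["hour"] = df.index.hour

# Fit the tree on the rows where a measurement exists.
train = df[df["leq_db"].notna()]
model = DecisionTreeRegressor(min_samples_leaf=24)  # guard against overfitting
model.fit(train[["day_of_week", "hour"]], train["leq_db"])

# Impute the missing rows with the model's predictions.
gaps = df[df["leq_db"].isna()]
if not gaps.empty:
    df.loc[gaps.index, "leq_db"] = model.predict(gaps[["day_of_week", "hour"]])
```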


2021 ◽  
Vol 263 (5) ◽  
pp. 1586-1593
Author(s):  
Alice Elizabeth Gonzalez ◽  
Pablo Gianoli Kovar ◽  
Lady Carolina Ramírez ◽  
Micaela Luzardo Rivero

On March 13, 2020, the first cases of COVID-19 were detected in Uruguay. During the first weeks of the pandemic, mobility was significantly reduced under the slogan "If you can, stay home"; confinement was voluntary rather than mandatory. After a couple of months, the number of people affected by the disease dropped sharply. The Municipality of Montevideo, betting on a more human and walkable city, therefore made a section of the city's main avenue pedestrian-only on Saturday afternoons. This resulted in greater enjoyment of the city by its inhabitants, as they had more space to walk while maintaining safe distances between people. It also helped promote commerce, since Ave. 18 de Julio has traditionally been a commercial promenade. Additionally, the sound pressure levels recorded by the Municipality's stationary sound level meters, located at three points along the avenue, showed a reduction in environmental sound levels in the pedestrian areas, improving the acoustic quality of the walk. In this paper, sound pressure levels on Saturday afternoons at different times of the year, before, during, and after the initial lockdown due to the COVID-19 pandemic, are compared and discussed.


2021 ◽  
Vol 11 (1) ◽  
pp. 519-527
Author(s):  
Michał Kekez

Abstract. The aim of the paper was to present a methodology for imputing missing sound level data, covering a period of several months, at multiple noise monitoring stations located along thoroughfares, by applying a single model that describes the variability of the sound level within the tested period. To build the model, a proper set of input attributes was first elaborated, and a training dataset was prepared from equivalent sound levels recorded at one of the thoroughfares. Sound level values in the training data were calculated separately for the following sub-intervals of each 24-hour day: day (6–18), evening (18–22), and night (22–6). Next, a computational intelligence approach, Random Forest, was applied to build the model with the aid of the Weka software. Scaling functions were then derived, and the obtained Random Forest model was used together with these scaling functions to impute data at two other locations in the same city. A statistical analysis of the sound levels at the abovementioned locations during the whole year, before and after imputation, was carried out.
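The paper's Weka model and scaling functions are not reproduced here; the sketch below shows the general shape of such a transfer step under assumed inputs: fit a Random Forest at the reference station, then fit a simple linear scaling on the target station's available data and apply it to the forest's predictions.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def fit_reference_model(X_ref: np.ndarray, y_ref: np.ndarray) -> RandomForestRegressor:
    """Fit the forest at the reference station. X_ref holds assumed features
    (e.g., day of week, sub-interval of day); y_ref holds Leq values in dB."""
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_ref, y_ref)
    return model

def fit_scaling(model: RandomForestRegressor,
                X_target: np.ndarray, y_target: np.ndarray) -> tuple:
    """Hypothetical linear scaling y_target ~ a * y_pred + b, fitted on
    periods where the target station has measurements."""
    y_pred = model.predict(X_target)
    a, b = np.polyfit(y_pred, y_target, 1)
    return a, b

def impute_target(model: RandomForestRegressor,
                  X_missing: np.ndarray, a: float, b: float) -> np.ndarray:
    """Impute the target station's gaps from scaled reference predictions."""
    return a * model.predict(X_missing) + b
```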


2021 ◽  
pp. 175114372110221
Author(s):  
Julie L Darbyshire ◽  
J Duncan Young

Background: Intensive care units are significantly louder than WHO guidelines recommend. Patients are disturbed by activities around them and frequently report disrupted sleep. This can lead to slower recovery and long-term health problems. Environmental sound levels are usually reported as LAeq24, a single daily value that reflects mean sound levels over the previous 24-h period. This may not be the most appropriate measure for intensive care units (ICUs) and other similar areas. Humans experience sound in context, and disturbance will vary according to both the individual and the acoustic features of the ambient sounds. Loudness is one of a number of measures that approximate the human perception of sound, taking into account tone, duration, and frequency, as well as volume. Typically, sounds with higher frequencies, such as alarms, are perceived as louder and more disturbing.

Methods: Sound level data were collected from a single NHS Trust hospital general adult intensive care unit between October 2016 and May 2018. Summary data (mean sound levels (LAeq) and corresponding Zwicker-calculated loudness values) were subsequently analysed by minute, hour, and day.

Results: The overall mean LAeq24 across the study duration was 47.4 dBA. This varied by microphone location. We identified a clear pattern of sound level fluctuations across the 24-h period. Weekends were statistically significantly quieter than weekdays, but this reduction of 0.2 dB is not detectable by human hearing. Peak loudness values over 90 dB were recorded every hour.

Conclusions: Perception of sound is sensitive to the environment and to individual characteristics, and sound levels in the ICU are location specific. This has implications for routine environmental monitoring practices. Peak loudness values are consistently between 90 and 100 dB. These may be driven by alarms and other sudden high-frequency sounds, leading to more disturbance than LAeq24 sound levels suggest. Addressing sounds with high loudness values may improve the ICU environment more than an overall reduction in the 24-h mean decibel value.
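For reference, LAeq is the energy average of the instantaneous A-weighted levels, not their arithmetic mean, which is why a few loud intervals can dominate the daily value. A minimal implementation over hourly samples (the sampling interval here is an assumption for illustration) is:

```python
import numpy as np

def laeq(levels_dba: np.ndarray) -> float:
    """Equivalent continuous level: LAeq = 10 * log10(mean(10 ** (L_i / 10)))."""
    levels = np.asarray(levels_dba, dtype=float)
    return 10.0 * np.log10(np.mean(np.power(10.0, levels / 10.0)))

# 24-hour example: 23 hours at 45 dBA plus 1 hour at 70 dBA gives an LAeq24
# of about 56.5 dBA, far above the arithmetic mean of roughly 46 dBA.
print(round(laeq(np.array([45.0] * 23 + [70.0])), 1))  # -> 56.5
```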


Methodology ◽  
2018 ◽  
Vol 14 (3) ◽  
pp. 95-108 ◽  
Author(s):  
Steffen Nestler ◽  
Katharina Geukes ◽  
Mitja D. Back

Abstract. The mixed-effects location scale model is an extension of a multilevel model for longitudinal data. It allows covariates to affect both the within-subject variance and the between-subject variance (i.e., the intercept variance) beyond their influence on the means. Typically, the model is applied to two-level data (e.g., the repeated measurements of persons), although researchers are often faced with three-level data (e.g., the repeated measurements of persons within specific situations). Here, we describe an extension of the two-level mixed-effects location scale model to such three-level data. Furthermore, we show how the suggested model can be estimated with Bayesian software, and we present the results of a small simulation study that was conducted to investigate the statistical properties of the suggested approach. Finally, we illustrate the approach by presenting an example from a psychological study that employed ecological momentary assessment.
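For orientation, one common parameterization of the two-level mixed-effects location scale model (a Hedeker-style formulation; the exact specification of the paper's three-level extension may differ) is:

```latex
% Person i, repeated measurement j; covariates x, w, u.
\begin{align*}
  y_{ij} &= \mathbf{x}_{ij}'\boldsymbol{\beta} + v_i + \varepsilon_{ij}, \\
  \varepsilon_{ij} &\sim N\bigl(0,\ \sigma^2_{\varepsilon,ij}\bigr),
    \qquad \log \sigma^2_{\varepsilon,ij} = \mathbf{w}_{ij}'\boldsymbol{\tau} + \omega_i, \\
  v_i &\sim N\bigl(0,\ \sigma^2_{v,i}\bigr),
    \qquad \log \sigma^2_{v,i} = \mathbf{u}_{i}'\boldsymbol{\alpha}.
\end{align*}
```

Here the covariates in w shift the within-subject (residual) variance, those in u shift the between-subject (intercept) variance, and the random scale effect ω allows subjects to differ in residual variability beyond what the covariates explain; the three-level extension described in the paper adds an analogous situation level between persons and measurements.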


2021 ◽  
Vol 10 (14) ◽  
pp. 3078
Author(s):  
Sara Akbarzadeh ◽  
Sungmin Lee ◽  
Chin-Tuan Tan

In multi-speaker environments, cochlear implant (CI) users may attend to a target sound source in a different manner from normal hearing (NH) individuals during a conversation. This study investigated the effect of conversational sound levels on the mechanisms adopted by CI and NH listeners in selective auditory attention and how it affects their daily conversation. Nine CI users (five bilateral, three unilateral, and one bimodal) and eight NH listeners participated in this study. Behavioral speech recognition scores were collected using a matrix sentence test, and neural tracking of the speech envelope was recorded using electroencephalography (EEG). Speech stimuli were presented at three different levels (75, 65, and 55 dB SPL) in the presence of two maskers from three spatially separated speakers. Different combinations of assisted/impaired hearing modes were evaluated for CI users, and the outcomes were analyzed in three categories: electric hearing only, acoustic hearing only, and electric + acoustic hearing. Our results showed that increasing the conversational sound level degraded selective auditory attention in electric hearing. On the other hand, increasing the sound level improved selective auditory attention for the acoustic hearing group. In the NH listeners, however, increasing the sound level did not cause a significant change in auditory attention. Our results imply that the effect of sound level on selective auditory attention varies depending on the hearing mode, and that loudness control is necessary for CI users to attend to a conversation with ease.
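The abstract does not detail the neural-tracking analysis; a minimal sketch of one standard approach (correlating an EEG channel with the low-pass-filtered Hilbert envelope of the speech; the cutoff frequency and filter order are assumptions) is:

```python
import numpy as np
from scipy.signal import hilbert, butter, filtfilt

def speech_envelope(speech: np.ndarray, fs: float, cutoff_hz: float = 8.0) -> np.ndarray:
    """Broadband amplitude envelope: magnitude of the analytic signal,
    low-pass filtered to keep the slow modulations EEG can track."""
    env = np.abs(hilbert(speech))
    b, a = butter(4, cutoff_hz / (fs / 2.0), btype="low")
    return filtfilt(b, a, env)

def envelope_tracking(eeg_channel: np.ndarray, speech: np.ndarray, fs: float) -> float:
    """Pearson correlation between an EEG channel and the speech envelope,
    as a simple proxy for neural tracking (equal length and sampling assumed)."""
    env = speech_envelope(speech, fs)
    return float(np.corrcoef(eeg_channel, env)[0, 1])
```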


2021 ◽  
Vol 25 (1) ◽  
pp. 39-42
Author(s):  
Shuochao Yao ◽  
Jinyang Li ◽  
Dongxin Liu ◽  
Tianshi Wang ◽  
Shengzhong Liu ◽  
...  

Future mobile and embedded systems will be smarter and more user-friendly. They will perceive the physical environment, understand human context, and interact with end-users in a human-like fashion. Daily objects will be capable of leveraging sensor data to perform complex estimation and recognition tasks, such as recognizing visual inputs, understanding voice commands, tracking objects, and interpreting human actions. This raises important research questions on how to endow low-end embedded and mobile devices with the appearance of intelligence despite their resource limitations.

