The Spatial Selective Auditory Attention of Cochlear Implant Users in Different Conversational Sound Levels

2021 ◽  
Vol 10 (14) ◽  
pp. 3078
Author(s):  
Sara Akbarzadeh ◽  
Sungmin Lee ◽  
Chin-Tuan Tan

In multi-speaker environments, cochlear implant (CI) users may attend to a target sound source in a different manner from normal hearing (NH) individuals during a conversation. This study investigated the effect of conversational sound levels on the mechanisms of selective auditory attention adopted by CI and NH listeners and how it affects their daily conversation. Nine CI users (five bilateral, three unilateral, and one bimodal) and eight NH listeners participated in this study. Behavioral speech recognition scores were collected using a matrix sentence test, and neural tracking of the speech envelope was recorded using electroencephalography (EEG). Speech stimuli were presented at three different levels (75, 65, and 55 dB SPL) in the presence of two maskers from three spatially separated speakers. Different combinations of assisted/impaired hearing modes were evaluated for CI users, and the outcomes were analyzed in three categories: electric hearing only, acoustic hearing only, and electric + acoustic hearing. Our results showed that increasing the conversational sound level degraded selective auditory attention in electric hearing. On the other hand, increasing the sound level improved selective auditory attention for the acoustic hearing group. In the NH listeners, however, increasing the sound level did not cause a significant change in auditory attention. Our results imply that the effect of sound level on selective auditory attention varies with hearing mode, and that loudness control is necessary for CI users to attend to conversations with ease.
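The selective-attention analysis described above is commonly operationalized by reconstructing the speech envelope from EEG and comparing its correlation with the target and masker envelopes. The following is a minimal sketch of that decision step, not the authors' exact pipeline; all signals here are synthetic and the function name is hypothetical.

```python
import numpy as np

def decode_attention(reconstructed_env, target_env, masker_env):
    """Label the attended stream by which stimulus envelope the
    EEG-reconstructed envelope correlates with more strongly."""
    r_target = np.corrcoef(reconstructed_env, target_env)[0, 1]
    r_masker = np.corrcoef(reconstructed_env, masker_env)[0, 1]
    label = "target" if r_target > r_masker else "masker"
    return label, r_target, r_masker

# Toy data: the reconstruction resembles the target envelope plus noise.
rng = np.random.default_rng(0)
target = rng.standard_normal(1000)
masker = rng.standard_normal(1000)
reconstructed = 0.8 * target + 0.2 * rng.standard_normal(1000)

label, r_t, r_m = decode_attention(reconstructed, target, masker)
```

In a real analysis the reconstructed envelope would come from a trained linear decoder applied to held-out EEG, and the comparison would be made per trial.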

2015 ◽  
Vol 26 (05) ◽  
pp. 494-501 ◽  
Author(s):  
Sandra M. Prentiss ◽  
David R. Friedland ◽  
John J. Nash ◽  
Christina L. Runge

Background: Cochlear implants have shown vast improvements in speech understanding for those with severe to profound hearing loss; however, music perception remains a challenge for electric hearing. It is unclear whether the difficulties arise from limitations of sound processing, the nature of a damaged auditory system, or a combination of both. Purpose: To examine music perception performance with different acoustic and electric hearing configurations. Research Design: Chord discrimination and timbre perception were tested in subjects representing four daily-use listening configurations: unilateral cochlear implant (CI), contralateral bimodal (CIHA), bilateral hearing aid (HAHA), and normal-hearing (NH) listeners. A same-different task was used for discrimination of two chords played on piano. Timbre perception was assessed using a 10-instrument forced-choice identification task. Study Sample: Fourteen adults were included in each group, none of whom were professional musicians. Data Collection and Analysis: The number of correct responses was divided by the total number of presentations to calculate scores in percent correct. Data analyses were performed with Kruskal-Wallis one-way analysis of variance and linear regression. Results: Chord discrimination showed a narrow range of performance across groups, with mean scores ranging between 72.5% (CI) and 88.9% (NH). Significant differences were seen between the NH and all hearing-impaired groups. Both the HAHA and CIHA groups performed significantly better than the CI group, and no significant differences were observed between the HAHA and CIHA groups. Timbre perception was significantly poorer for the hearing-impaired groups (mean scores ranged from 50.3% to 73.9%) compared to NH (95.2%). Significantly better performance was observed in the HAHA group as compared to both groups with electric hearing (CI and CIHA). There was no significant difference in performance between the CIHA and CI groups. Timbre perception was a significantly more difficult task than chord discrimination for both the CI and CIHA groups, yet was the easier task for the NH group. A significant difference between the two tasks was not seen in the HAHA group. Conclusion: Having impaired hearing decreases performance compared to NH across both chord discrimination and timbre perception tasks. For chord discrimination, having acoustic hearing improved performance compared to electric hearing only. Timbre perception distinguished those with acoustic hearing from those with electric hearing. Those with bilateral acoustic hearing, even if damaged, performed significantly better on this task than those requiring electrical stimulation, which may indicate that CI sound processing fails to capture and deliver the necessary acoustic cues for timbre perception. Further analysis of timbre characteristics in electric hearing may contribute to advancements in programming strategies to obtain optimal hearing outcomes.
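The scoring and group comparison described in the Data Collection and Analysis section can be sketched as follows; the per-subject scores here are invented for illustration, not the study's data, and `scipy.stats.kruskal` implements the Kruskal-Wallis one-way analysis of variance mentioned.

```python
import numpy as np
from scipy.stats import kruskal

def percent_correct(n_correct, n_presentations):
    """Score as described: correct responses / total presentations, in percent."""
    return 100.0 * n_correct / n_presentations

# Hypothetical percent-correct scores for four subjects per listening group.
ci   = [70, 72, 75, 71]   # unilateral cochlear implant
ciha = [80, 82, 79, 83]   # contralateral bimodal
haha = [81, 84, 80, 85]   # bilateral hearing aid
nh   = [88, 90, 89, 91]   # normal hearing

# Nonparametric test for any difference among the four groups.
h_stat, p_value = kruskal(ci, ciha, haha, nh)
```

With clearly separated hypothetical groups like these, the test rejects the null hypothesis of equal medians; post hoc pairwise comparisons would then identify which groups differ.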


2020 ◽  
Vol 10 (1) ◽  
Author(s):  
Piotr F. Czempik ◽  
Agnieszka Jarosińska ◽  
Krystyna Machlowska ◽  
Michał P. Pluta

Abstract Sleep disruption is common in patients in the intensive care unit (ICU). The aim of the study was to measure sound levels during sleep-protected time in the ICU, determine sources of sound, and assess the impact of sound levels and patient-related factors on the duration and quality of patients' sleep. The study was performed between 2018 and 2019. A commercially available smartphone application was used to measure ambient sound levels. Sleep duration was measured using the Patient's Sleep Behaviour Observational Tool. Sleep quality was assessed using the Richards-Campbell Sleep Questionnaire (RCSQ). The study population comprised 18 (58%) men and 13 (42%) women. There were numerous sources of sound. The median duration of sleep was 5 (IQR 3.5–5.7) hours. The median score on the RCSQ was 49 (IQR 28–71) out of 100 points. Sound levels were negatively correlated with sleep duration. The cut-off peak sound level, above which sleep duration was shorter than the mean sleep duration in the cohort, was 57.9 dB. Simple smartphone applications can be useful to estimate sound levels in the ICU. There are numerous sources of sound in the ICU. Individual units should identify and eliminate their own sources of sound. Sources of sound producing peak sound levels above 57.9 dB may lead to shorter sleep and should be eliminated from the ICU environment. Sound levels had no effect on sleep quality.
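The screening rule implied by the study's 57.9 dB cutoff can be sketched numerically. This is a toy illustration, not the smartphone application's method: it assumes already-calibrated pressure samples in pascals and uses the standard 20 µPa reference for dB SPL.

```python
import numpy as np

P_REF = 20e-6  # reference sound pressure: 20 micropascals

def peak_spl_db(pressure_pa):
    """Peak sound pressure level in dB SPL from calibrated pressure samples (Pa)."""
    return 20.0 * np.log10(np.max(np.abs(pressure_pa)) / P_REF)

def exceeds_cutoff(pressure_pa, cutoff_db=57.9):
    """Flag a sound source whose peak level exceeds the study's 57.9 dB cutoff."""
    return peak_spl_db(pressure_pa) > cutoff_db

# A tone with a 0.1 Pa peak corresponds to roughly 74 dB SPL,
# well above the 57.9 dB cutoff associated with shortened sleep.
samples = 0.1 * np.sin(np.linspace(0, 2 * np.pi, 1000))
```

In practice a smartphone microphone requires per-device calibration (and typically A-weighting) before its readings can be compared against dB cutoffs like this.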


Author(s):  
Todd D. Hollander ◽  
Michael S. Wogalter

Signal words, such as DANGER and WARNING, have been used in print (visual) warnings with the intention of evoking different levels of perceived hazard. However, there is limited research on whether auditory presentation of these words connotes different levels of perceived hazard. In the present study, five voiced signal words were used to produce sound clips, each composed of the word spoken three times, and were manipulated according to the following factors: speaker gender, word unit duration (fast, slow), and inter-word interval (short, long), with the sound level held constant. Results indicate that sound clips with short word unit duration were given higher carefulness ratings than those with long word unit duration (ps < .01). The results showed a pattern of ratings for the signal words similar to that found in research using print presentations. Implications for the design of voiced warnings are described.


PEDIATRICS ◽  
1975 ◽  
Vol 56 (4) ◽  
pp. 617-617
Author(s):  
Gösta Blennow ◽  
Nils W. Svenningsen ◽  
Bengt Almquist

Recently we reported results from studies of incubator noise levels.1 It was found that in certain types of incubators the noise was considerable, and attention was called to the sound level in the construction of new incubators. We have since had the opportunity to study an improved model of the Isolette Infant Incubator, Model C-86, in which the mechanical noise from the electrically powered motor has been partially eliminated. With this modification it has been possible to lower the low-frequency sound levels to a certain degree in comparison with the levels registered in our earlier study.


2018 ◽  
Vol 16 (1) ◽  
pp. 016003 ◽  
Author(s):  
Ben Somers ◽  
Eline Verschueren ◽  
Tom Francart

2016 ◽  
Vol 116 (6) ◽  
pp. 2550-2563 ◽  
Author(s):  
Calum Alex Grimsley ◽  
David Brian Green ◽  
Shobhana Sivaramakrishnan

The coding of sound level by ensembles of neurons improves the accuracy with which listeners identify how loud a sound is. In the auditory system, the rate at which neurons fire in response to changes in sound level is shaped by local networks. Voltage-gated conductances alter local output by regulating neuronal firing, but their role in modulating responses to sound level is unclear. We tested the effects of L-type calcium channels (CaL: CaV1.1–1.4) on sound-level coding in the central nucleus of the inferior colliculus (ICC) in the auditory midbrain. We characterized the contribution of CaL to the total calcium current in brain slices and then examined its effects on rate-level functions (RLFs) in vivo using single-unit recordings in awake mice. CaL is a high-threshold current and comprises ∼50% of the total calcium current in ICC neurons. In vivo, CaL activates at sound levels that evoke high firing rates. In RLFs that increase monotonically with sound level, CaL boosts spike rates at high sound levels and increases the maximum firing rate achieved. In different populations of RLFs that change nonmonotonically with sound level, CaL either suppresses or enhances firing at sound levels that evoke maximum firing. CaL multiplies the gain of monotonic RLFs with dynamic range and divides the gain of nonmonotonic RLFs with the width of the RLF. These results suggest that a single broad class of calcium channels activates enhancing and suppressing local circuits to regulate the sensitivity of neuronal populations to sound level.
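The multiplicative gain change reported for monotonic rate-level functions (RLFs) can be illustrated with a toy model. The sigmoid below and its parameters are hypothetical, chosen only to show that scaling the whole RLF produces the largest absolute rate increase at high sound levels, as described for CaL.

```python
import numpy as np

def rate_level(level_db, r_max=100.0, midpoint=60.0, slope=0.2):
    """Toy monotonic (sigmoid) rate-level function: firing rate vs. dB SPL."""
    return r_max / (1.0 + np.exp(-slope * (level_db - midpoint)))

levels = np.arange(20, 101, 10)          # sound levels in dB SPL
baseline = rate_level(levels)            # RLF without the CaL contribution

# Multiplicative gain (illustrating CaL boosting spike rates): the whole
# RLF is scaled, so the absolute boost grows with sound level, raising
# the maximum firing rate achieved.
with_gain = 1.5 * baseline
boost = with_gain - baseline
```

A divisive change, as described for the nonmonotonic RLFs, would instead be modeled by dividing the function's gain, narrowing or flattening the response around its peak.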


2005 ◽  
Vol 24 (6) ◽  
pp. 33-37 ◽  
Author(s):  
Charlene Krueger ◽  
Susan Wall ◽  
Leslie Parker ◽  
Rose Nealis

Purpose: Elevated sound levels in the NICU may contribute to undesirable physiologic and behavioral effects in preterm infants. This study describes sound levels in a busy NICU in the southeastern U.S. and compares the findings with recommended NICU noise level standards. Design: NICU sound levels were recorded continuously at nine different locations within the NICU. Hourly measurements of the equivalent continuous sound level (Leq), the sound level exceeded 10 percent of the time (L10), and the maximum sound level (Lmax) were determined. Sample: Sound levels were sampled from nine different locations within the NICU. Main Outcome Variable: Sound levels are described using the hourly, A-weighted Leq, L10, and Lmax. Results: The overall average hourly Leq (M = 60.44 dB, range = 55–68 dB), L10 (M = 59.26 dB, range = 55–66 dB), and Lmax (M = 78.39 dB, range = 69–93 dB) were often above the recommended sound levels (hourly Leq <50 dB, L10 <55 dB, and 1-second Lmax <70 dB). In addition, certain times of day, such as 6–7 AM and 10 AM–12 noon, were noisier than other times of day.
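The three metrics used above have standard definitions that can be computed from a series of dB readings: Leq is the energy average, L10 is the level exceeded 10% of the time (the 90th percentile), and Lmax is the maximum. The sketch below uses simulated per-second levels, not the study's recordings, and compares them against the recommended limits cited.

```python
import numpy as np

def leq(levels_db):
    """Equivalent continuous sound level: energy average of dB readings."""
    return 10.0 * np.log10(np.mean(10.0 ** (np.asarray(levels_db) / 10.0)))

def l10(levels_db):
    """Sound level exceeded 10% of the time (90th percentile)."""
    return float(np.percentile(levels_db, 90))

def lmax(levels_db):
    """Maximum sound level in the measurement period."""
    return float(np.max(levels_db))

# Hypothetical hour of per-second A-weighted levels around 60 dB.
rng = np.random.default_rng(1)
hour = rng.normal(60, 3, 3600)

exceeds = {
    "Leq > 50 dB": leq(hour) > 50.0,
    "L10 > 55 dB": l10(hour) > 55.0,
    "Lmax > 70 dB": lmax(hour) > 70.0,
}
```

Note that Leq is an energy average, so it is always at least as large as the arithmetic mean of the dB values; averaging dB readings directly understates the true equivalent level.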


2018 ◽  
Author(s):  
Eline Verschueren ◽  
Ben Somers ◽  
Tom Francart

Abstract: The speech envelope is essential for speech understanding and can be reconstructed from the electroencephalogram (EEG) recorded while listening to running speech. This so-called neural envelope tracking has been shown to relate to speech understanding in normal-hearing listeners, but has barely been investigated in persons wearing cochlear implants (CIs). We investigated the relation between speech understanding and neural envelope tracking in CI users. EEG was recorded in 8 CI users while they listened to a story. Speech understanding was varied by changing the intensity of the presented speech. The speech envelope was reconstructed from the EEG using a linear decoder and then correlated with the envelope of the speech stimulus as a measure of neural envelope tracking, which was compared to actual speech understanding. This study showed that neural envelope tracking increased with increasing speech understanding in every participant. Furthermore, behaviorally measured speech understanding was correlated with participant-specific neural envelope tracking results, indicating the potential of neural envelope tracking as an objective measure of speech understanding in CI users. This could enable objective and automatic fitting of CIs and pave the way toward closed-loop CIs that adjust continuously and automatically to individual CI users.
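A backward linear decoder of the kind described, mapping (lagged) EEG channels to the speech envelope and scoring reconstruction by Pearson correlation, can be sketched as follows. This is a minimal ridge-regression version with synthetic data; the function names, lag count, and regularization value are illustrative assumptions, not the study's implementation.

```python
import numpy as np

def lagged(eeg, n_lags):
    """Stack time-lagged copies of each EEG channel (samples x channels*lags)."""
    n_samp, n_chan = eeg.shape
    X = np.zeros((n_samp, n_chan * n_lags))
    for lag in range(n_lags):
        X[lag:, lag * n_chan:(lag + 1) * n_chan] = eeg[:n_samp - lag]
    return X

def train_decoder(eeg, envelope, n_lags=5, ridge=1.0):
    """Ridge-regularized least-squares decoder from lagged EEG to envelope."""
    X = lagged(eeg, n_lags)
    w = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ envelope)
    return w

def envelope_tracking(eeg, envelope, w, n_lags=5):
    """Neural envelope tracking: correlation of reconstructed vs. actual envelope."""
    reconstructed = lagged(eeg, n_lags) @ w
    return np.corrcoef(reconstructed, envelope)[0, 1]

# Synthetic data: the envelope is linearly present in 8 EEG channels plus noise.
rng = np.random.default_rng(2)
env = rng.standard_normal(2000)
eeg = np.outer(env, rng.standard_normal(8)) + 0.5 * rng.standard_normal((2000, 8))

w = train_decoder(eeg, env)
r = envelope_tracking(eeg, env, w)
```

In a real analysis the decoder would be trained and evaluated on separate data segments (cross-validation), since evaluating on the training data inflates the correlation.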

