The Implication of Sound Level on Spatial Selective Auditory Attention for Cochlear Implant Users: Behavioral and Electrophysiological Measurement

2021 ◽  
Vol 10 (14) ◽  
pp. 3078 ◽  
Author(s):  
Sara Akbarzadeh ◽  
Sungmin Lee ◽  
Chin-Tuan Tan

In multi-speaker environments, cochlear implant (CI) users may attend to a target sound source differently from normal hearing (NH) individuals during a conversation. This study investigated how conversational sound level affects the selective auditory attention mechanisms adopted by CI and NH listeners and, in turn, their everyday conversations. Nine CI users (five bilateral, three unilateral, and one bimodal) and eight NH listeners participated in this study. Behavioral speech recognition scores were collected using a matrix sentence test, and neural tracking of the speech envelope was recorded using electroencephalography (EEG). Speech stimuli were presented at three different levels (75, 65, and 55 dB SPL) in the presence of two maskers from three spatially separated speakers. Different combinations of assisted/impaired hearing modes were evaluated for the CI users, and the outcomes were analyzed in three categories: electric hearing only, acoustic hearing only, and electric + acoustic hearing. Our results showed that increasing the conversational sound level degraded selective auditory attention in electric hearing, whereas it improved selective auditory attention in the acoustic hearing group. In the NH listeners, increasing the sound level produced no significant change in auditory attention. These results imply that the effect of sound level on selective auditory attention varies with hearing mode, and that loudness control is necessary for CI users to attend to conversation with ease.
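The neural measure referred to above, tracking of the speech envelope with EEG, is often quantified by correlating a low-pass-filtered stimulus envelope with the recorded EEG. The sketch below is a minimal illustration of that general approach, not the authors' pipeline; the function names, sampling rates, lag range, and the use of a broadband Hilbert envelope are all assumptions.

```python
# Minimal illustrative sketch of envelope tracking (NOT the study's pipeline):
# extract a broadband amplitude envelope from the stimulus, low-pass and
# resample it to the EEG rate, then cross-correlate it with an EEG channel.
import numpy as np
from scipy.signal import hilbert, butter, filtfilt, resample

def speech_envelope(audio, fs_audio, fs_eeg, cutoff_hz=8.0):
    """Broadband Hilbert envelope, low-passed and resampled to the EEG rate."""
    env = np.abs(hilbert(audio))
    b, a = butter(3, cutoff_hz / (fs_audio / 2), btype="low")
    env = filtfilt(b, a, env)
    n_out = int(round(len(env) * fs_eeg / fs_audio))
    return resample(env, n_out)

def envelope_tracking(eeg, env, max_lag):
    """Peak normalized correlation between one EEG channel and the stimulus
    envelope over a range of (circular) lags."""
    eeg = (eeg - eeg.mean()) / eeg.std()
    env = (env - env.mean()) / env.std()
    lags = range(-max_lag, max_lag + 1)
    r = [np.corrcoef(np.roll(env, lag), eeg)[0, 1] for lag in lags]
    return max(r)

# Example with synthetic data (1 s of audio at 16 kHz, EEG at 128 Hz):
fs_audio, fs_eeg = 16000, 128
audio = np.random.randn(fs_audio)
env = speech_envelope(audio, fs_audio, fs_eeg)
eeg = np.random.randn(len(env))
print(envelope_tracking(eeg, env, max_lag=32))
```

In attention studies, such a tracking score (or a decoder-based analogue) is typically computed separately for the attended and ignored talkers and then compared across conditions.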


1996 ◽  
Vol 39 (5) ◽  
pp. 936-946 ◽  
Author(s):  
Melanie L. Matthies ◽  
Mario Svirsky ◽  
Joseph Perkell ◽  
Harlan Lane

The articulator positions of a subject with a cochlear implant were measured with an electromagnetic midsagittal articulometer (EMMA) system with and without auditory feedback available to the subject via his implant. Acoustic analysis of sibilant productions included specific measures of their spectral properties as well as the F3 formant amplitude. More general postural characteristics of the utterances, such as speech rate and sound level, were measured as well. Because of the mechanical and aerodynamic interdependence of the articulators, the postural variables must be considered before attributing speech improvement to the selective correction of a phonemic target with the use of auditory feedback. The tongue blade position was related to the shape and central tendency of the /ʃ/ spectra; however, changes in the spectral contrast between /s/ and /ʃ/ were not related to changes in the more general postural variables of rate and sound level. These findings suggest that the cochlear implant is providing this subject with important auditory cues that he can use to monitor his speech and maintain the phonemic contrast between /s/ and /ʃ/.
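For readers unfamiliar with the spectral measures mentioned above, one common way to summarize the "central tendency" of a sibilant spectrum is the spectral centroid of the frication noise, with the /s/–/ʃ/ contrast expressed as a centroid difference. The sketch below illustrates that idea only; the analysis band, window length, and function names are assumptions and not the measures used in this study.

```python
# Illustrative sketch, not the study's analysis: spectral centroid of the
# frication noise and a /s/-/ʃ/ contrast expressed as a centroid difference.
import numpy as np
from scipy.signal import welch

def spectral_centroid(frication, fs, fmin=1000.0, fmax=11000.0):
    """First spectral moment of the frication noise within an analysis band."""
    freqs, psd = welch(frication, fs=fs, nperseg=1024)
    band = (freqs >= fmin) & (freqs <= fmax)
    return np.sum(freqs[band] * psd[band]) / np.sum(psd[band])

def sibilant_contrast(s_token, sh_token, fs):
    """/s/ minus /ʃ/ centroid difference in Hz; /s/ is expected to be higher."""
    return spectral_centroid(s_token, fs) - spectral_centroid(sh_token, fs)

# Example with synthetic 100-ms noise tokens at 22.05 kHz:
fs = 22050
rng = np.random.default_rng(0)
s_token = rng.standard_normal(fs // 10)
sh_token = rng.standard_normal(fs // 10)
print(sibilant_contrast(s_token, sh_token, fs))
```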


Author(s):  
Irina Schierholz ◽  
Constanze Schönermark ◽  
Esther Ruigendijk ◽  
Andrej Kral ◽  
Bruno Kopp ◽  
...  

2021 ◽  
Vol 12 ◽  
Author(s):  
Michel Bürgel ◽  
Lorenzo Picinali ◽  
Kai Siedenburg

Listeners can attend to and track instruments or singing voices in complex musical mixtures, even though the acoustical energy of sounds from individual instruments may overlap in time and frequency. In popular music, lead vocals are often accompanied by sound mixtures from a variety of instruments, such as drums, bass, keyboards, and guitars. However, little is known about how the perceptual organization of such musical scenes is affected by selective attention, and which acoustic features play the most important role. To investigate these questions, we explored the role of auditory attention in a realistic musical scenario. We conducted three online experiments in which participants detected single cued instruments or voices in multi-track musical mixtures. Stimuli consisted of 2-s multi-track excerpts of popular music. In one condition, the target cue preceded the mixture, allowing listeners to selectively attend to the target. In another condition, the target was presented after the mixture, requiring a more “global” mode of listening. Performance differences between these two conditions were interpreted as effects of selective attention. In Experiment 1, detection performance generally depended on the target’s instrument category, but listeners were more accurate when the target was presented before the mixture than after it. Lead vocals appeared to be nearly unaffected by this change in presentation order and achieved the highest accuracy of all instruments, suggesting a particular salience of vocal signals in musical mixtures. In Experiment 2, filtering was used to avoid potential spectral masking of target sounds. Although detection accuracy increased for all instruments, a similar pattern of instrument-specific differences between presentation orders was observed. In Experiment 3, adjusting the sound level differences between the targets reduced the effect of presentation order but did not affect the differences between instruments. While both acoustic manipulations facilitated the detection of targets, vocal signals remained particularly salient, which suggests that the manipulated features did not contribute to vocal salience. These findings demonstrate that lead vocals serve as robust attractor points of auditory attention regardless of the manipulation of low-level acoustical cues.
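Experiment 3 adjusted the sound level differences between the targets; the exact procedure is not specified here, but one simple way to equalize target stems is RMS level matching. The sketch below illustrates that assumption only; the function names and levels are invented for illustration.

```python
# A minimal sketch, assuming "adjusting the sound level differences between
# the targets" amounts to matching each target stem's RMS level to a common
# reference. This is an assumption, not the paper's procedure.
import numpy as np

def rms_db(x):
    """RMS level of a signal in dB relative to a full scale of 1.0."""
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)))

def match_level(target, reference_db):
    """Scale a target stem so its RMS level equals reference_db."""
    gain_db = reference_db - rms_db(target)
    return target * (10.0 ** (gain_db / 20.0))

# Example: bring two hypothetical stems to -20 dB FS.
rng = np.random.default_rng(1)
vocals = 0.3 * rng.standard_normal(44100)
bass = 0.05 * rng.standard_normal(44100)
vocals_m, bass_m = match_level(vocals, -20.0), match_level(bass, -20.0)
print(round(rms_db(vocals_m), 2), round(rms_db(bass_m), 2))
```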


2016 ◽  
Vol 21 (Suppl. 1) ◽  
pp. 48-54 ◽  
Author(s):  
Feike de Graaff ◽  
Elke Huysmans ◽  
Obaid ur Rehman Qazi ◽  
Filiep J. Vanpoucke ◽  
Paul Merkus ◽  
...  

The number of cochlear implant (CI) users is increasing annually, resulting in a growing workload for implant centers in ongoing patient management and evaluation. Remote testing of speech recognition could save time for both the implant centers and the patients. This study addresses two methodological challenges we encountered in the development of a remote speech recognition tool for adult CI users. First, we examined whether speech-recognition-in-noise performance differed when the steady-state masking noise was presented throughout the test (i.e., continuous) rather than stopping after each stimulus, as in standard clinical evaluation (i.e., discontinuous). Second, we used a direct coupling between the audio port of a tablet computer and the accessory input of the sound processor via a personal audio cable. The setup was calibrated to allow presentation of stimuli at a predefined sound level. Finally, differences in frequency response between the audio cable and the microphones were investigated.
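The calibration step described above, presenting stimuli at a predefined sound level over a direct audio connection, is typically handled by measuring a known calibration signal and deriving an offset between digital level (dB FS) and measured dB SPL. The sketch below shows that general idea under assumed values; it is not the authors' calibration procedure, and all names and numbers are illustrative.

```python
# Hedged sketch of level calibration: a calibration tone of known digital RMS
# is measured once (e.g., with a sound level meter or coupler), the dB FS to
# dB SPL offset is stored, and the gain for any target level follows from it.
import numpy as np

def dbfs(x):
    """Digital RMS level in dB relative to full scale."""
    return 20.0 * np.log10(np.sqrt(np.mean(x ** 2)))

def calibration_offset(cal_signal, measured_spl):
    """dB SPL corresponding to 0 dB FS, from one calibration measurement."""
    return measured_spl - dbfs(cal_signal)

def gain_for_target(stimulus, target_spl, offset):
    """Linear gain that should make `stimulus` play at `target_spl` dB SPL."""
    current_spl = dbfs(stimulus) + offset
    return 10.0 ** ((target_spl - current_spl) / 20.0)

# Example: a -20 dB FS calibration tone measured at 70 dB SPL (offset = 90 dB),
# and a sentence that should be presented at 65 dB SPL.
fs = 44100
t = np.arange(fs) / fs
cal_tone = 0.1 * np.sqrt(2) * np.sin(2 * np.pi * 1000 * t)  # ~ -20 dB FS RMS
offset = calibration_offset(cal_tone, measured_spl=70.0)
sentence = 0.05 * np.random.default_rng(2).standard_normal(fs)
print(offset, gain_for_target(sentence, target_spl=65.0, offset=offset))
```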


1998 ◽  
Vol 104 (5) ◽  
pp. 3059-3069 ◽  
Author(s):  
Harlan Lane ◽  
Joseph Perkell ◽  
Jane Wozniak ◽  
Joyce Manzella ◽  
Peter Guiod ◽  
...  

2020 ◽  
Vol 63 (8) ◽  
pp. 2597-2608
Author(s):  
Emily N. Snell ◽  
Laura W. Plexico ◽  
Aurora J. Weaver ◽  
Mary J. Sandage

Purpose The purpose of this preliminary study was to identify a vocal task that could be used as a clinical indicator of the vocal aptitude or vocal fitness required for vocally demanding occupations, in a manner similar to the anaerobic power tests commonly used in exercise science. Performance outcomes for vocal tasks that require rapid acceleration and high force production may be useful as an indirect indicator of the muscle fiber complement and bioenergetic fitness of the larynx, an organ that is difficult to study directly. Method Sixteen women (age range: 19–24 years, M age = 22 years) consented to participate and completed the following performance measures: forced vital capacity, three adapted vocal function tasks, and the horizontal sprint test. Results Within-participant correlational analyses indicated a positive relationship between anaerobic power and the rate over the last second of a laryngeal diadochokinesis task produced at a high fundamental frequency/high sound level. Forced vital capacity was not correlated with any of the vocal function tasks. Conclusions These preliminary results indicate that aspects of the laryngeal diadochokinesis task produced at a high fundamental frequency and high sound level may be useful as an ecologically valid measure of vocal power ability. Quantification of vocal power ability may be useful as a vocal fitness assessment or as an outcome measure for voice rehabilitation and habilitation in patients with vocally demanding jobs.
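The reported analysis is a within-participant correlation between laryngeal diadochokinesis (LDK) rate and anaerobic power; purely as an illustration of that kind of analysis, a Pearson correlation over invented values could look like the sketch below (the numbers carry no empirical meaning).

```python
# Illustrative only: Pearson correlation between hypothetical LDK rates
# (syllables/s over the final second) and hypothetical anaerobic power (W).
import numpy as np
from scipy.stats import pearsonr

ldk_rate_last_second = np.array([5.8, 6.1, 5.5, 6.4, 6.0, 5.9, 6.3, 5.7])  # hypothetical
anaerobic_power = np.array([310., 335., 295., 360., 325., 315., 350., 305.])  # hypothetical

r, p = pearsonr(ldk_rate_last_second, anaerobic_power)
print(f"r = {r:.2f}, p = {p:.3f}")
```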


2020 ◽  
Vol 63 (12) ◽  
pp. 4325-4326 ◽  
Author(s):  
Hartmut Meister ◽  
Katrin Fuersen ◽  
Barbara Streicher ◽  
Ruth Lang-Roth ◽  
Martin Walger

Purpose The purpose of this letter is to compare the results of Skuk et al. (2020) with those of Meister et al. (2016) and to point to a potential general influence of stimulus type. Conclusion Our conclusion is that presenting sentences may give cochlear implant recipients the opportunity to use timbre cues for voice perception. This might not be the case when presenting brief and sparse stimuli, such as consonant–vowel–consonant syllables or single words, which were used in the majority of studies.

