Hearing in Noise: The Importance of Coding Strategies—Normal-Hearing Subjects and Cochlear Implant Users

2019, Vol 9 (4), pp. 734
Author(s): Pierre-Antoine Cucis, Christian Berger-Vachon, Ruben Hermann, Fabien Millioz, Eric Truy, ...

Two schemes are mainly used for coding sounds in cochlear implants: Fixed-Channel and Channel-Picking. This study aims to determine the speech audiometry scores in noise of people using either type of sound coding scheme. Twenty normal-hearing and 45 cochlear implant subjects participated in the experiment. Both populations were tested using dissyllabic words mixed with cocktail-party noise. A cochlear implant simulator was used to test the normal-hearing subjects: it separated the sound into 20 spectral channels, and the eight most energetic were selected to simulate the Channel-Picking strategy. For normal-hearing subjects, scores were higher with the Fixed-Channel strategy than with the Channel-Picking strategy at mid-range signal-to-noise ratios (0 to +6 dB). For cochlear implant users, no significant differences were found between the two coding schemes, although a slight advantage was observed for the Fixed-Channel strategies over the Channel-Picking strategies. For both populations, the signal-to-noise ratio at 50% of the maximum recognition plateau differed in favour of the Fixed-Channel strategy. To conclude, in the most common signal-to-noise ratio conditions, a Fixed-Channel coding strategy may lead to better recognition percentages than a Channel-Picking strategy. Further studies are needed to confirm this.
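For readers unfamiliar with the two schemes, the sketch below illustrates the core difference under stated assumptions: a Fixed-Channel scheme always stimulates the same predefined set of analysis bands, while a Channel-Picking (n-of-m) scheme selects, frame by frame, the most energetic bands. The 20-band FFT filterbank, 8-of-20 selection, and frame length are illustrative choices mirroring the simulator description above, not the exact processing used in the study.

```python
import numpy as np

def band_energies(frame, n_bands=20):
    """Split one analysis frame into n_bands contiguous FFT bands and return their energies."""
    spectrum = np.abs(np.fft.rfft(frame)) ** 2
    bands = np.array_split(spectrum, n_bands)
    return np.array([b.sum() for b in bands])

def channel_picking(frame, n_bands=20, n_selected=8):
    """n-of-m selection: return the indices of the n_selected most energetic bands."""
    energies = band_energies(frame, n_bands)
    return np.sort(np.argsort(energies)[-n_selected:])

def fixed_channel(n_bands=20):
    """Fixed-Channel scheme: every frame stimulates the same predefined band set."""
    return np.arange(n_bands)

# Example: one 8-ms frame of a noisy 500-Hz tone sampled at 16 kHz.
fs = 16000
t = np.arange(int(0.008 * fs)) / fs
frame = np.sin(2 * np.pi * 500 * t) + 0.3 * np.random.randn(t.size)
print("Channel-Picking keeps bands:", channel_picking(frame))
print("Fixed-Channel keeps bands:  ", fixed_channel())
```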

2015, Vol 26 (06), pp. 572-581
Author(s): Stanley Sheft, Min-Yu Cheng, Valeriy Shafiro

Background: Past work has shown that low-rate frequency modulation (FM) may help preserve signal coherence, aid segmentation at word and syllable boundaries, and benefit speech intelligibility in the presence of a masker. Purpose: This study evaluated whether difficulties in speech perception by cochlear implant (CI) users relate to a deficit in the ability to discriminate among stochastic low-rate patterns of FM. Research Design: This is a correlational study assessing the association between the ability to discriminate stochastic patterns of low-rate FM and the intelligibility of speech in noise. Study Sample: Thirteen postlingually deafened adult CI users participated in this study. Data Collection and Analysis: Using modulators derived from 5-Hz lowpass noise applied to a 1-kHz carrier, discrimination thresholds were measured in terms of frequency excursion (both in quiet and with a speech-babble masker present), stimulus duration, and signal-to-noise ratio in the presence of the speech-babble masker. Speech perception ability was assessed in the presence of the same masker. Relationships were evaluated with Pearson product–moment correlation analysis, corrected for family-wise error, and with commonality analysis to determine the unique and common contributions of the psychoacoustic variables to the association with speech ability. Results: Significant correlations were obtained between masked speech intelligibility and three metrics of FM discrimination involving either signal-to-noise ratio or stimulus duration, with shared variance among the three measures accounting for much of the effect. Compared with past results from young normal-hearing adults and from older adults with either normal hearing or a mild-to-moderate hearing loss, mean FM discrimination thresholds obtained from CI users were higher in all conditions. Conclusions: The ability to process the pattern of frequency excursions of stochastic FM may, in part, share a common basis with speech perception in noise. For CI users, discrimination of differences in the temporally distributed place coding of the stimulus could serve as this common basis.
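As a concrete illustration of the stimuli described above, the sketch below generates a stochastic low-rate FM signal: broadband noise is lowpass filtered at 5 Hz, scaled to a chosen peak frequency excursion, and used to modulate a 1-kHz carrier by integrating the instantaneous frequency. The filter order, duration, and excursion value are illustrative assumptions rather than the study's exact stimulus parameters.

```python
import numpy as np
from scipy.signal import butter, lfilter

def stochastic_fm(fs=16000, duration=1.0, carrier=1000.0, excursion_hz=50.0, cutoff=5.0):
    """Return a carrier frequency-modulated by 5-Hz lowpass noise (peak excursion in Hz)."""
    n = int(fs * duration)
    noise = np.random.randn(n)
    b, a = butter(4, cutoff / (fs / 2), btype="low")       # 5-Hz lowpass modulator
    modulator = lfilter(b, a, noise)
    modulator /= np.max(np.abs(modulator))                  # normalize to +/-1
    inst_freq = carrier + excursion_hz * modulator          # instantaneous frequency (Hz)
    phase = 2 * np.pi * np.cumsum(inst_freq) / fs           # integrate frequency to phase
    return np.sin(phase)

# A standard and a comparison stimulus differing only in their random modulator,
# as would be needed for a same/different discrimination trial.
standard = stochastic_fm(excursion_hz=25.0)
comparison = stochastic_fm(excursion_hz=25.0)
```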


2020
Author(s): Tom Gajęcki, Waldo Nogueira

Normal-hearing listeners are able to exploit the audio input perceived by each ear to extract target information in challenging listening scenarios. Bilateral cochlear implant (BiCI) users, however, do not benefit as much from a bilateral input as normal-hearing listeners do. In this study, we investigate the effect that bilaterally linked band selection, bilaterally synchronized electrical stimulation, and ideal binary masks (IdBMs) have on the ability of 10 BiCI users to understand speech in background noise. Performance was assessed with a sentence-based speech intelligibility test in a scenario where the speech signal was presented from the front and the interfering noise from one side. Linked band selection relies on the ear with the more favorable signal-to-noise ratio (SNR), which selects the bands to be stimulated in both CIs. Results show that adding a second CI to the side with the more favorable SNR provided no benefit for any of the tested bilateral conditions. However, when using both devices, speech perception results show that performing linked band selection, in addition to delivering bilaterally synchronized electrical stimulation, leads to an improvement over standard clinical setups. Moreover, the outcomes of this work show that applying IdBMs allows subjects to achieve speech intelligibility scores similar to those obtained without background noise.
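The ideal binary mask condition referenced above can be sketched as follows, assuming access to the separate speech and noise signals: in the short-time Fourier domain, time-frequency units whose local SNR exceeds a criterion keep the mixture, and all others are zeroed. The STFT settings and the 0-dB local criterion are assumptions for illustration; the linked band selection itself would additionally copy the band choices made at the better-SNR ear to the contralateral processor.

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_binary_mask(speech, noise, fs=16000, lc_db=0.0, nperseg=512):
    """Apply an ideal binary mask to the speech+noise mixture using the known components."""
    _, _, S = stft(speech, fs=fs, nperseg=nperseg)
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)
    local_snr_db = 20 * np.log10((np.abs(S) + 1e-12) / (np.abs(N) + 1e-12))
    mask = (local_snr_db > lc_db).astype(float)            # keep units above the criterion
    _, _, X = stft(speech + noise, fs=fs, nperseg=nperseg)
    _, masked = istft(mask * X, fs=fs, nperseg=nperseg)
    return masked

# Toy example with white-noise placeholders for speech and interferer.
fs = 16000
speech = np.random.randn(fs)
noise = 0.5 * np.random.randn(fs)
enhanced = ideal_binary_mask(speech, noise, fs=fs)
```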


2020, Vol 24, pp. 233121652097034
Author(s): Florian Langner, Andreas Büchner, Waldo Nogueira

Cochlear implant (CI) sound processing typically uses a front-end automatic gain control (AGC), reducing the acoustic dynamic range (DR) to control the output level and protect the signal processing against large amplitude changes. However, the AGC can also introduce distortions into the signal and does not allow a direct mapping between acoustic input and electric output. For speech in noise, a reduction in DR can result in lower speech intelligibility due to compressed modulations of speech. This study proposes a CI signal processing scheme that retains the full acoustic DR and has adaptive properties, with the aim of improving the signal-to-noise ratio and overall speech intelligibility. Measurements based on the Short-Time Objective Intelligibility measure and an electrodogram analysis, as well as behavioral tests in up to 10 CI users, were used to compare performance with a single-channel, dual-loop, front-end AGC and with an adaptive back-end multiband dynamic compensation system (Voice Guard [VG]). Speech intelligibility in quiet and at a +10 dB signal-to-noise ratio was assessed with the Hochmair–Schulz–Moser sentence test. A logatome discrimination task with different consonants was performed in quiet. Speech intelligibility was significantly higher in quiet for VG than for AGC, but intelligibility was similar in noise. Participants obtained significantly better scores with VG than with AGC in the logatome discrimination task. The objective measurements predicted significantly better performance for VG. Overall, a dynamic compensation system can outperform a single-stage compression (AGC + linear compression) for speech perception in quiet.
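To make the front-end stage concrete, the sketch below implements a generic single-channel feed-forward AGC of the kind the study uses as a baseline: a level estimate with separate attack and release time constants drives gain reduction above a compression threshold, shrinking the acoustic dynamic range. Threshold, ratio, and time constants are illustrative assumptions, not the dual-loop parameters of the clinical processor or of Voice Guard.

```python
import numpy as np

def simple_agc(x, fs=16000, threshold_db=-20.0, ratio=3.0, attack_ms=5.0, release_ms=75.0):
    """Feed-forward compressor: reduce gain when the smoothed level exceeds the threshold."""
    att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    level = 0.0
    out = np.zeros_like(x, dtype=float)
    for i, sample in enumerate(x):
        rectified = abs(sample)
        coeff = att if rectified > level else rel           # fast attack, slow release
        level = coeff * level + (1.0 - coeff) * rectified
        level_db = 20 * np.log10(level + 1e-12)
        over_db = max(0.0, level_db - threshold_db)
        gain_db = -over_db * (1.0 - 1.0 / ratio)            # static compression above threshold
        out[i] = sample * 10 ** (gain_db / 20)
    return out

# A 1-kHz tone with an abrupt 20-dB level step shows the gain reduction kicking in.
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 1000 * t) * np.where(t < 0.5, 0.05, 0.5)
compressed = simple_agc(tone, fs=fs)
```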


2017, Vol 6 (4), pp. 116
Author(s): Wessam Mostafa, Eman Mohamed, Abdelhalim Zekry

Long Term Evolution Advanced (LTE-A) is the evolution of LTE developed by the 3rd Generation Partnership Project (3GPP). LTE-A exceeds the International Telecommunication Union (ITU) requirements for the 4th Generation (4G), known as International Mobile Telecommunications-Advanced (IMT-Advanced), and was formally introduced in October 2009. This paper presents a study and an implementation of the LTE-A downlink physical layer based on the 3GPP Release 10 standard using MATLAB Simulink. In addition, it provides the LTE-A performance in terms of Bit Error Rate (BER) versus Signal-to-Noise Ratio (SNR) for different modulation and channel coding schemes. Moreover, different Carrier Aggregation (CA) scenarios are modeled and implemented. The Simulink model developed for the LTE-A transceiver can be translated into digital signal processor (DSP) code or into VHDL code for FPGA implementation.
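The kind of BER-versus-SNR curve the paper reports can be sketched for a single, much simpler case: uncoded Gray-mapped QPSK over an AWGN channel. No OFDM, channel coding, or carrier aggregation is modeled here; the block only shows how such a curve is estimated by Monte Carlo simulation.

```python
import numpy as np

def qpsk_ber(snr_db, n_bits=200_000):
    """Simulate uncoded Gray-mapped QPSK over AWGN and return the measured bit error rate."""
    bits = np.random.randint(0, 2, n_bits)
    symbols = ((1 - 2 * bits[0::2]) + 1j * (1 - 2 * bits[1::2])) / np.sqrt(2)   # Es = 1
    snr = 10 ** (snr_db / 10)                                                   # Es/N0
    noise_std = np.sqrt(1.0 / (2.0 * snr))                                      # per real dimension
    rx = symbols + noise_std * (np.random.randn(symbols.size) + 1j * np.random.randn(symbols.size))
    bits_hat = np.empty(n_bits, dtype=int)
    bits_hat[0::2] = (rx.real < 0).astype(int)
    bits_hat[1::2] = (rx.imag < 0).astype(int)
    return np.mean(bits != bits_hat)

for snr_db in range(0, 11, 2):
    print(f"SNR = {snr_db:2d} dB  ->  BER = {qpsk_ber(snr_db):.5f}")
```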


2004, Vol 116 (4), pp. 2395-2405
Author(s): Mead C. Killion, Patricia A. Niquette, Gail I. Gudmundsen, Lawrence J. Revit, Shilpi Banerjee

2015, Vol 24 (4), pp. 477-486
Author(s): Douglas P. Sladen, Todd A. Ricketts

Purpose: Several studies have examined the frequency information available to adult users of cochlear implants when listening in quiet. The objective of this study was to construct frequency importance functions for a group of adults with cochlear implants and a group of adults with normal hearing, both in quiet and at a +10 dB signal-to-noise ratio. Method: Two groups of adults, one with cochlear implants and one with normal hearing, were asked to identify nonsense syllables in quiet and in the presence of 6-talker babble while "holes" were systematically created in the speech spectrum. Frequency importance functions were then constructed. Results: Adults with normal hearing placed greater weight on bands 1, 3, and 4 than on bands 2, 5, and 6, whereas adults with cochlear implants placed equal weight on all bands. The frequency importance functions for each group did not differ between listening in quiet and listening in noise. Conclusions: Adults with cochlear implants distribute perceptual weight across frequency bands differently from adults with normal hearing, though the weight assignment does not differ between quiet and noisy conditions. Generalizing these results to the broader population of adults with implants is constrained by the small sample size.
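The "holes" manipulation described above can be sketched as a band-stop filtering step applied to the stimulus before the babble is added; scoring recognition with each band removed in turn is what yields the band weights of a frequency importance function. The six band edges and the filter order below are illustrative assumptions, not the study's exact analysis bands.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

BAND_EDGES_HZ = [100, 300, 700, 1400, 2800, 4500, 6400]     # assumed edges of 6 bands

def remove_band(x, fs, band_index):
    """Return x with one analysis band suppressed, creating a spectral 'hole'."""
    low, high = BAND_EDGES_HZ[band_index], BAND_EDGES_HZ[band_index + 1]
    sos = butter(4, [low, high], btype="bandstop", fs=fs, output="sos")
    return sosfiltfilt(sos, x)

# Toy usage: knock out band 3 (700-1400 Hz) of a white-noise placeholder stimulus.
fs = 16000
stimulus = np.random.randn(fs)
stimulus_with_hole = remove_band(stimulus, fs=fs, band_index=2)
```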


2005, Vol 48 (5), pp. 1165-1186
Author(s): Tracy S. Fitzgerald, Beth A. Prieve

Although many distortion-product otoacoustic emissions (DPOAEs) may be measured in the ear canal in response to 2 pure-tone stimuli, the majority of clinical studies have focused exclusively on the DPOAE at the frequency 2f1-f2. This study investigated another DPOAE, 2f2-f1, in an attempt to determine the following: (a) the optimal stimulus parameters for its clinical measurement and (b) its utility in differentiating between normal-hearing and hearing-impaired ears at low-to-mid frequencies (≤2000 Hz) when measured either alone or in conjunction with the 2f1-f2 DPOAE. Two experiments were conducted. In Experiment 1, the effects of primary level, level separation, and frequency separation (f2/f1) on 2f2-f1 DPOAE level were evaluated in normal-hearing ears for low-to-mid f2 frequencies (700–2000 Hz). Moderately high-level primaries (60–70 dB SPL) presented at equal levels, or with the f2 primary at a slightly higher level than the f1 primary, produced the highest 2f2-f1 DPOAE levels. When the f2/f1 ratio that produced the highest 2f2-f1 DPOAE levels was examined across participants, the mean optimal f2/f1 ratio across f2 frequencies and primary level separations was 1.08. In Experiment 2, the accuracy with which DPOAE level or signal-to-noise ratio identified hearing status at the f2 frequency as normal or impaired was evaluated using clinical decision analysis. The 2f2-f1 and 2f1-f2 DPOAEs were measured from both normal-hearing and hearing-impaired ears using 2 sets of stimulus parameters: (a) the traditional parameters for measuring the 2f1-f2 DPOAE (f2/f1 = 1.22; L1, L2 = 65, 55 dB SPL) and (b) the new parameters that were deemed optimal for the 2f2-f1 DPOAE in Experiment 1 (f2/f1 = 1.073; L1 = L2 = 65 dB SPL). Identification of hearing status using 2f2-f1 DPOAE level and signal-to-noise ratio was more accurate with the new stimulus parameters than when the 2f2-f1 DPOAE was recorded using the traditional parameters. However, identification of hearing status was less accurate for the 2f2-f1 DPOAE measured using the new parameters than for the 2f1-f2 DPOAE measured using the traditional parameters. No statistically significant improvements in test performance were achieved when the information from the 2 DPOAEs was combined, either by summing the DPOAE levels or by using logistic regression analysis.
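To keep the two distortion products straight, the sketch below computes the primary and DPOAE frequencies for a given f2 and f2/f1 ratio, and shows one common way a DPOAE level and its signal-to-noise ratio can be read from a spectrum, namely comparing the FFT bin at the DPOAE frequency with the mean of neighboring bins as a noise-floor estimate. The FFT length, window, and neighboring-bin noise estimate are illustrative assumptions, not the study's measurement system.

```python
import numpy as np

def dpoae_frequencies(f2, ratio=1.22):
    """Return (f1, 2f1-f2, 2f2-f1) in Hz for a given f2 and f2/f1 ratio."""
    f1 = f2 / ratio
    return f1, 2 * f1 - f2, 2 * f2 - f1

def dpoae_level_and_snr(recording, fs, dp_freq, n_fft=8192):
    """Estimate DPOAE level (dB re: FFT magnitude) and SNR re: a neighboring-bin noise floor."""
    spectrum = np.abs(np.fft.rfft(recording[:n_fft] * np.hanning(n_fft)))
    freqs = np.fft.rfftfreq(n_fft, 1.0 / fs)
    k = int(np.argmin(np.abs(freqs - dp_freq)))
    signal_db = 20 * np.log10(spectrum[k] + 1e-12)
    neighbors = np.r_[spectrum[k - 6:k - 1], spectrum[k + 2:k + 7]]   # bins around, excluding k
    noise_db = 20 * np.log10(np.mean(neighbors) + 1e-12)
    return signal_db, signal_db - noise_db

# The 'new' parameters from Experiment 1 applied to an f2 of 2000 Hz.
f1, dp_2f1_f2, dp_2f2_f1 = dpoae_frequencies(2000, ratio=1.073)
print(f"f1 = {f1:.1f} Hz, 2f1-f2 = {dp_2f1_f2:.1f} Hz, 2f2-f1 = {dp_2f2_f1:.1f} Hz")
```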

