Subjective Assessment of Cochlear Implant Users’ Signal-to-Noise Ratio Requirements for Different Levels of Wireless Device Usability

2014 ◽  
Vol 25 (10) ◽  
pp. 952-968 ◽  
Author(s):  
Stephen Julstrom ◽  
Linda Kozma-Spytek

Background: To better inform the development and revision of the American National Standards Institute C63.19 and American National Standards Institute/Telecommunications Industry Association-1083 hearing aid compatibility standards, a previous study examined the signal strength and signal (speech)-to-noise (interference) ratio needs of hearing aid users when using wireless and cordless phones in the telecoil coupling mode. This study expands that examination to cochlear implant (CI) users, in both telecoil and microphone modes of use. Purpose: The purpose of this study was to evaluate the magnetic and acoustic signal levels needed by CI users for comfortable telephone communication and the users’ tolerance relative to the speech levels of various interfering wireless communication–related noise types. Research Design: This was a descriptive and correlational study. Simulated telephone speech and eight interfering noise types presented as continuous signals were linearly combined and presented together either acoustically or magnetically to the participants’ CIs. The participants could adjust the loudness of the telephone speech and the interfering noises based on several assigned criteria. Study Sample: The 21 test participants ranged in age from 23 to 81 yr. All used wireless phones with their CIs, and 15 also used cordless phones at home. Twelve participants normally used the telecoil mode for telephone communication, whereas nine used the implant’s microphone; all were tested accordingly. Data Collection and Analysis: A guided-intake questionnaire yielded general background information for each participant. A custom-built test control box fed by prepared speech-and-noise files enabled the tester or test participant, as appropriate, to switch between the various test signals and to precisely control the speech and noise levels independently. The tester, but not the test participant, could read and record the selected levels.
Subsequent analysis revealed the preferred speech levels, speech (signal)-to-noise ratios, and the effect of possible noise-measurement weighting functions. Results: The participants' preferred telephone speech levels subjectively matched or were somewhat lower than the level that they heard from a 65 dB SPL wideband reference. The mean speech (signal)-to-noise ratio requirement for them to consider their telephone experience “acceptable for normal use” was 20 dB, very similar to the results for the hearing aid users of the previous study. Significant differences in the participants’ apparent levels of noise tolerance among the noise types when the noise level was determined using A-weighting were eliminated when a CI-specific noise-measurement weighting was applied. Conclusions: The results for the CI users in terms of both preferred levels for wireless and cordless phone communication and signal-to-noise requirements closely paralleled the corresponding results for hearing aid users from the previous study, and showed no significant differences between the microphone and telecoil modes of use. Signal-to-noise requirements were directly related to the participants’ noise audibility threshold and were independent of noise type when appropriate noise-measurement weighting was applied. Extending the investigation to include noncontinuous interfering noises and forms of radiofrequency interference other than additive audiofrequency noise could be areas of future study.
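The weighting analysis described above can be illustrated numerically: a per-band weighting (in dB) is applied to the noise's band powers before summing to an overall level, and the speech (signal)-to-noise ratio is then a level difference in dB. This is a minimal sketch; the band layout, gain values, and function names are assumptions for illustration, not the study's A-weighting or CI-specific weighting curves.

```python
import numpy as np

def weighted_level_db(band_powers, weight_db):
    """Apply a per-band weighting (dB gains) to linear band powers
    and return the overall weighted level in dB."""
    band_powers = np.asarray(band_powers, dtype=float)
    gains = 10 ** (np.asarray(weight_db, dtype=float) / 10)
    return 10 * np.log10(np.sum(band_powers * gains))

def snr_db(speech_level_db, noise_level_db):
    """Speech (signal)-to-noise ratio in dB, e.g. a 65 dB speech level
    against a 45 dB weighted noise level gives the 20 dB criterion."""
    return speech_level_db - noise_level_db
```

With such a scheme, changing only the weighting curve changes the apparent noise level, which is how a CI-specific weighting can equalize the measured tolerance across noise types.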

Author(s):  
Mourad Talbi ◽  
Med Salim Bouhlel

Background: In this paper, we propose a secure image watermarking technique that is applied to grayscale and color images. It consists of applying the SVD (Singular Value Decomposition) in the Lifting Wavelet Transform domain to embed a speech signal (the watermark) into the host image. Methods: The technique also uses a signature in the embedding and extraction steps. Its performance is justified by the computation of PSNR (Peak Signal to Noise Ratio), SSIM (Structural Similarity), SNR (Signal to Noise Ratio), SegSNR (Segmental SNR), and PESQ (Perceptual Evaluation of Speech Quality). Results: The PSNR and SSIM are used to evaluate the perceptual quality of the watermarked image compared to the original image. The SNR, SegSNR, and PESQ are used to evaluate the perceptual quality of the reconstructed (extracted) speech signal compared to the original speech signal. Conclusion: The results obtained from the computation of PSNR, SSIM, SNR, SegSNR, and PESQ demonstrate the performance of the proposed technique.
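Of the metrics listed above, PSNR and SNR follow directly from mean squared error and can be sketched in a few lines; the helper names and the 8-bit peak value of 255 are illustrative assumptions (SSIM, SegSNR, and PESQ require dedicated implementations and are omitted here).

```python
import numpy as np

def psnr(original, distorted, peak=255.0):
    """Peak signal-to-noise ratio in dB between two images,
    assuming an 8-bit peak value by default."""
    original = np.asarray(original, dtype=float)
    distorted = np.asarray(distorted, dtype=float)
    mse = np.mean((original - distorted) ** 2)
    if mse == 0:
        return float("inf")  # identical images
    return 10 * np.log10(peak ** 2 / mse)

def snr(original, distorted):
    """Signal-to-noise ratio in dB: signal power over error power."""
    original = np.asarray(original, dtype=float)
    distorted = np.asarray(distorted, dtype=float)
    err = np.mean((original - distorted) ** 2)
    if err == 0:
        return float("inf")  # perfect reconstruction
    return 10 * np.log10(np.mean(original ** 2) / err)
```

PSNR is computed between host and watermarked images, while SNR is computed between the original and extracted speech signals.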


2020 ◽  
Vol 24 ◽  
pp. 233121652097034
Author(s):  
Florian Langner ◽  
Andreas Büchner ◽  
Waldo Nogueira

Cochlear implant (CI) sound processing typically uses a front-end automatic gain control (AGC), reducing the acoustic dynamic range (DR) to control the output level and protect the signal processing against large amplitude changes. It can also introduce distortions into the signal and does not allow a direct mapping between acoustic input and electric output. For speech in noise, a reduction in DR can result in lower speech intelligibility due to compressed modulations of speech. This study proposes to implement a CI signal processing scheme consisting of a full acoustic DR with adaptive properties to improve the signal-to-noise ratio and overall speech intelligibility. Measurements based on the Short-Time Objective Intelligibility measure and an electrodogram analysis, as well as behavioral tests in up to 10 CI users, were used to compare performance with a single-channel, dual-loop, front-end AGC and with an adaptive back-end multiband dynamic compensation system (Voice Guard [VG]). Speech intelligibility in quiet and at a +10 dB signal-to-noise ratio was assessed with the Hochmair–Schulz–Moser sentence test. A logatome discrimination task with different consonants was performed in quiet. Speech intelligibility was significantly higher in quiet for VG than for AGC, but intelligibility was similar in noise. Participants obtained significantly better scores with VG than AGC in the logatome discrimination task. The objective measurements predicted significantly better performance estimates for VG. Overall, a dynamic compensation system can outperform a single-stage compression (AGC + linear compression) for speech perception in quiet.
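A minimal sketch of the kind of single-channel front-end AGC the study compares against might look like the following: an envelope follower with separate attack and release time constants drives gain reduction above a threshold. The threshold, ratio, and time constants are illustrative placeholders, not the clinical processor's dual-loop parameters.

```python
import numpy as np

def simple_agc(x, fs, threshold_db=-20.0, ratio=3.0,
               attack_ms=5.0, release_ms=75.0):
    """Single-band feed-forward AGC sketch. Levels above threshold_db
    are compressed by `ratio`; parameter values are illustrative."""
    a_att = np.exp(-1.0 / (fs * attack_ms / 1000.0))
    a_rel = np.exp(-1.0 / (fs * release_ms / 1000.0))
    env = 0.0
    out = np.empty(len(x), dtype=float)
    for n, sample in enumerate(x):
        mag = abs(sample)
        # Fast tracking upward (attack), slow decay (release).
        coeff = a_att if mag > env else a_rel
        env = coeff * env + (1.0 - coeff) * mag
        level_db = 20 * np.log10(max(env, 1e-12))
        over = level_db - threshold_db
        gain_db = -over * (1.0 - 1.0 / ratio) if over > 0 else 0.0
        out[n] = sample * 10 ** (gain_db / 20.0)
    return out
```

Such a stage illustrates the trade-off the study targets: compressing large amplitude swings also compresses the speech modulations that carry intelligibility in noise.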


2002 ◽  
Vol 13 (01) ◽  
pp. 038-049 ◽  
Author(s):  
Gabrielle H. Saunders ◽  
Kathleen M. Cienkowski

Measurement of hearing aid outcome is particularly difficult because there are numerous dimensions to consider (e.g., performance, satisfaction, benefit). Often there are discrepancies between scores in these dimensions. It is difficult to reconcile these discrepancies because the materials and formats used to measure each dimension are so very different. We report data obtained with an outcome measure that examines both objective and subjective dimensions with the same test format and materials and gives results in the same unit of measurement (signal-to-noise ratio). Two variables are measured: a “performance” speech reception threshold and a “perceptual” speech reception threshold. The signal-to-noise ratio difference between these is computed to determine the perceptual-performance discrepancy (PPDIS). The results showed that, on average, 48 percent of the variance in subjective ratings of a hearing aid could be explained by a combination of the performance speech reception threshold and the PPDIS. These findings suggest that the measure is potentially a valuable clinical tool.
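Because both thresholds are expressed in the same unit, the PPDIS itself reduces to a level difference in dB signal-to-noise ratio. A one-line sketch follows; the sign convention (perceptual minus performance) is an assumption here, not a detail stated in the abstract.

```python
def ppdis(performance_srt_db, perceptual_srt_db):
    """Perceptual-performance discrepancy in dB SNR: the gap between
    the SNR at which a listener judges speech intelligible (perceptual)
    and the SNR at which they actually repeat it (performance).
    Sign convention is an illustrative assumption."""
    return perceptual_srt_db - performance_srt_db
```

For example, a listener who performs at -2 dB SNR but only judges speech acceptable at +1 dB SNR would have a PPDIS of 3 dB.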


2020 ◽  
Vol 24 ◽  
pp. 233121652093339
Author(s):  
Els Walravens ◽  
Gitte Keidser ◽  
Louise Hickson

Trainable hearing aids let users fine-tune their hearing aid settings in their own listening environment: Based on consistent user-adjustments and information about the acoustic environment, the trainable aids will change environment-specific settings to the user’s preference. A requirement for effective fine-tuning is consistency of preference for similar settings in similar environments. The aim of this study was to evaluate consistency of preference for settings differing in intensity, gain-frequency slope, and directionality when listening in simulated real-world environments and to determine if participants with more consistent preferences could be identified based on profile measures. A total of 52 adults (63–88 years) with hearing varying from normal to a moderate sensorineural hearing loss selected their preferred setting from pairs differing in intensity (3 or 6 dB), gain-frequency slope (±1.3 or ± 2.7 dB/octave), or directionality (omnidirectional vs. cardioid) in four simulated real-world environments: traffic noise, a monologue in traffic noise at 5 dB signal-to-noise ratio, and a dialogue in café noise at 5 and at 0 dB signal-to-noise ratio. Forced-choice comparisons were made 10 times for each combination of pairs of settings and environment. Participants also completed nine psychoacoustic, cognitive, and personality measures. Consistency of preference, defined by a setting preferred at least 9 out of 10 times, varied across participants. More participants obtained consistent preferences for larger differences between settings and less difficult environments. The profile measures did not predict consistency of preference. Trainable aid users could benefit from counselling to ensure realistic expectations for particular adjustments and listening situations.
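The 9-out-of-10 consistency criterion above can be checked directly, and the probability of meeting it by random guessing computed from the binomial distribution; the function names are illustrative.

```python
from math import comb

def is_consistent(choices, n_trials=10, min_count=9):
    """True if either setting in a pair was chosen at least min_count
    times out of n_trials forced-choice comparisons (9/10 in the study).
    `choices` holds 1 for setting A, 0 for setting B."""
    wins = sum(choices)
    return max(wins, n_trials - wins) >= min_count

def chance_probability(n_trials=10, min_count=9):
    """Two-sided probability of reaching the consistency criterion by
    guessing (p = 0.5 per comparison)."""
    tail = sum(comb(n_trials, k) for k in range(min_count, n_trials + 1))
    return 2 * tail / 2 ** n_trials
```

Under this criterion, a guessing listener reaches "consistency" only about 2% of the time, so preferences meeting it are unlikely to be chance.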


2019 ◽  
Vol 28 (1) ◽  
pp. 101-113 ◽  
Author(s):  
Jenna M. Browning ◽  
Emily Buss ◽  
Mary Flaherty ◽  
Tim Vallier ◽  
Lori J. Leibold

Purpose The purpose of this study was to evaluate speech-in-noise and speech-in-speech recognition associated with activation of a fully adaptive directional hearing aid algorithm in children with mild to severe bilateral sensory/neural hearing loss. Method Fourteen children (5–14 years old) who are hard of hearing participated in this study. Participants wore laboratory hearing aids. Open-set word recognition thresholds were measured adaptively for 2 hearing aid settings: (a) omnidirectional (OMNI) and (b) fully adaptive directionality. Each hearing aid setting was evaluated in 3 listening conditions. Fourteen children with normal hearing served as age-matched controls. Results Children who are hard of hearing required a more advantageous signal-to-noise ratio than children with normal hearing to achieve comparable performance in all 3 conditions. For children who are hard of hearing, the average improvement in signal-to-noise ratio when comparing fully adaptive directionality to OMNI was 4.0 dB in noise, regardless of target location. Children performed similarly with fully adaptive directionality and OMNI settings in the presence of the speech maskers. Conclusions Compared to OMNI, fully adaptive directionality improved speech recognition in steady noise for children who are hard of hearing, even when they were not facing the target source. This algorithm did not affect speech recognition when the background noise was speech. Although the use of hearing aids with fully adaptive directionality is not proposed as a substitute for remote microphone systems, it appears to offer several advantages over fixed directionality, because it does not depend on children facing the target talker and provides access to multiple talkers within the environment. Additional experiments are required to further evaluate children's performance under a variety of spatial configurations in the presence of both noise and speech maskers.


2021 ◽  
pp. 019459982110492
Author(s):  
Allan M. Henslee ◽  
Christopher R. Kaufmann ◽  
Matt D. Andrick ◽  
Parker T. Reineke ◽  
Viral D. Tejani ◽  
...  

Objective Electrocochleography (ECochG) is increasingly being used during cochlear implant (CI) surgery to detect and mitigate insertion-related intracochlear trauma, where a drop in ECochG signal has been shown to correlate with a decline in hearing outcomes. In this study, an ECochG-guided robotics-assisted CI insertion system was developed and characterized that provides controlled and consistent electrode array insertions while monitoring and adapting to real-time ECochG signals. Study Design Experimental research. Setting A research laboratory and animal testing facility. Methods A proof-of-concept benchtop study evaluated the ability of the system to detect simulated ECochG signal changes and robotically adapt the insertion. Additionally, the ECochG-guided insertion system was evaluated in a pilot in vivo sheep study to characterize the signal-to-noise ratio and amplitude of ECochG recordings during robotics-assisted insertions. The system comprises an electrode array insertion drive unit, an extracochlear recording electrode module, and a control console that interfaces with both components and the surgeon. Results The system exhibited a microvolt signal resolution and a response time <100 milliseconds after signal change detection, indicating that the system can detect changes and respond faster than a human. Additionally, animal results demonstrated that the system was capable of recording ECochG signals with a high signal-to-noise ratio and sufficient amplitude. Conclusion An ECochG-guided robotics-assisted CI insertion system can detect real-time drops in ECochG signals during electrode array insertions and immediately alter the insertion motion. The system may provide a surgeon the means to monitor and reduce CI insertion–related trauma beyond manual insertion techniques for improved CI hearing outcomes.
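The drop-detection logic described above can be sketched as a running-maximum monitor over successive ECochG amplitude measurements; the 30% drop criterion below is an assumed placeholder, since the abstract does not state the system's actual threshold.

```python
def detect_signal_drop(amplitudes, drop_fraction=0.3):
    """Return the index of the first ECochG amplitude that falls more
    than drop_fraction below the running maximum seen so far, or None
    if no such drop occurs. drop_fraction is an illustrative value."""
    running_max = float("-inf")
    for i, amp in enumerate(amplitudes):
        running_max = max(running_max, amp)
        if running_max > 0 and amp < (1.0 - drop_fraction) * running_max:
            return i  # signal change detected; halt or reverse insertion
    return None
```

In the robotic system, such a detection would trigger an insertion-motion change within the reported sub-100-millisecond response window.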


2021 ◽  
pp. 2784-2795
Author(s):  
Esraa Abd Alsalam ◽  
Shaymaa Ahmed Razoqi ◽  
Eman Fathi Ahmed

Compression of the speech signal is an essential field in signal processing. Speech compression is very important today because of limited transmission bandwidth and storage capacity. This paper explores a Contourlet-transform-based methodology for compressing the speech signal. In this methodology, the speech signal is analysed into Contourlet transform coefficients, which are thresholded using statistical measures such as the interquartile range (IQR), average absolute deviation (AAD), median absolute deviation (MAD), and standard deviation (STD), followed by run-length encoding. The methods are evaluated on speech recordings of different durations (5, 30, and 120 seconds). A comparative study of performance is made in terms of signal-to-noise ratio, peak signal-to-noise ratio, and normalized cross-correlation, together with the compression ratio (CR). The best stable result of the proposed algorithm is obtained at level 1 with AAD or MAD thresholds, implemented in MATLAB 2013a.
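The thresholding-plus-run-length-encoding stage can be sketched as follows. The mapping from each statistic to a cutoff is an assumption for illustration (the paper's exact rule is not given in the abstract), and a real implementation would operate on actual Contourlet coefficients rather than an arbitrary array.

```python
import numpy as np

def threshold_coefficients(coeffs, method="mad"):
    """Zero transform coefficients whose magnitude falls below a
    statistic of the coefficient distribution (IQR/AAD/MAD/STD).
    The statistic-to-cutoff mapping here is a sketch."""
    c = np.asarray(coeffs, dtype=float)
    if method == "iqr":
        q1, q3 = np.percentile(c, [25, 75])
        cutoff = q3 - q1
    elif method == "aad":
        cutoff = np.mean(np.abs(c - np.mean(c)))
    elif method == "mad":
        cutoff = np.median(np.abs(c - np.median(c)))
    elif method == "std":
        cutoff = np.std(c)
    else:
        raise ValueError(f"unknown method: {method}")
    out = c.copy()
    out[np.abs(out) < cutoff] = 0.0
    return out

def run_length_encode(values):
    """Run-length encode a sequence as (value, count) pairs; the long
    zero runs created by thresholding are what make this effective."""
    runs = []
    for v in values:
        if runs and runs[-1][0] == v:
            runs[-1][1] += 1
        else:
            runs.append([v, 1])
    return [(v, n) for v, n in runs]
```

Thresholding discards perceptually minor coefficients, and the resulting zero runs are what the run-length coder exploits to raise the compression ratio.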

