Performance of a Fully Adaptive Directional Microphone to Signals Presented from Various Azimuths

2005 · Vol 16 (06) · pp. 333-347
Author(s): Francis Kuk, Denise Keenan, Chi-Chuen Lau, Carl Ludvigsen

The signal-to-noise ratio (SNR) advantage of a directional microphone is achieved by reducing the sensitivity of the microphone to sounds from the sides and back. A fully adaptive directional microphone (one that automatically switches between an omnidirectional mode and various directional polar patterns) may achieve SNR improvement with minimal loss of audibility for sounds that originate from the sides and back. To examine this possibility, this study compared the soundfield aided thresholds, speech recognition in quiet at different input levels, and speech-in-noise performance of 17 hearing-impaired participants under three microphone modes (omnidirectional, fixed hypercardioid, and fully [or automatically] adaptive) as the stimuli were presented from 0° to 180° in 45° intervals. The results showed a significant azimuth effect only with the fixed directional microphone. In quiet, the fully adaptive microphone performed similarly to the omnidirectional microphone at all frequencies, input levels, and azimuths. In noise, the fully adaptive microphone achieved SNR improvement similar to that of the fixed directional microphone. Clinical implications of these results are discussed.
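The omnidirectional and hypercardioid modes compared in this study differ only in the coefficient of a first-order polar pattern. As a rough sketch (not the study's actual hearing aid processing), the sensitivity at each presentation azimuth can be computed as follows; the hypercardioid coefficient a = 0.25 is the textbook value for that pattern, assumed here rather than taken from the paper:

```python
import math

def first_order_response(theta_deg, a):
    """Sensitivity of a first-order microphone: r(theta) = a + (1 - a)*cos(theta).
    a = 1.0 gives an omnidirectional pattern; a = 0.25 gives a hypercardioid."""
    theta = math.radians(theta_deg)
    return a + (1 - a) * math.cos(theta)

def response_db(theta_deg, a):
    """Sensitivity relative to on-axis (0 degrees), in dB."""
    r = abs(first_order_response(theta_deg, a))
    return -math.inf if r == 0 else 20 * math.log10(r)

# The study's presentation azimuths: 0-180 degrees in 45-degree steps
for az in range(0, 181, 45):
    omni = response_db(az, 1.0)    # omnidirectional: 0 dB at every azimuth
    hyper = response_db(az, 0.25)  # fixed hypercardioid: attenuates sides/back
    print(f"{az:3d} deg  omni {omni:6.1f} dB  hypercardioid {hyper:6.1f} dB")
```

The table this prints shows why a fixed hypercardioid trades audibility for SNR: sounds near 90°-135° are attenuated by roughly 11-12 dB, which is exactly the loss an adaptive system tries to avoid in quiet.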

2004 · Vol 116 (4) · pp. 2395-2405
Author(s): Mead C. Killion, Patricia A. Niquette, Gail I. Gudmundsen, Lawrence J. Revit, Shilpi Banerjee

1980 · Vol 23 (3) · pp. 603-613
Author(s): Robert H. Margolis, Seth M. Goldberg

Auditory frequency selectivity was inferred from measurements of the detectability of tonal signals as a function of the cutoff frequency of a low-pass computer-generated noise masker. In Experiment I, the effect of small changes in signal-to-noise ratio on inferred auditory frequency selectivity was studied. In Experiment II, frequency selectivity was determined for five normal-hearing subjects and four subjects with sensorineural hearing loss due to presbycusis. Critical ratios (the signal-to-noise ratio at masked threshold) were also determined in Experiment II. The results of Experiment I indicate that the low-pass masking experiment provides a stable estimate of the width, but not the position, of the critical masking band. Experiment II revealed elevated critical ratios for three of the four presbycusic subjects. Some hearing-impaired subjects appeared to have normal frequency selectivity despite elevated critical ratios; other presbycusic subjects demonstrated impaired auditory frequency selectivity. The results suggest that critical ratio and critical masking band data can vary independently in hearing-impaired subjects.
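The critical ratio defined above is a simple difference in decibels, and under Fletcher's classical equal-power assumption it maps onto an equivalent masking bandwidth. A minimal sketch with hypothetical numbers (not data from this study):

```python
def critical_ratio_db(masked_threshold_db_spl, noise_spectrum_level_db):
    """Critical ratio: tone level at masked threshold minus the masker's
    noise spectrum level (dB SPL per Hz)."""
    return masked_threshold_db_spl - noise_spectrum_level_db

def equivalent_band_hz(cr_db):
    """Fletcher's equal-power assumption: a critical ratio of CR dB implies
    an equivalent rectangular masking bandwidth of 10**(CR/10) Hz."""
    return 10 ** (cr_db / 10)

# Hypothetical example: a tone detected at 52 dB SPL in noise whose
# spectrum level is 30 dB SPL/Hz
cr = critical_ratio_db(52, 30)
print(cr, equivalent_band_hz(cr))  # 22 dB -> ~158 Hz
```

An elevated critical ratio, as found for three presbycusic subjects here, would inflate this inferred bandwidth even when the directly measured critical masking band is normal, which is the dissociation the abstract reports.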


2010 · Vol 128 (4) · pp. 2426-2426
Author(s): Peggy B. Nelson, Yingjiu Nie, Elizabeth Crump Anderson, Bhagyashree Katare

2005 · Vol 48 (5) · pp. 1165-1186
Author(s): Tracy S. Fitzgerald, Beth A. Prieve

Although many distortion-product otoacoustic emissions (DPOAEs) may be measured in the ear canal in response to 2 pure tone stimuli, the majority of clinical studies have focused exclusively on the DPOAE at the frequency 2f1-f2. This study investigated another DPOAE, 2f2-f1, in an attempt to determine the following: (a) the optimal stimulus parameters for its clinical measurement and (b) its utility in differentiating between normal-hearing and hearing-impaired ears at low-to-mid frequencies (≤2000 Hz) when measured either alone or in conjunction with the 2f1-f2 DPOAE. Two experiments were conducted. In Experiment 1, the effects of primary level, level separation, and frequency separation (f2/f1) on 2f2-f1 DPOAE level were evaluated in normal-hearing ears for low-to-mid f2 frequencies (700–2000 Hz). Moderately high-level primaries (60–70 dB SPL) presented at equal levels or with f2 slightly higher than f1 produced the highest 2f2-f1 DPOAE levels. When the f2/f1 ratio that produced the highest 2f2-f1 DPOAE levels was examined across participants, the mean optimal f2/f1 ratio across f2 frequencies and primary level separations was 1.08. In Experiment 2, the accuracy with which DPOAE level or signal-to-noise ratio identified hearing status at the f2 frequency as normal or impaired was evaluated using clinical decision analysis. The 2f2-f1 and 2f1-f2 DPOAEs were measured from both normal-hearing and hearing-impaired ears using 2 sets of stimulus parameters: (a) the traditional parameters for measuring the 2f1-f2 DPOAE (f2/f1 = 1.22; L1, L2 = 65, 55 dB SPL) and (b) the new parameters that were deemed optimal for the 2f2-f1 DPOAE in Experiment 1 (f2/f1 = 1.073, L1 and L2 = 65 dB SPL). Identification of hearing status using 2f2-f1 DPOAE level and signal-to-noise ratio was more accurate when the new stimulus parameters were used compared with the results achieved when the 2f2-f1 DPOAE was recorded using the traditional parameters. However, identification of hearing status was less accurate for the 2f2-f1 DPOAE measured using the new parameters than for the 2f1-f2 DPOAE measured using the traditional parameters. No statistically significant improvements in test performance were achieved when the information from the 2 DPOAEs was combined, either by summing the DPOAE levels or by using logistic regression analysis.
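The two distortion products compared in this study are fixed arithmetic combinations of the primary frequencies f1 and f2, so the stimulus ratio pins down exactly where each DPOAE falls. A small sketch using the two ratios named in the abstract:

```python
def dpoae_frequencies(f2_hz, ratio):
    """Given f2 and the primary-frequency ratio f2/f1, return the two cubic
    distortion products discussed above: 2*f1 - f2 and 2*f2 - f1."""
    f1 = f2_hz / ratio
    return {"f1": f1, "2f1-f2": 2 * f1 - f2_hz, "2f2-f1": 2 * f2_hz - f1}

# Traditional 2f1-f2 parameters (f2/f1 = 1.22) vs. the ratio found optimal
# for 2f2-f1 in Experiment 1 (f2/f1 = 1.073), for an f2 of 2000 Hz
print(dpoae_frequencies(2000, 1.22))
print(dpoae_frequencies(2000, 1.073))
```

Note how the narrow 1.073 ratio pulls f1, 2f1-f2, and 2f2-f1 close together near f2, whereas the traditional 1.22 ratio spreads them apart; the measurement levels and clinical interpretation are, of course, the study's, not this sketch's.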


1992 · Vol 35 (4) · pp. 942-949
Author(s): Christopher W. Turner, David A. Fabry, Stephanie Barrett, Amy R. Horwitz

This study examined the possibility that hearing-impaired listeners, in addition to displaying poorer-than-normal recognition of speech presented in background noise, require a larger signal-to-noise ratio for the detection of speech sounds. Psychometric functions for the detection and recognition of stop consonants were obtained from both normal-hearing and hearing-impaired listeners. When the speech levels were expressed in terms of their short-term spectra, the detection of consonants occurred at the same signal-to-noise ratio for both subject groups. In contrast, the hearing-impaired listeners displayed poorer recognition performance than the normal-hearing listeners. These results imply that the higher signal-to-noise ratios required for a given level of recognition by some subjects with hearing loss are not due, even in part, to a deficit in detection of the signals in the masking noise, but rather reflect a deficit in recognition alone.


1993 · Vol 2 (2) · pp. 47-51
Author(s): Edgar Villchur

Hearing aid design to alleviate the noise problem has concentrated on improving the signal-to-noise ratio at the aid's output, using devices such as directional microphones, adaptive filters, and circuits that discriminate between steady-state noise and speech. The design approach discussed here instead aims to improve the speech recognition of hearing-impaired listeners at a given signal-to-noise ratio, by restoring to their perception speech cues they no longer hear because of their impairment. This allows them to retain more of the redundant information in speech after masking has taken its toll, and strengthens their ability to separate desired from undesired signals (what Broadbent calls "selective listening" in persons with normal hearing). Experimental results are presented.


2018 · Vol 27 (1) · pp. 95-103
Author(s): Adriana Goyette, Jeff Crukley, Jason Galster

Purpose: Directional microphone systems are typically used to improve hearing aid users' understanding of speech in noise. However, directional microphones also increase internal hearing aid noise. The purpose of this study was to investigate how varying directional microphone bandwidth affected listening preference and speech-in-noise performance. Method: Ten participants with normal hearing and 10 participants with hearing impairment compared internal noise levels between hearing aid memories with 4 different microphone modes: omnidirectional, full directional, high-frequency directionality with directional processing above 900 Hz, and high-frequency directionality with directional processing above 2000 Hz. Speech-in-noise performance was measured with each memory for the participants with hearing impairment. Results: Participants with normal hearing preferred the memories with less directional bandwidth. Participants with hearing impairment also tended to prefer the memories with less directional bandwidth; however, the majority indicated no preference between the omnidirectional memory and the memory with directional processing above 2000 Hz. Average speech-in-noise performance improved with increasing directional bandwidth. Conclusions: Most participants preferred memories with less directional bandwidth in quiet. Participants with hearing impairment indicated no difference in preference between the memory with directional processing above 2000 Hz and the omnidirectional memory. Speech recognition in noise improved with increasing directional bandwidth.
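The band-split conditions in this study (directional processing only above a crossover frequency) can be sketched as a per-band gain rule. The hard crossover and the hypercardioid pattern below are illustrative assumptions, not the study's actual signal processing:

```python
import math

def band_split_gain_db(freq_hz, azimuth_deg, crossover_hz):
    """Sketch of band-split directionality: omnidirectional below the
    crossover frequency, hypercardioid (first-order, a = 0.25) above it."""
    if freq_hz < crossover_hz:
        return 0.0  # omni band: no azimuth-dependent attenuation
    r = abs(0.25 + 0.75 * math.cos(math.radians(azimuth_deg)))
    return 20 * math.log10(r) if r > 0 else -math.inf

# A rear-arriving (180 deg) sound at 500 Hz and 4 kHz under the two
# crossover frequencies compared in the study (900 Hz and 2000 Hz)
for xover in (900, 2000):
    low = band_split_gain_db(500, 180, xover)
    high = band_split_gain_db(4000, 180, xover)
    print(f"crossover {xover} Hz: 500 Hz -> {low:.1f} dB, 4 kHz -> {high:.1f} dB")
```

The sketch illustrates the trade-off the study measured: raising the crossover leaves more of the spectrum omnidirectional (less internal noise, less attenuation of off-axis sound) but shrinks the bandwidth over which the directional SNR benefit applies.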

