The influence of presentation level and relative amplitude on the perception of place of articulation of stop consonants in normal hearing and hearing‐impaired listeners

1992 ◽  
Vol 92 (4) ◽  
pp. 2464-2464
Author(s):  
Mark Hedrick ◽  
Laura Schulte


1997 ◽  
Vol 40 (4) ◽  
pp. 925-938 ◽  
Author(s):  
Mark Hedrick

Previous studies have shown that manipulation of frication amplitude relative to vowel amplitude in the third formant frequency region affects labeling of place of articulation for the fricative contrast /s/-/∫/ (Hedrick & Ohde, 1993; Stevens, 1985). The current study examined the influence of this relative amplitude manipulation in conjunction with presentation level, frication duration, and formant transition cues on labeling of fricative place of articulation by listeners with normal hearing and listeners with sensorineural hearing loss. Synthetic consonant-vowel (CV) stimuli were used in which the amplitude of the frication relative to the vowel onset amplitude in the third formant frequency region was manipulated across a 20 dB range. The listeners with hearing loss appeared to have more difficulty than most listeners with normal hearing in using the formant transition component, as opposed to the relative amplitude component, for the labeling task. A second experiment was performed with the same stimuli in which the listeners were given one additional labeling response alternative, the affricate /t∫/. Results from this experiment showed that listeners with normal hearing gave more /t∫/ labels as relative amplitude and presentation level increased and frication duration decreased. There was a significant difference between the two groups in the number of affricate responses, as listeners with hearing loss gave fewer /t∫/ labels.
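The relative amplitude manipulation described above can be sketched as a band-limited level adjustment: measure the frication's level in the third-formant band relative to the vowel onset, then rescale the frication to hit a target relative amplitude. The band edges and sampling rate below are illustrative assumptions, not values from the study:

```python
import numpy as np

def band_rms(x, fs, f_lo, f_hi):
    """RMS of x restricted to [f_lo, f_hi] Hz via an FFT brick-wall band."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    band = spec * ((freqs >= f_lo) & (freqs <= f_hi))
    return np.sqrt(np.mean(np.fft.irfft(band, len(x)) ** 2))

def set_relative_amplitude(frication, vowel_onset, fs, target_db,
                           f_lo=2200.0, f_hi=3200.0):
    """Scale the frication so that its level in the band exceeds the
    vowel-onset level in the same band by target_db dB.
    The F3-region band edges here are hypothetical."""
    cur_db = 20 * np.log10(band_rms(frication, fs, f_lo, f_hi) /
                           band_rms(vowel_onset, fs, f_lo, f_hi))
    gain = 10 ** ((target_db - cur_db) / 20)
    return frication * gain
```

Stepping `target_db` over a 20 dB range would generate a stimulus continuum of the kind the study used.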


1992 ◽  
Vol 35 (4) ◽  
pp. 942-949 ◽  
Author(s):  
Christopher W. Turner ◽  
David A. Fabry ◽  
Stephanie Barrett ◽  
Amy R. Horwitz

This study examined the possibility that hearing-impaired listeners, in addition to displaying poorer-than-normal recognition of speech presented in background noise, require a larger signal-to-noise ratio for the detection of the speech sounds. Psychometric functions for the detection and recognition of stop consonants were obtained from both normal-hearing and hearing-impaired listeners. When the speech levels were expressed in terms of their short-term spectra, detection of consonants occurred at the same signal-to-noise ratio for both subject groups. In contrast, the hearing-impaired listeners displayed poorer recognition performance than the normal-hearing listeners. These results imply that the higher signal-to-noise ratios required for a given level of recognition by some subjects with hearing loss are not due, even in part, to a deficit in detection of the signals in the masking noise, but rather are due exclusively to a deficit in recognition.
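Expressing speech and masker levels through their short-term spectra, as above, amounts to comparing frame-averaged power spectra rather than long-term waveform levels. A minimal sketch, with an assumed 20 ms analysis window:

```python
import numpy as np

def short_term_snr_db(speech, noise, fs, win_ms=20.0):
    """SNR in dB from short-term power spectra averaged over frames.
    A simplified sketch; the 20 ms Hann window is an assumption,
    not the analysis used in the study."""
    n = int(fs * win_ms / 1000)
    win = np.hanning(n)

    def mean_power(x):
        # Half-overlapping windowed frames, averaged power spectrum.
        frames = [x[i:i + n] * win for i in range(0, len(x) - n + 1, n // 2)]
        return np.mean([np.abs(np.fft.rfft(f)) ** 2 for f in frames], axis=0)

    ps, pn = mean_power(speech), mean_power(noise)
    return 10 * np.log10(ps.sum() / pn.sum())
```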


2010 ◽  
Vol 21 (08) ◽  
pp. 493-511
Author(s):  
Amanda J. Ortmann ◽  
Catherine V. Palmer ◽  
Sheila R. Pratt

Background: A possible voicing cue used to differentiate voiced and voiceless cognate pairs is envelope onset asynchrony (EOA). EOA is the time between the onsets of two frequency bands of energy (in this study one band was high-pass filtered at 3000 Hz, the other low-pass filtered at 350 Hz). This study assessed the perceptual impact of manipulating EOA on voicing perception of initial stop consonants, and whether normal-hearing and hearing-impaired listeners were sensitive to changes in EOA as a cue for voicing. Purpose: The purpose of this study was to examine the effect of spectrally asynchronous auditory delay on the perception of voicing associated with initial stop consonants by normal-hearing and hearing-impaired listeners. Research Design: Prospective experimental study comparing the perceptual differences of manipulating the EOA cues for two groups of listeners. Study Sample: Thirty adults between the ages of 21 and 60 yr completed the study: 17 listeners with normal hearing and 13 listeners with mild-moderate sensorineural hearing loss. Data Collection and Analysis: The participants listened to voiced and voiceless stop consonants within a consonant-vowel syllable structure. The EOA of each syllable was varied along a continuum, and identification and discrimination tasks were used to determine if the EOA manipulation resulted in categorical shifts in voicing perception. In the identification task the participants identified the consonants as belonging to one of two categories (voiced or voiceless cognate). They also completed a same-different discrimination task with the same set of stimuli. Categorical perception was confirmed with a d-prime sensitivity measure by examining how accurately the results from the identification task predicted the discrimination results. The influence of EOA manipulations on the perception of voicing was determined from shifts in the identification functions and discrimination peaks along the EOA continuum. 
The two participant groups were compared in order to determine the impact of EOA on voicing perception as a function of syllable and hearing status. Results: Both groups of listeners demonstrated a categorical shift in voicing perception with manipulation of EOA for some of the syllables used in this study. That is, as the temporal onset asynchrony between low- and high-frequency bands of speech was manipulated, the listeners' perception of consonant voicing changed between voiced and voiceless categories. No significant differences were found between listeners with normal hearing and listeners with hearing loss as a result of the EOA manipulation. Conclusions: The results of this study suggested that both normal-hearing and hearing-impaired listeners likely use spectrally asynchronous delays found in natural speech as a cue for voicing distinctions. While delays in modern hearing aids are less than those used in this study, possible implications are that additional asynchronous delays from digital signal processing or open-fitting amplification schemes might cause listeners with hearing loss to misperceive voicing cues.
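The EOA manipulation described above (shifting the onset of one frequency band relative to the other) can be sketched with brick-wall FFT filters standing in for the study's 350 Hz low-pass and 3000 Hz high-pass filters; the filter shapes and the integer-sample delay are simplifying assumptions:

```python
import numpy as np

def fft_filter(x, fs, f_lo, f_hi):
    """Brick-wall band-pass via the FFT (a stand-in for real filters)."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0
    return np.fft.irfft(spec, len(x))

def apply_eoa(syllable, fs, eoa_ms):
    """Delay the onset of the high band (>3000 Hz) relative to the low
    band (<350 Hz) by eoa_ms; negative values make the high band lead."""
    low = fft_filter(syllable, fs, 0.0, 350.0)
    high = fft_filter(syllable, fs, 3000.0, fs / 2)
    shift = int(round(fs * eoa_ms / 1000))
    delayed = np.zeros_like(high)
    if shift >= 0:
        delayed[shift:] = high[:len(high) - shift]
    else:
        delayed[:shift] = high[-shift:]
    return low + delayed
```

Varying `eoa_ms` along a continuum and collecting voiced/voiceless labels at each step would reproduce the identification-task design.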


2001 ◽  
Vol 44 (5) ◽  
pp. 964-974 ◽  
Author(s):  
Mark Hedrick ◽  
Mary Sue Younger

The current study explored the changes in weighting of relative amplitude and formant transition cues that may be caused by a K-amp circuit. Twelve listeners with normal hearing and 3 listeners with sensorineural hearing loss labeled the stop consonant place of articulation of synthetic consonant-vowel stimuli. Within the stimuli, two acoustic cues were varied: the frequency of the onset of the second and third formant (F2/F3) transitions and the relative amplitude between the consonant burst and the following vowel in the fourth and fifth formant (F4/F5) frequency region. The variation in the two cues ranged from values appropriate for a voiceless labial stop consonant to a voiceless alveolar stop consonant. The listeners labeled both the unaided stimuli and the stimuli recorded through a hearing aid with a K-amp circuit. An analysis of variance (ANOVA) model was used to calculate the perceptual weight given to each cue. Data from listeners with normal hearing show a change in relative weighting of cues between aided and unaided stimuli. Pilot data from the listeners with hearing loss show a more varied pattern, with more weight placed on relative amplitude. These results suggest that calculation of perceptual weights using an ANOVA model may be worthwhile in future studies examining the relationship between acoustic information presented by a hearing aid and the subsequent perception by the listener with hearing loss.
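The ANOVA-style weighting described above can be sketched by partitioning the variability of the labeling proportions across the two cue factors of a factorial stimulus grid; interaction terms are omitted here for brevity, and the partition is an illustrative simplification rather than the authors' exact model:

```python
import numpy as np

def perceptual_weights(p_alveolar):
    """p_alveolar: 2-D array of P('alveolar') over a factorial grid,
    rows = formant-transition steps, cols = relative-amplitude steps.
    Returns (transition weight, amplitude weight), each main effect's
    share of the combined main-effect sums of squares."""
    grand = p_alveolar.mean()
    row_means = p_alveolar.mean(axis=1)
    col_means = p_alveolar.mean(axis=0)
    n_rows, n_cols = p_alveolar.shape
    ss_trans = n_cols * np.sum((row_means - grand) ** 2)
    ss_amp = n_rows * np.sum((col_means - grand) ** 2)
    total = ss_trans + ss_amp
    return ss_trans / total, ss_amp / total
```

A listener whose labels track only relative amplitude would yield weights near (0, 1); equal reliance on both cues would yield (0.5, 0.5).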


1982 ◽  
Vol 25 (4) ◽  
pp. 600-607 ◽  
Author(s):  
Andre-Pierre Benguerel ◽  
Margaret Kathleen Pichora-Fuller

Normal-hearing and hearing-impaired subjects with good lipreading skills lipread videotaped material under visual-only conditions. V₁CV₂ utterances were used where V could be /i/, /æ/, or /u/ and C could be /p/, /t/, /k/, /t∫/, /f/, /Θ/, /s/, /∫/, or /w/. Coarticulatory effects were present in these stimuli. The influence of phonetic context on lipreading scores for each V and C was analyzed in an effort to explain some of the variability in the visual perception of phonemes suggested by the existing literature. Transmission of information for four phonetic features was also analyzed. Lipreading performance was nearly perfect for /p/, /f/, /w/, /Θ/, and /u/. Lipreading performance on /t/, /k/, /t∫/, /∫/, /s/, /i/, and /æ/ depended on context. The features labial, rounded, and alveolar or palatal place of articulation were found to transmit more information to lipreaders than did the feature continuant. Variability in articulatory parameters resulting from coarticulatory effects appears to increase overall lipreading difficulty.


1997 ◽  
Vol 40 (6) ◽  
pp. 1445-1457 ◽  
Author(s):  
Mark S. Hedrick ◽  
Arlene Earley Carney

Previous studies have shown that manipulation of a particular frequency region of the consonantal portion of a syllable relative to the amplitude of the same frequency region in an adjacent vowel influences the perception of place of articulation. This manipulation has been called the relative amplitude cue. Earlier studies have examined the effect of relative amplitude and formant transition manipulations upon labeling place of articulation for fricatives and stop consonants in listeners with normal hearing. The current study sought to determine if (a) the relative amplitude cue is used by adult listeners wearing a cochlear implant to label place of articulation, and (b) adult listeners wearing a cochlear implant integrated the relative amplitude and formant transition information differently than listeners with normal hearing. Sixteen listeners participated in the study, 12 with normal hearing and 4 postlingually deafened adults wearing the Nucleus 22 electrode Mini Speech Processor implant with the multipeak processing strategy. The stimuli used were synthetic consonant-vowel (CV) syllables in which relative amplitude and formant transitions were manipulated. The two speech contrasts examined were the voiceless fricative contrast /s/-/∫/ and the voiceless stop consonant contrast /p/-/t/. For each contrast, listeners were asked to label the consonant sound in the syllable from the two response alternatives. Results showed that (a) listeners wearing this implant could use relative amplitude to consistently label place of articulation, and (b) listeners with normal hearing integrated the relative amplitude and formant transition information more than listeners wearing a cochlear implant, who weighted the relative amplitude information as much as 13 times that of the transition information.


2013 ◽  
Vol 24 (01) ◽  
pp. 017-025 ◽  
Author(s):  
Karrie L. Recker ◽  
Brent W. Edwards

Background: Acceptable noise level (ANL) is a measure of the maximum amount of background noise that a listener is willing to “put up with” while listening to running speech. This test is unique in that it can predict with a high degree of accuracy who will be a successful hearing-aid wearer. Individuals who tolerate high levels of background noise are generally successful hearing-aid wearers, whereas individuals who do not tolerate background noise well are generally unsuccessful hearing-aid wearers. Purpose: Various studies have been unsuccessful in trying to relate ANLs to listener characteristics or other test results. Presumably, understanding the perceptual mechanism by which listeners determine their ANLs could provide an understanding of the ANL's unique predictive abilities and our current inability to correlate these results with other listener attributes or test results. As a first step in investigating this problem, ANLs were compared with other threshold measures in which listeners adjust the signal-to-noise ratio (SNR) according to some criterion, much as in the ANL measure. Research Design and Study Sample: Ten normal-hearing and 10 hearing-impaired individuals participated in a laboratory experiment that followed a within-subjects, repeated-measures design. Data Collection and Analysis: Participants were seated in a sound booth. Running speech and noise (eight-talker babble) were presented from a loudspeaker at 0°, 3 ft in front of the participant. Individuals adjusted either the level of the speech or the level of the background noise. Specifically, with the speech fixed at different levels (50, 63, 75, or 88 dBA), participants performed the ANL task, in which they adjusted the level of the background noise to the maximum level at which they were willing to listen while following the speech. 
With the noise fixed at different levels (50, 60, 70, or 80 dBA), participants adjusted the level of the speech to the minimum, preferred, or maximum levels at which they were willing to listen while following the speech. Additionally, for the minimum acceptable speech level task, each participant was tested at four participant-specific noise levels, based on his/her ANL results. To emphasize that the speech level was adjusted in these measurements, three new terms were coined: “minimum acceptable speech level” (MinASL), “preferred speech level” (PSL), and “maximum acceptable speech level” (MaxASL). Each condition was presented twice, and the results were averaged. Test order and presentation level were randomized. Hearing-impaired participants were tested in the aided condition only. Results: For most participants, as the presentation level increased, SNRs increased for the ANL test but decreased for the MinASL, PSL, and MaxASL tests. For a few participants, ANLs were similar to MinASLs. For most test conditions, the normal-hearing results were not significantly different from those of the hearing-impaired participants. Conclusions: For most participants, stimulus level affected the SNRs at which they were willing to listen. However, a subset of listeners was willing to listen at a constant SNR for the ANL and MinASL tests. Furthermore, for these individuals, ANLs and MinASLs were roughly equal, suggesting that these individuals may have used the same perceptual criterion for both tests.
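Because all of these tasks fix one level and let the listener adjust the other, every quantity compared above is a simple dB difference; a sketch of the two SNR definitions (function names are mine, not terminology from the study beyond ANL/MinASL):

```python
def anl_snr(speech_db, accepted_noise_db):
    """SNR at the ANL point: the fixed speech level minus the maximum
    background-noise level the listener accepts (both in dBA)."""
    return speech_db - accepted_noise_db

def minasl_snr(accepted_speech_db, noise_db):
    """SNR at the MinASL point: the minimum acceptable speech level
    the listener sets minus the fixed noise level (both in dBA)."""
    return accepted_speech_db - noise_db
```

A listener holding a constant criterion SNR across both tasks, as the subset of participants described below appeared to do, would produce roughly equal values from both functions.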


1988 ◽  
Vol 83 (4) ◽  
pp. 1608-1614 ◽  
Author(s):  
Julie Mapes Lindholm ◽  
Michael Dorman ◽  
Bonnie Ellen Taylor ◽  
Maureen T. Hannley

1984 ◽  
Vol 27 (1) ◽  
pp. 112-118 ◽  
Author(s):  
Deborah Johnson ◽  
Patricia Whaley ◽  
M. F. Dorman

To assess whether young hearing-impaired listeners are as sensitive as normal-hearing children to the cues for stop consonant voicing, we presented stimuli from VOT continua to young normal-hearing listeners and to listeners with mild, moderate, severe, and profound hearing impairments. The response measures were the location of the phonetic boundaries, the change in boundaries with changes in place of articulation, and response variability. The listeners with normal hearing sensitivity and those with mild and moderate hearing impairments did not differ in performance on any response measure. The listeners with severe impairments did not show the expected change in VOT boundary with changes in place of articulation. Moreover, stimulus uncertainty (i.e., the number of possible choices in the response set) affected their response variability. One listener with profound impairment was able to process the cues for voicing in a normal fashion under conditions of minimum stimulus uncertainty. We infer from these results that the cochlear damage which underlies mild and moderate hearing impairment does not significantly alter the auditory representation of VOT. However, the cochlear damage underlying severe impairment, possibly interacting with high signal presentation levels, does alter the auditory representation of VOT.
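A phonetic boundary like those measured above is conventionally the 50% crossover of the identification function along the continuum. A logit-linear fit is one simple way to estimate it; this is a generic sketch, not the authors' procedure:

```python
import numpy as np

def vot_boundary(vot_ms, p_voiceless):
    """Estimate the phonetic boundary (50% crossover) of an identification
    function by fitting a line to the logit-transformed proportions and
    solving for the VOT where the fitted probability equals .5."""
    p = np.clip(np.asarray(p_voiceless, float), 1e-3, 1 - 1e-3)
    y = np.log(p / (1 - p))              # logit transform
    slope, intercept = np.polyfit(vot_ms, y, 1)
    return -intercept / slope            # VOT where logit = 0, i.e. p = .5
```

Comparing boundaries fitted per place of articulation would expose the boundary shift that the severely impaired listeners failed to show.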

