Effects of spatial configuration on search of visual displays.

1980 ◽ Author(s): Lauritz Braennstroem
2019 ◽ Vol 62 (3) ◽ pp. 745-757 ◽ Author(s): Jessica M. Wess, Joshua G. W. Bernstein

Purpose: For listeners with single-sided deafness, a cochlear implant (CI) can improve speech understanding by giving the listener access to the ear with the better target-to-masker ratio (TMR; head shadow) or by providing interaural difference cues to facilitate the perceptual separation of concurrent talkers (squelch). CI simulations presented to listeners with normal hearing examined how these benefits could be affected by interaural differences in loudness growth in a speech-on-speech masking task.

Method: Experiment 1 examined a target–masker spatial configuration where the vocoded ear had a poorer TMR than the nonvocoded ear. Experiment 2 examined the reverse configuration. Generic head-related transfer functions simulated free-field listening. Compression or expansion was applied independently to each vocoder channel (power-law exponents: 0.25, 0.5, 1, 1.5, or 2).

Results: Compression reduced the benefit provided by the vocoder ear in both experiments. There was some evidence that expansion increased squelch in Experiment 1 but reduced the benefit in Experiment 2, where the vocoder ear provided a combination of head-shadow and squelch benefits.

Conclusions: The effects of compression and expansion are interpreted in terms of envelope distortion and changes in the vocoded-ear TMR (for head shadow) or changes in perceived target–masker spatial separation (for squelch). The compression parameter is a candidate for clinical optimization to improve single-sided deafness CI outcomes.
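The power-law manipulation described in the Method can be sketched concisely. The snippet below is a minimal illustration, not the authors' implementation: it assumes each vocoder channel's temporal envelope is peak-normalized, raised to the stated exponent (compressive below 1, expansive above 1), and rescaled to its original peak. The function name and the placeholder envelopes are assumptions for illustration only.

```python
import numpy as np

def apply_power_law(envelope: np.ndarray, exponent: float) -> np.ndarray:
    """Compress (exponent < 1) or expand (exponent > 1) a channel envelope.

    The envelope is normalized to its peak, raised to the given power, and
    rescaled, so only the loudness-growth shape changes, not the peak level.
    """
    peak = np.max(envelope)
    if peak <= 0:
        return envelope
    return peak * (envelope / peak) ** exponent

# Example: apply a compressive exponent of 0.5 to each of 8 placeholder channels
rng = np.random.default_rng(0)
envelopes = np.abs(rng.standard_normal((8, 16000)))
processed = np.stack([apply_power_law(env, 0.5) for env in envelopes])
```

Applying the same operation with exponents of 0.25, 1.5, or 2 would reproduce the other conditions listed in the Method, again under the assumptions stated above.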


1971 ◽ Vol 36 (3) ◽ pp. 397-409 ◽ Author(s): Rachel E. Stark

Real-time amplitude contour and spectral displays were used in teaching speech production skills to a profoundly deaf, nonspeaking boy. This child had a visual attention problem, a behavior problem, and a poor academic record. In individual instruction, he was first taught to produce features of speech, for example, friction, nasal, and stop, which are present in vocalizations of 6- to 9-month-old infants, and then to combine these features in syllables and words. He made progress in speech, although sign language and finger spelling were taught at the same time. Speech production skills were retained after instruction was terminated. The results suggest that deaf children are able to extract information about the features of speech from visual displays, and that a developmental sequence should be followed as far as possible in teaching speech production skills to them.
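For readers unfamiliar with amplitude contour displays, one conventional way to derive such a contour is a frame-by-frame RMS track of the speech signal. The sketch below is an illustrative assumption rather than the display system used in the study; the frame and hop sizes are arbitrary.

```python
import numpy as np

def amplitude_contour(signal: np.ndarray, frame_len: int = 512, hop: int = 256) -> np.ndarray:
    """Return the frame-by-frame RMS amplitude of a mono signal."""
    n_frames = 1 + max(0, (len(signal) - frame_len) // hop)
    contour = np.empty(n_frames)
    for i in range(n_frames):
        frame = signal[i * hop : i * hop + frame_len]
        contour[i] = np.sqrt(np.mean(frame ** 2))
    return contour
```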


1961 ◽ Author(s): Milton H. Hodge, Morris J. Crawford, Mary L. Piercy
