Detection of a tone burst in continuous- and gated-noise maskers; effects of signal frequency, duration, and masker level

1977 ◽  
Vol 61 (5) ◽  
pp. 1298-1300 ◽  
Author(s):  
Craig C. Wier ◽  
David M. Green ◽  
Ervin R. Hafter ◽  
S. Burkhardt

1997 ◽  
Vol 101 (3) ◽  
pp. 1600-1610 ◽  
Author(s):  
Sid P. Bacon ◽  
Jungmee Lee ◽  
Daniel N. Peterson ◽  
Dawne Rainey

1993 ◽  
Vol 36 (2) ◽  
pp. 410-423 ◽  
Author(s):  
Joseph W. Hall ◽  
John H. Grose ◽  
Brian C. J. Moore

Experiments 1 and 2 investigated the effect of frequency selectivity on comodulation masking release (CMR) in normal-hearing subjects, examining conditions where frequency selectivity was relatively good (low masker level at both the low [500-Hz] and high [2500-Hz] signal frequency, and high masker level at the low signal frequency) and where frequency selectivity was somewhat degraded (high masker level and high signal frequency). The first experiment investigated CMR in conditions where a narrow modulated noise band was centered on the signal frequency and a wider comodulated noise band was located below the band centered on the signal frequency. Signal frequencies were 500 and 2000 Hz. The masker level and the frequency separation between the on-signal and comodulated flanking band were varied. In addition to conditions where the flanking band and on-signal band were presented at the same spectrum level, conditions were included where the spectrum level of the flanking band was 10 dB higher than that of the on-signal band, in order to accentuate effects of reduced frequency selectivity. Results indicated that CMR was reduced in the 2000-Hz region when the masker level was high, when the frequency separation between the on-signal and flanking band was small, and when a 10-dB level disparity existed between the on-signal and flanking band. In the second experiment, CMR was investigated for narrow comodulated noise bands presented either without any additional sound or in the presence of a random noise background. CMR increased slightly as the masker level increased, except at 2500 Hz when the noise background was present. The decrease in CMR at 2500 Hz with the high masker level and with a noise background present could be explained in terms of reduced frequency selectivity. In a third experiment, we compared performance for equal-absolute-bandwidth maskers at a low (500-Hz) and a high (2000-Hz) stimulus frequency. Results here suggested that detection performance in modulated noise may be poorer due to a reduction in the number of quasi-independent auditory filters contributing temporal envelope information. The effects found in the present study using normal-hearing listeners under conditions of degraded frequency selectivity may be useful in understanding part of the reduction of CMR that occurs in cochlear-impaired listeners having reduced frequency selectivity.
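
The comodulation manipulation at the heart of these experiments can be illustrated with a short sketch: imposing a single low-frequency noise on two carriers yields an on-signal band and a flanking band whose amplitude fluctuations are identical, and CMR is then the threshold improvement for the comodulated masker relative to a masker with independent (random) envelopes. The Python below is illustrative only, not the authors' stimulus-generation code; the sample rate, bandwidth, band frequencies, and the function names (`comodulated_bands`, `cmr_db`) are assumptions.

```python
import numpy as np

def lowpass_noise(n_samples, fs, cutoff_hz, rng):
    # Gaussian noise low-pass filtered in the frequency domain.
    spec = np.fft.rfft(rng.standard_normal(n_samples))
    freqs = np.fft.rfftfreq(n_samples, d=1.0 / fs)
    spec[freqs > cutoff_hz] = 0.0
    return np.fft.irfft(spec, n_samples)

def comodulated_bands(fs=20000, dur=0.4, f_signal=2000.0, f_flank=1600.0,
                      bandwidth=100.0, flank_gain_db=0.0, seed=1):
    """On-signal and flanking noise bands sharing one temporal envelope.

    Multiplying a single low-pass noise onto two sinusoidal carriers gives
    two narrow bands (width ~ 2 * cutoff) whose envelopes are both |noise|,
    i.e., comodulated maskers.  flank_gain_db raises the flanking band
    (e.g., +10 dB) relative to the on-signal band.
    """
    n = int(fs * dur)
    t = np.arange(n) / fs
    rng = np.random.default_rng(seed)
    noise = lowpass_noise(n, fs, bandwidth / 2.0, rng)   # common modulator
    on_band = noise * np.cos(2 * np.pi * f_signal * t)
    flank = noise * np.cos(2 * np.pi * f_flank * t) * 10 ** (flank_gain_db / 20)
    return on_band, flank

def cmr_db(threshold_random_db, threshold_comodulated_db):
    # CMR: signal-threshold improvement when masker envelopes are comodulated.
    return threshold_random_db - threshold_comodulated_db
```

Generating the flanking band from an independent noise instead of the shared one gives the random-envelope reference condition, and the difference returned by `cmr_db` is the quantity the abstract reports shrinking at high masker levels and small band separations.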


1976 ◽  
Vol 39 (1) ◽  
pp. 162-178 ◽  
Author(s):  
R. Britt ◽  
A. Starr

Unitary discharge patterns (peristimulus time histograms, or PSTH) and synaptic events were studied with intracellular recording techniques in 164 cat cochlear nucleus cells in response to steady-frequency tone bursts 250 ms in duration. Four response types were defined on the basis of the shape of the discharge patterns to tones at the characteristic or best frequency. Primarylike units resemble eighth nerve fibers and have a maximum discharge at tone onset, followed by a smooth decline to a steady level of activity. Buildup units have a transient response at tone onset, followed by a period of little or no activity before gradually increasing their discharge rate for the remainder of the tone burst. Onset units have an initial burst of spikes at the onset, with little or no activity for the remainder of the tone burst. Pause units have a long latency (10-30 ms) between tone onset and the appearance of low levels of unit activity, which then gradually increases in rate for the remainder of the tone burst. Changes in signal frequency or intensity within the excitatory response area did not modify the response patterns of primarylike and onset units, but could evoke primarylike patterns in buildup and pause units. Inhibition, manifested by suppression of spontaneous activity and membrane hyperpolarization, was of three kinds: 1) in response to signals at the edges of the excitatory response area (i.e., the inhibitory surround), detected in onset, buildup, and pause units but not in primarylike units; 2) occurring at the offset of tones in the excitatory response area, detected in all four types of cochlear nucleus cells; 3) occurring during excitatory tone bursts in onset and buildup units, associated with the periods of suppressed unit activity. Membrane hyperpolarization did not accompany the delay in unit activity after tone onset in pause units. Inhibitory events in cochlear nucleus cells provide mechanisms for producing diversity in the temporal pattern of discharges to acoustic signals, which may underlie the encoding of complex features of sounds.
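
For readers unfamiliar with the measure, a peristimulus time histogram is simply spike counts binned relative to stimulus onset and averaged over presentations; the four response types above are distinguished by the shape of this histogram over the 250-ms tone burst. A minimal Python sketch follows; the bin width and analysis window are arbitrary choices for illustration, not values taken from the study.

```python
import numpy as np

def peristimulus_time_histogram(spike_times_per_trial, t_start=-0.05,
                                t_stop=0.30, bin_width=0.005):
    """PSTH from spike times given in seconds relative to tone-burst onset.

    spike_times_per_trial: sequence of 1-D arrays, one per stimulus
    presentation.  Returns bin centers (s) and mean firing rate (spikes/s);
    the shape of this rate profile over the tone burst is what separates
    primarylike, buildup, onset, and pause patterns.
    """
    edges = np.arange(t_start, t_stop + bin_width, bin_width)
    counts = np.zeros(len(edges) - 1)
    for spikes in spike_times_per_trial:
        hist, _ = np.histogram(spikes, bins=edges)
        counts += hist
    rate = counts / (len(spike_times_per_trial) * bin_width)
    centers = edges[:-1] + bin_width / 2
    return centers, rate
```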


1982 ◽  
Vol 25 (3) ◽  
pp. 456-461 ◽  
Author(s):  
Michael P. Gorga ◽  
Paul J. Abbas

A number of methods are presented for evaluating the effects of high-pass noise on the whole-nerve action potential (AP). These methods include measurements of AP thresholds, amplitude-versus-level functions, decrement in AP amplitude-versus-masker level functions, and AP tuning curves. Examinations of threshold shifts as a function of tone-burst frequency and AP amplitude-versus-level with and without the presentation of high-pass noise indicate that basal portions of the cochlear partition can be masked effectively. Decrement in AP amplitude-versus-masker level functions and subsequently constructed AP tuning curves were used to verify that the presentation of high-pass noise did not alter the frequency response of that region of the basilar membrane responding to a 4000-Hz tone-burst probe. As a result, we conclude that high-pass noise may be used to mask the response from remote regions of the cochlea without altering response characteristics from lower frequency regions.
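
The tuning-curve construction described here lends itself to a compact illustration: for each masker frequency, interpolate the amplitude-versus-masker-level function to find the level that reduces the AP to the 4000-Hz probe by a fixed criterion, then plot that level against masker frequency. The helper below is a hypothetical Python sketch; the 50% amplitude criterion and the example values are assumptions, not the criterion or data used by the authors.

```python
import numpy as np

def masker_level_for_criterion(masker_levels_db, ap_amplitude_uv,
                               unmasked_amplitude_uv, criterion_fraction=0.5):
    """Masker level (dB) that reduces the whole-nerve AP to a criterion size.

    Interpolates one amplitude-versus-masker-level function; repeating this
    across masker frequencies, with the probe fixed, traces an AP tuning
    curve.  AP amplitude falls as masker level rises, so the arrays are
    reordered to satisfy np.interp's requirement of increasing x values.
    """
    levels = np.asarray(masker_levels_db, dtype=float)
    amps = np.asarray(ap_amplitude_uv, dtype=float)
    order = np.argsort(amps)                 # ascending amplitude
    target = criterion_fraction * unmasked_amplitude_uv
    return float(np.interp(target, amps[order], levels[order]))

# Example with made-up values: AP amplitude (microvolts) at five masker levels.
level_at_criterion = masker_level_for_criterion(
    masker_levels_db=[40, 50, 60, 70, 80],
    ap_amplitude_uv=[10.0, 9.0, 6.5, 3.0, 1.0],
    unmasked_amplitude_uv=10.0)              # -> roughly 64 dB
```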


1969 ◽  
Vol 12 (1) ◽  
pp. 199-209 ◽  
Author(s):  
David A. Nelson ◽  
Frank M. Lassman ◽  
Richard L. Hoel

Averaged auditory evoked responses to 1000-Hz 20-msec tone bursts were obtained from normal-hearing adults under two different intersignal interval schedules: (1) a fixed-interval schedule with 2-sec intersignal intervals, and (2) a variable-interval schedule of intersignal intervals ranging randomly from 1.0 sec to 4.5 sec with a mean of 2 sec. Peak-to-peak amplitudes (N1-P2) as well as latencies of components P1, N1, P2, and N2 were compared under the two different conditions of intersignal interval. No consistent or significant differences between variable- and fixed-interval schedules were found in the averaged responses to signals of either 20 dB SL or 50 dB SL. Neither were there significant schedule differences when 35 or 70 epochs were averaged per response. There were, however, significant effects due to signal amplitude and to the number of epochs averaged per response. Response amplitude increased and response latency decreased with sensation level of the tone burst.
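
The two quantities compared in this study, the averaged epoch and the N1-P2 peak-to-peak amplitude, are straightforward to express in code. Below is an illustrative Python sketch; the epoch window, baseline, and N1/P2 latency windows are typical adult values assumed for the example, not parameters reported by the authors.

```python
import numpy as np

def average_evoked_response(eeg, fs, onset_samples, pre=0.1, post=0.5):
    """Average EEG epochs time-locked to tone-burst onsets.

    eeg: 1-D recording; onset_samples: sample indices of tone-burst onsets.
    Whether the onsets follow a fixed- or variable-interval schedule does
    not change the computation.  Returns time (s, 0 = onset) and the mean.
    """
    n_pre, n_post = int(pre * fs), int(post * fs)
    epochs = [eeg[i - n_pre:i + n_post] for i in onset_samples
              if i - n_pre >= 0 and i + n_post <= len(eeg)]
    t = np.arange(-n_pre, n_post) / fs
    return t, np.mean(epochs, axis=0)

def n1_p2_amplitude(t, avg, n1_window=(0.08, 0.15), p2_window=(0.15, 0.25)):
    """Peak-to-peak N1-P2 amplitude of an averaged response.

    N1 is taken as the minimum within its latency window and P2 as the
    maximum within its window; the windows here are illustrative, not the
    study's definitions.
    """
    n1 = avg[(t >= n1_window[0]) & (t <= n1_window[1])].min()
    p2 = avg[(t >= p2_window[0]) & (t <= p2_window[1])].max()
    return p2 - n1
```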

