noise burst
Recently Published Documents

Total documents: 74 (last five years: 11)
H-index: 20 (last five years: 0)

2021 · Vol 12 (1) · pp. 173
Author(s): Akio Honda, Kei Maeda, Shuichi Sakamoto, Yôiti Suzuki

The deterioration of sound localization accuracy during a listener's head/body rotation is independent of the listener's rotation velocity (Honda et al., 2016). However, whether this deterioration occurs only during physical movement in a real environment remains unclear. In this study, we addressed this question by subjecting physically stationary listeners to visually induced self-motion, i.e., vection. Two conditions, one with a visually induced perception of self-motion (vection) and the other without vection (control), were adopted. Under both conditions, a short noise burst (30 ms) was presented via a loudspeaker in a circular array placed horizontally in front of the listener. The listeners were asked to judge the location of the acoustic stimulus relative to their subjective midline. The results showed that, in terms of detection thresholds based on the subjective midline, sound localization accuracy was lower under the vection condition than under the control condition. This indicates that sound localization can be compromised under visually induced self-motion perception. These findings support the idea that self-motion information is crucial for auditory space perception and can potentially enable the design of dynamic binaural displays requiring fewer computational resources.
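
As an illustration only (not the authors' analysis), the sketch below shows how a detection threshold relative to the subjective midline might be estimated from left/right judgments by fitting a cumulative-Gaussian psychometric function; the azimuths, response proportions, and the 75% threshold convention are all assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: loudspeaker azimuths (deg) and proportion of "right of midline" judgments
azimuth = np.array([-20, -15, -10, -5, 0, 5, 10, 15, 20], dtype=float)
p_right = np.array([0.02, 0.05, 0.15, 0.35, 0.50, 0.70, 0.88, 0.95, 0.99])

# Cumulative-Gaussian psychometric function:
# mu estimates the subjective midline, sigma the localization precision
def psychometric(x, mu, sigma):
    return norm.cdf(x, loc=mu, scale=sigma)

(mu, sigma), _ = curve_fit(psychometric, azimuth, p_right, p0=[0.0, 5.0])

# One common convention: threshold = shift needed to go from 50% to 75% "right" judgments
threshold = sigma * norm.ppf(0.75)
print(f"subjective midline: {mu:.1f} deg, detection threshold: {threshold:.1f} deg")
```

Fitting such a function separately to vection and control trials would then quantify the reported drop in localization accuracy.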


2021 · Vol 12
Author(s): Elena Selezneva, Michael Brosch, Sanchit Rathi, T. Vighneshvel, Nicole Wetzel

Pupil dilation in response to unexpected stimuli has been well documented in humans as well as in non-human primates; however, this phenomenon has not been systematically compared between the species. Such a comparison is also crucial for the role of non-human primates as an animal model for investigating the neural mechanisms underlying the processing of unexpected stimuli and the pupil dilation response they evoke. To assess this qualitatively, we used an auditory oddball paradigm in which we presented subjects with a sequence of the same sounds followed by occasional deviants while we measured the evoked pupil dilation response (PDR). We used deviants (a frequency deviant, a pink noise burst, a monkey vocalization, and a whistle sound) that differed from the standard in spectral composition and in their ability to induce arousal. Most deviants elicited a significant pupil dilation in both species, with decreased peak latency and increased peak amplitude in monkeys compared to humans. A temporal principal component analysis (PCA) revealed two components underlying the PDRs in both species: the early component is likely associated with the parasympathetic nervous system and the late component with the sympathetic nervous system. Taken together, the present study demonstrates a qualitative similarity between PDRs to unexpected auditory stimuli in macaque and human subjects, suggesting that macaques can be a suitable model for investigating the neuronal bases of pupil dilation. However, the quantitative differences in PDRs between species need to be investigated in further comparative studies.
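
As a hedged illustration of the temporal PCA step (not the study's code or data), the sketch below applies a two-component PCA to simulated baseline-corrected pupil traces arranged as trials × time samples; the sampling rate, trace shapes, and noise level are assumptions.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
n_trials, n_samples, fs = 200, 300, 100           # 3 s of pupil data per trial at 100 Hz (assumed)
t = np.arange(n_samples) / fs

# Simulated PDRs: an early and a late component with trial-to-trial amplitude variability
early = np.exp(-(t - 0.6) ** 2 / 0.05)
late = np.exp(-(t - 1.8) ** 2 / 0.3)
weights = rng.normal(1.0, 0.3, size=(n_trials, 2))
pdr = weights[:, [0]] * early + weights[:, [1]] * late + rng.normal(0, 0.1, (n_trials, n_samples))

# Temporal PCA: time samples are the variables, trials the observations
pca = PCA(n_components=2)
scores = pca.fit_transform(pdr)      # per-trial component scores (amplitudes)
loadings = pca.components_           # component time courses (early vs. late)

print("explained variance ratio:", np.round(pca.explained_variance_ratio_, 2))
```

The per-trial scores of the early and late components are the quantities one would then compare across deviant types and species.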


2021
Author(s): Drew Cappotto, HiJee Kang, Kongyan Li, Lucia Melloni, Jan Schnupp, ...

Recent studies have shown that stimulus history can be decoded via the use of broadband sensory impulses to reactivate mnemonic representations. It has also been shown that predictive mechanisms in the auditory system exhibit a tonotopic organization of neural activity similar to that elicited by the perceived stimuli. However, it remains unclear whether mnemonic and predictive information can be decoded from cortical activity simultaneously and from overlapping neural populations. Here, we recorded neural activity using electrocorticography (ECoG) in the auditory cortex of anesthetized rats while they were exposed to repeated stimulus sequences in which events were occasionally replaced with a broadband noise burst or omitted entirely. We show that both stimulus history and predicted stimuli can be decoded from neural responses to the broadband impulse at overlapping latencies but linked to largely independent neural populations. We also demonstrate that predictive representations are learned over the course of stimulation at two distinct time scales, reflected in two dissociable time windows of neural activity. These results establish a valuable tool for investigating the neural mechanisms of passive sequence learning, memory encoding, and prediction within a single paradigm, and provide novel evidence for the learning of predictive representations even under anaesthesia.
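
The decoding approach is only summarized in the abstract; as a rough sketch under assumed data, the example below decodes the identity of the preceding stimulus from simulated multi-channel responses to the broadband impulse using a cross-validated linear classifier (the feature layout, classifier choice, and noise levels are assumptions, not the authors' pipeline).

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n_trials, n_channels, n_stimuli = 400, 32, 4

# Hypothetical features: impulse-evoked response amplitudes (trials x channels),
# labelled by the identity of the stimulus that preceded the noise burst
labels = rng.integers(0, n_stimuli, n_trials)
class_patterns = rng.normal(0, 1, (n_stimuli, n_channels))   # assumed channel pattern per stimulus
features = np.eye(n_stimuli)[labels] @ class_patterns + rng.normal(0, 3, (n_trials, n_channels))

# Cross-validated decoding of stimulus history; chance level is 1/n_stimuli
clf = LinearDiscriminantAnalysis()
acc = cross_val_score(clf, features, labels, cv=5)
print(f"decoding accuracy: {acc.mean():.2f} (chance = {1 / n_stimuli:.2f})")
```

Running the same decoder on responses time-locked to omissions, with labels set to the expected stimulus, is one way the mnemonic and predictive analyses could be kept parallel.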


Author(s): Stacey G. Kane, Kelly M. Dean, Emily Buss

Purpose: Knowing target location can improve adults' speech-in-speech recognition in complex auditory environments, but it is unknown whether young children listen selectively in space. This study evaluated masked word recognition with and without a pretrial cue to location to characterize the influence of listener age and masker type on the benefit of spatial cues.
Method: Participants were children (5–13 years of age) and adults with normal hearing. Testing occurred in a 180° arc of 11 loudspeakers. Targets were spondees produced by a female talker and presented from a randomly selected loudspeaker; that location was either known, based on a pretrial cue, or unknown. Maskers were two sequences comprising spondees or speech-shaped noise bursts, each presented from a random loudspeaker. Speech maskers were produced by one male talker or by three talkers, two male and one female.
Results: Children and adults benefited from the pretrial cue to target location with the three-voice masker, and the magnitude of benefit increased with increasing child age. There was no benefit of location cues in the one-voice or noise-burst maskers. Incorrect responses in the three-voice masker tended to correspond to masker words produced by the female talker, and in the location-known condition, those masker intrusions were more likely near the cued loudspeaker for both age groups.
Conclusions: Increasing benefit of the location cue with increasing child age in the three-voice masker suggests maturation of spatially selective attention, but error patterns do not support this idea. Differences in performance in the location-unknown condition could play a role in the differential benefit of the location cue.
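
Purely as an illustration of how the cue benefit might be quantified per child (hypothetical scores, not the study's data or analysis), the sketch below computes the known-minus-unknown difference in percent correct and its correlation with age.

```python
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(2)

# Hypothetical per-child scores: age (years) and percent-correct word recognition in the
# three-voice masker with the target location known (pretrial cue) versus unknown
age = rng.uniform(5, 13, 40)
pc_unknown = 55 + rng.normal(0, 5, 40)
pc_known = pc_unknown + 0.8 * (age - 5) + rng.normal(0, 3, 40)

# Benefit of the location cue and its association with child age
benefit = pc_known - pc_unknown
r, p = pearsonr(age, benefit)
print(f"mean cue benefit: {benefit.mean():.1f} points; r(age) = {r:.2f}, p = {p:.3f}")
```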


2021 · Vol 15
Author(s): Qianyi Cao, Noah Parks, Joshua H. Goldwyn

Illusions give intriguing insights into perceptual and neural dynamics. In the auditory continuity illusion, two brief tones separated by a silent gap may be heard as one continuous tone if a noise burst with appropriate characteristics fills the gap. This illusion probes the conditions under which listeners link related sounds across time and maintain perceptual continuity in the face of sudden changes in sound mixtures. Conceptual explanations of this illusion have been proposed, but its neural basis is still being investigated. In this work we provide a dynamical systems framework, grounded in principles of neural dynamics, to explain the continuity illusion. We construct an idealized firing rate model of a neural population and analyze the conditions under which firing rate responses persist during the interruption between the two tones. First, we show that sustained inputs and hysteresis dynamics (a mismatch between tone levels needed to activate and inactivate the population) can produce continuous responses. Second, we show that transient inputs and bistable dynamics (coexistence of two stable firing rate levels) can also produce continuous responses. Finally, we combine these input types to obtain neural dynamics consistent with two requirements for the continuity illusion as articulated in a well-known theory of auditory scene analysis: responses persist through the noise-filled gap if the noise provides sufficient evidence that the tone continues and if there is no evidence of discontinuities between the tones and noise. By grounding these notions in a quantitative model that incorporates elements of neural circuits (specifically, recurrent excitation and mutual inhibition), we identify plausible mechanisms for the continuity illusion. Our findings can help guide future studies of neural correlates of this illusion and inform the development of more biophysically based models of the auditory continuity illusion.
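
To make the hysteresis idea concrete, here is a minimal one-population firing-rate sketch with recurrent excitation, not the authors' model equations; all parameter values and the input profile (tone, noise-filled gap, tone) are illustrative assumptions. Because the recurrent weight makes the unit bistable at the weaker gap input, activity switched on by the first tone persists through the gap but cannot be initiated by the gap input alone.

```python
import numpy as np
import matplotlib.pyplot as plt

# One-population firing-rate model with recurrent excitation:
#   tau * du/dt = -u + f(w * u + I(t)),   f = sigmoid
def f(x, gain=6.0, theta=1.5):
    return 1.0 / (1.0 + np.exp(-gain * (x - theta)))

tau, w, dt = 10.0, 2.0, 0.1                 # time constant (ms), recurrent weight, Euler step (ms)
t = np.arange(0.0, 600.0, dt)

# Illustrative input: tone, noise-filled gap (weaker drive), tone
I = np.zeros_like(t)
I[(t > 50) & (t < 200)] = 1.2               # first tone: strong enough to switch the unit on
I[(t >= 200) & (t < 350)] = 0.5             # gap noise: sustains the high state but cannot start it
I[(t >= 350) & (t < 500)] = 1.2             # second tone

u = np.zeros_like(t)
for i in range(1, len(t)):
    u[i] = u[i - 1] + dt / tau * (-u[i - 1] + f(w * u[i - 1] + I[i - 1]))

plt.plot(t, u, label="firing rate")
plt.plot(t, I, "--", label="input")
plt.xlabel("time (ms)")
plt.legend()
plt.show()
```

Rerunning the simulation without the first tone leaves the unit in the low state during the gap, which is the signature of hysteresis the abstract describes.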


Author(s): Wolfgang Ellermeier, Florian Kattner, Anika Raum

In their fundamental paper, Luce, Steingrimsson, and Narens (2010, Psychological Review, 117, 1247-1258) proposed that ratio productions constituting a generalization of cross-modality matching may be represented on a single scale of subjective intensity if they meet “cross-dimensional commutativity.” The present experiment is the first to test this axiom by making truly cross-modal adjustments of the type: “Make the sound three times as loud as the light appears bright!” Twenty participants repeatedly adjusted the level of a burst of noise to produce the desired sensation ratio (e.g., three times as intense) compared to the brightness emanating from a grayscale square, and vice versa. Cross-modal commutativity was tested by comparing a set of successive ×2×3 productions with a set of ×3×2 productions. When this property was evaluated individually for each of the 20 participants and for the two possible directions, i.e., starting out with a noise burst or a luminous patch, only seven of the 40 tests indicated a statistically significant violation of cross-modal commutativity. Cross-modal monotonicity, i.e., checking whether ×1, ×2, and ×3 adjustments are strictly ordered, was evaluated on the same data set and found to hold. Multiplicativity, by contrast, i.e., comparing the outcome of a ×1×6 adjustment with ×2×3 sequences irrespective of order, was violated in 17 of 40 tests, or at least once for all but six participants. This suggests that both loudness and brightness sensations may be measured on a common ratio scale of subjective intensity, but cautions against interpreting the numbers involved at face value.
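
The abstract does not state which statistical test was used; as an assumed illustration, the sketch below checks cross-modal commutativity for one participant and one direction by comparing the final levels reached after ×2-then-×3 versus ×3-then-×2 productions with a Mann-Whitney test (the levels and the choice of test are hypothetical).

```python
import numpy as np
from scipy.stats import mannwhitneyu

rng = np.random.default_rng(3)

# Hypothetical final sound levels (dB) reached by one participant after successive
# cross-modal ratio productions in the two orders, starting from the same standard
levels_x2_then_x3 = 60.0 + rng.normal(0, 2.5, 10)
levels_x3_then_x2 = 60.5 + rng.normal(0, 2.5, 10)

# Commutativity holds if the two orders end at statistically indistinguishable levels
stat, p = mannwhitneyu(levels_x2_then_x3, levels_x3_then_x2, alternative="two-sided")
diff = np.median(levels_x2_then_x3) - np.median(levels_x3_then_x2)
print(f"median difference: {diff:.2f} dB, p = {p:.3f}")
```

The multiplicativity check would compare ×1×6 endpoints against the pooled ×2×3 and ×3×2 endpoints in the same way.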


2021
Author(s): Qianyi Cao, Noah Parks, Joshua H. Goldwyn

Illusions give intriguing insights into perceptual and neural dynamics. In the auditory continuity illusion, two brief tones separated by a silent gap may be heard as one continuous tone if a noise burst with appropriate characteristics fills the gap. This illusion probes the conditions under which listeners link related sounds across time and maintain perceptual continuity in the face of sudden changes in sound mixtures. Conceptual explanations of this illusion have been proposed, but its neural basis is still being investigated. In this work we provide a dynamical systems framework, grounded in principles of neural dynamics, to explain the continuity illusion. We construct an idealized firing rate model of a neural population and analyze the conditions under which firing rate responses persist during the interruption between the two tones. First, we show that sustained inputs and hysteresis dynamics (a mismatch between tone levels needed to activate and inactivate the population) can produce continuous responses. Second, we show that transient inputs and bistable dynamics (coexistence of two stable firing rate levels) can also produce continuous responses. Finally, we combine these input types to obtain neural dynamics consistent with two requirements for the continuity illusion as articulated in a well-known theory of auditory scene analysis: sustained responses occur if the noise provides sufficient evidence that the tone continues and if there is no evidence of discontinuities between the tones and noise. By grounding these notions in a quantitative model that incorporates elements of neural circuits (specifically, recurrent excitation and mutual inhibition), we identify plausible mechanisms for the continuity illusion. Our findings can help guide future studies of neural correlates of this illusion and inform the development of more biophysically based models of the auditory continuity illusion.


2021 · Vol 35 (1) · pp. 35-42
Author(s): José Luis Marcos, Azahara Marcos

The aim of this study was to determine whether contingency awareness between the conditioned stimulus (CS) and the unconditioned stimulus (US) is necessary for concurrent electrodermal and eyeblink conditioning to masked stimuli. An angry woman’s face (CS+) and a fearful face (CS−) were presented for 23 milliseconds (ms) and followed by a neutral face as a mask. A 98 dB noise burst (US) was administered 477 ms after CS+ offset to elicit both electrodermal and eyeblink responses. For the unmasking conditioning, a 176 ms blank screen was inserted between the CS and the mask. Contingency awareness was assessed using trial-by-trial ratings of US-expectancy in a post-conditioning phase. The results showed acquisition of differential electrodermal and eyeblink conditioning in aware, but not in unaware, participants. Acquisition of differential eyeblink conditioning required more trials than electrodermal conditioning. These results provide strong evidence for a causal role of contingency awareness in differential eyeblink and electrodermal conditioning.


Author(s): Thirsa Huisman, Torsten Dau, Tobias Piechowiak, Ewen MacDonald

Despite more than 60 years of research, it has remained uncertain if and how realism affects the ventriloquist effect. Here, a sound localization experiment was run using spatially disparate audio-visual stimuli. The visual stimuli were presented using virtual reality, allowing for easy manipulation of the degree of realism of the stimuli. Starting from stimuli commonly used in ventriloquist experiments, i.e., a light flash and noise burst, a new factor was added or changed in each condition to investigate the effect of movement and realism without confounding the effects of an increased temporal correlation of the audio-visual stimuli. First, a distractor task was introduced to ensure that participants fixated their eye gaze during the experiment. Next, movement was added to the visual stimuli while maintaining a similar temporal correlation between the stimuli. Finally, by changing the stimuli from the flash and noise stimuli to the visuals of a bouncing ball that made a matching impact sound, the effect of realism was assessed. No evidence for an effect of realism and movement of the stimuli was found, suggesting that, in simple scenarios, the ventriloquist effect might not be affected by stimulus realism.
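
For orientation only, the sketch below computes the standard measure of the ventriloquist effect, the shift of the localization response toward the visual stimulus expressed as a fraction of the audio-visual disparity, on hypothetical trial data (the disparities, capture strength, and noise are assumptions).

```python
import numpy as np

rng = np.random.default_rng(4)

# Hypothetical trials: auditory azimuth, visual azimuth, and the listener's pointing
# response (degrees), with responses pulled partway toward the visual stimulus
audio_az = rng.uniform(-30, 30, 200)
visual_az = audio_az + rng.choice([-10.0, 0.0, 10.0], 200)   # assumed spatial disparities
response = audio_az + 0.45 * (visual_az - audio_az) + rng.normal(0, 3, 200)

# Ventriloquist bias: response shift toward the visual stimulus as a fraction of the
# disparity (0 = no visual capture, 1 = complete capture)
disparity = visual_az - audio_az
valid = disparity != 0
bias = np.mean((response[valid] - audio_az[valid]) / disparity[valid])
print(f"ventriloquist bias: {bias:.2f}")
```

Comparing this bias across the flash-and-noise and bouncing-ball conditions is the kind of contrast the study reports as showing no effect of realism.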

