Sound Localization
Recently Published Documents


TOTAL DOCUMENTS: 1427 (FIVE YEARS: 158)

H-INDEX: 72 (FIVE YEARS: 5)

i-Perception, 2022, Vol 13 (1), pp. 204166952110706
Author(s): Akio Honda, Sayaka Tsunokake, Yôiti Suzuki, Shuichi Sakamoto

This paper reports on the deterioration of sound-localization accuracy during listeners' head and body movements. We investigated sound-localization accuracy during passive body rotation at speeds ranging from 0.625 to 5 °/s. Participants were asked to judge the position of a 30-ms noise burst relative to their subjective straight-ahead reference. The results indicated that sound-localization resolution degraded during passive rotation irrespective of rotation speed, even at the slowest speed of 0.625 °/s.
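
The left/right judgment paradigm described here is commonly analyzed by fitting a psychometric function to the responses. Below is a minimal Python sketch of that analysis; all response proportions are hypothetical, invented purely to illustrate how the fit yields a subjective straight ahead (mu) and a localization resolution (sigma):

```python
import numpy as np
from scipy.optimize import curve_fit
from scipy.stats import norm

# Hypothetical data: source azimuths (deg, relative to the listener's
# subjective straight ahead) and the proportion of "right" responses.
azimuths = np.array([-12.0, -8.0, -4.0, 0.0, 4.0, 8.0, 12.0])
p_right_static = np.array([0.02, 0.08, 0.27, 0.50, 0.73, 0.92, 0.98])
p_right_rotating = np.array([0.10, 0.21, 0.36, 0.50, 0.64, 0.79, 0.90])

def psychometric(x, mu, sigma):
    """Cumulative Gaussian: P("right") as a function of azimuth."""
    return norm.cdf(x, loc=mu, scale=sigma)

# The fitted mu is the subjective straight ahead; sigma is the
# localization resolution (a larger sigma means poorer resolution).
for label, p in [("static", p_right_static), ("rotating", p_right_rotating)]:
    (mu, sigma), _ = curve_fit(psychometric, azimuths, p, p0=(0.0, 5.0))
    print(f"{label:9s} mu = {mu:+5.1f} deg, sigma = {sigma:4.1f} deg")
```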


2021, Vol 12 (1), pp. 173
Author(s): Akio Honda, Kei Maeda, Shuichi Sakamoto, Yôiti Suzuki

The deterioration of sound-localization accuracy during a listener's head/body rotation is independent of the rotation velocity (Honda et al., 2016). However, whether this deterioration occurs only during physical movement in a real environment remains unclear. In this study, we addressed this question by subjecting physically stationary listeners to visually induced self-motion, i.e., vection. Two conditions were adopted: one with a visually induced perception of self-motion (vection) and one without it (control). Under both conditions, a short noise burst (30 ms) was presented via a loudspeaker in a circular array placed horizontally in front of the listener. The listeners were asked to judge the position of the acoustic stimulus relative to their subjective midline. The results showed that, in terms of detection thresholds based on the subjective midline, sound-localization accuracy was lower under the vection condition than under the control condition. This indicates that sound localization can be compromised by visually induced self-motion perception. These findings support the idea that self-motion information is crucial for auditory space perception, and they could potentially inform the design of dynamic binaural displays that require fewer computational resources.
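
The closing remark about dynamic binaural displays refers to renderers that recompute binaural cues as the listener moves. A minimal sketch of one such cue computation, Woodworth's spherical-head approximation of the ITD; the head radius and the azimuth grid are illustrative assumptions, not values from the study:

```python
import numpy as np

HEAD_RADIUS_M = 0.0875   # assumed average head radius
SPEED_OF_SOUND = 343.0   # m/s

def woodworth_itd(azimuth_deg):
    """ITD in seconds for a source at the given azimuth (-90 to +90 deg),
    via Woodworth's spherical-head formula: ITD = (a/c) * (theta + sin theta)."""
    theta = np.deg2rad(azimuth_deg)
    return (HEAD_RADIUS_M / SPEED_OF_SOUND) * (theta + np.sin(theta))

# If localization resolution drops during self-motion, a renderer could
# update cues on a coarser azimuth grid while the listener is moving.
for az in (0, 15, 30, 60, 90):
    print(f"azimuth {az:3d} deg -> ITD = {woodworth_itd(az) * 1e6:6.1f} us")
```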


2021
Author(s): Guus C. van Bentum, John Van Opstal, Marc Mathijs van Wanrooij

Sound localization and identification are challenging in acoustically rich environments, and the relation between these two processes is still poorly understood. As natural sound sources rarely occur exactly simultaneously, we wondered whether the auditory system can identify ('what') and localize ('where') two spatially separated sounds with synchronous onsets. While listeners typically report hearing a single source at an average location, one study found that both sounds may be accurately localized if listeners are explicitly told that two sources exist. Here we tested whether simultaneous source identification (one vs. two) and localization are possible, by letting listeners choose to make either one or two head-orienting saccades to the perceived location(s). The results show that listeners could identify two sounds only when they were presented on different sides of the head, and that identification accuracy increased with their spatial separation. Notably, listeners were unable to accurately localize either sound, irrespective of whether one or two sounds were identified. Instead, the first (or only) response always landed near the average location, while second responses were unrelated to the targets. We conclude that localization of synchronous sounds in the absence of prior information is impossible. We discuss that the putative cortical 'what' pathway may not transmit relevant information to the 'where' pathway. We examine how a broadband interaural correlation cue could help to correctly identify the presence of two sounds without enabling their localization. We propose that the persistent averaging behavior reveals that the 'where' system intrinsically assumes that synchronous sounds originate from a single source.
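
The broadband interaural-correlation cue discussed above can be illustrated numerically: a single source yields a near-unity peak in the normalized interaural cross-correlation, whereas two independent sources on opposite sides of the head lower that peak. A minimal sketch with synthetic noise and sample-level ITDs; all parameters are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4410  # 0.1 s of broadband noise at 44.1 kHz

def ear_signals(sources):
    """Sum sources at the two ears; each source is (signal, itd_samples),
    where a positive ITD delays the left ear (source on the right)."""
    left, right = np.zeros(n), np.zeros(n)
    for sig, itd in sources:
        left += np.roll(sig, itd)
        right += sig
    return left, right

def peak_interaural_correlation(left, right):
    """Peak of the normalized interaural cross-correlation."""
    xcorr = np.correlate(left, right, mode="full")
    return xcorr.max() / (np.std(left) * np.std(right) * n)

noise_a = rng.standard_normal(n)
noise_b = rng.standard_normal(n)
cases = {"one source": [(noise_a, 20)],
         "two sources": [(noise_a, 20), (noise_b, -20)]}
for label, sources in cases.items():
    # One source -> peak near 1.0; two independent sources -> near 0.5.
    print(label, round(peak_interaural_correlation(*ear_signals(sources)), 2))
```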


2021, Vol 15
Author(s): Xuexin Tian, Yimeng Liu, Zengzhi Guo, Jieqing Cai, Jie Tang, ...

Sound localization is an essential part of auditory processing. However, how the cortex represents the direction of sound sources presented in the sound field, as measured with functional near-infrared spectroscopy (fNIRS), is currently unknown. Therefore, in this study, we used fNIRS to investigate the cerebral representation of different sound-source directions. Twenty-five normal-hearing subjects (aged 26 ± 2.7 years; 11 male, 14 female) were included and actively took part in a block-design task. The test setup for sound localization was composed of a seven-speaker array spanning a horizontal arc of 180° in front of the participants. Pink-noise bursts at two intensity levels (48 dB/58 dB) were randomly presented via five loudspeakers (–90°/–30°/0°/+30°/+90°). Sound-localization task performance was collected, and simultaneous signals from auditory-processing cortical fields were recorded and analyzed using a support vector machine (SVM). The results showed classification accuracies of 73.60, 75.60, and 77.40% on average for –90°/0°, 0°/+90°, and –90°/+90° at the high intensity, and 70.60, 73.60, and 78.60% at the low intensity. An increase in oxyhemoglobin (oxy-Hb) was observed in the bilateral non-primary auditory cortex (AC) and the dorsolateral prefrontal cortex (dlPFC). In conclusion, the oxy-Hb response showed different neural activity patterns between lateral and frontal sources in the AC and dlPFC. Our results may serve as a basis for further research on the use of fNIRS in spatial auditory studies.
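
The pairwise direction classification reported above is a standard supervised-decoding analysis. A minimal scikit-learn sketch of the same idea; the feature matrix here is simulated, and the trial count, channel count, and effect size are assumptions, not the study's data:

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(1)

# Hypothetical data: mean oxy-Hb amplitudes per channel for trials from
# two source directions (e.g. -90 deg vs +90 deg); shapes are illustrative.
n_trials, n_channels = 50, 24
X_minus90 = rng.normal(0.0, 1.0, (n_trials, n_channels))
X_plus90 = rng.normal(0.4, 1.0, (n_trials, n_channels))  # assumed contrast
X = np.vstack([X_minus90, X_plus90])
y = np.array([0] * n_trials + [1] * n_trials)

# Linear SVM with feature standardization, scored by cross-validation,
# mirroring the pairwise direction classification reported above.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
scores = cross_val_score(clf, X, y, cv=5)
print(f"-90 vs +90 deg decoding accuracy: {scores.mean():.1%}")
```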


2021
Author(s): Jumpei Matsumoto, Kouta Kanno, Masahiro Kato, Hiroshi Nishimaru, Tsuyoshi Setogawa, ...

Ultrasonic vocalizations in mice have recently been widely investigated as a social behavior; however, existing sound-localization systems are difficult to use in home cages, where more undisturbed expressions of behavior can be observed. We introduce a novel system, named USVCAM, that uses a phased microphone array, and we demonstrate novel vocal interactions under a resident-intruder paradigm. The extended applicability and usability of USVCAM may facilitate investigations of social behaviors and their underlying physiological mechanisms.
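
USVCAM's internals are not detailed here, but the core operation of any phased-microphone-array localizer is estimating time differences of arrival (TDOA) across microphone pairs. A minimal two-microphone sketch; the sample rate, microphone spacing, and signals are illustrative assumptions:

```python
import numpy as np

FS = 250_000            # assumed sample rate for ultrasonic recordings
SPEED_OF_SOUND = 343.0  # m/s
MIC_SPACING_M = 0.05    # assumed spacing of one microphone pair

def tdoa_azimuth(sig_a, sig_b):
    """Estimate source azimuth (deg) for one microphone pair from the
    time difference of arrival at the cross-correlation peak.
    Negative azimuth means the source is on mic A's side."""
    xcorr = np.correlate(sig_a, sig_b, mode="full")
    lag = int(xcorr.argmax()) - (len(sig_b) - 1)   # samples; < 0 if A leads
    tdoa = lag / FS
    sin_az = np.clip(tdoa * SPEED_OF_SOUND / MIC_SPACING_M, -1.0, 1.0)
    return np.degrees(np.arcsin(sin_az))

# Synthetic check: the same burst reaches mic B 5 samples later than mic A,
# so the estimated azimuth should come out negative (toward mic A).
rng = np.random.default_rng(2)
burst = rng.standard_normal(4096)
sig_a = burst
sig_b = np.concatenate([np.zeros(5), burst[:-5]])  # delayed copy
print(f"estimated azimuth: {tdoa_azimuth(sig_a, sig_b):.1f} deg")
```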


Author(s): Maike Klingel, Bernhard Laback

Normal-hearing (NH) listeners rely on two binaural cues, the interaural time difference (ITD) and the interaural level difference (ILD), for azimuthal sound localization. Cochlear-implant (CI) listeners, however, rely almost entirely on ILDs. One reason is that present-day clinical CI stimulation strategies do not convey salient ITD cues. But even when ITDs are presented under optimal conditions using a research interface, ITD sensitivity is lower in CI than in NH listeners. Since it has recently been shown that NH listeners change their ITD/ILD weighting when only one of the cues is consistent with visual information, such reweighting might add to the low perceptual contribution of ITDs in CI listeners, given their daily exposure to reliable ILDs but unreliable ITDs. Six bilateral CI listeners completed a multi-day lateralization training that visually reinforced ITDs, flanked by pre- and post-measurements of ITD/ILD weights without visual reinforcement. Using direct electric stimulation, we presented 100- and 300-pps pulse trains at a single interaurally place-matched electrode pair, conveying ITDs and ILDs in various spatially consistent and inconsistent combinations. The listeners' task was to lateralize the stimuli in a virtual environment. Additionally, ITD and ILD thresholds were measured before and after the training. For 100-pps stimuli, the lateralization training increased the contribution of ITDs slightly but significantly. Thresholds were neither affected by the training nor correlated with the weights. For 300-pps stimuli, ITD weights were lower and ITD thresholds larger, but there was no effect of training. On average across test sessions, adding azimuth-dependent ITDs to stimuli containing ILDs increased the extent of lateralization for both 100- and 300-pps stimuli. The results suggest that low-rate ITD cues, if robustly encoded by future CI systems, may become better exploitable for sound localization after their perceptual weight is increased via training.
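
The weighting paradigm described above presents spatially inconsistent ITD/ILD combinations and infers each cue's perceptual weight from the lateralization responses. A minimal sketch of that inference as a least-squares regression; the trial count, azimuth ranges, and the simulated listener's weights are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical trials: each cue independently signals an azimuth (deg),
# including spatially inconsistent ITD/ILD combinations.
n_trials = 200
itd_az = rng.uniform(-45, 45, n_trials)
ild_az = rng.uniform(-45, 45, n_trials)

# Simulated listener with a true ITD weight of 0.3 (ILD weight 0.7)
# plus response noise.
responses = 0.3 * itd_az + 0.7 * ild_az + rng.normal(0, 5, n_trials)

# Least-squares estimate of the cue weights from the responses.
X = np.column_stack([itd_az, ild_az])
(w_itd, w_ild), *_ = np.linalg.lstsq(X, responses, rcond=None)
print(f"estimated weights: ITD = {w_itd:.2f}, ILD = {w_ild:.2f}")
```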


Author(s): Xinyuan Qian, Bidisha Sharma, Amine El Abridi, Haizhou Li

2021, Vol 12
Author(s): Dennis McFadden

Earwitnesses to the 1963 assassination of President John F. Kennedy (JFK) did not agree about the location of the gunman, even though their judgments about the number and timing of the gunshots were reasonably consistent. Even earwitnesses at the same general location disagreed. An examination of the acoustics of supersonic bullets and the characteristics of human sound localization helps explain the general disagreement about the origin of the gunshots. The key fact is that, for many earwitnesses, the shock wave produced by the supersonic bullet arrived before the muzzle blast, and the shock wave provides erroneous information about the origin of the gunshot. During the government's official re-enactment of the JFK assassination in 1978, expert observers were highly accurate in localizing gunshots fired from either of two locations, but their supplementary observations help explain the absence of a consensus among the earwitnesses to the assassination itself.
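
A deliberately simplified arrival-time calculation shows why the shock wave misleads: it can reach a listener well before the muzzle blast, and from a different direction. All distances and the bullet speed below are assumptions chosen only to illustrate the effect, and the Mach-cone geometry is ignored:

```python
# Assumed values, for illustration only.
BULLET_SPEED = 600.0  # m/s, a supersonic rifle bullet
SOUND_SPEED = 343.0   # m/s

muzzle_to_listener = 100.0  # m, straight-line distance rifle -> listener
bullet_travel = 95.0        # m, bullet path length to its closest approach
shock_to_listener = 20.0    # m, from that point to the listener

# The muzzle blast travels from the rifle at the speed of sound; the shock
# wave rides with the bullet, then radiates to the listener at sound speed.
t_muzzle_blast = muzzle_to_listener / SOUND_SPEED
t_shock_wave = bullet_travel / BULLET_SPEED + shock_to_listener / SOUND_SPEED
print(f"muzzle blast arrives after {t_muzzle_blast * 1000:.0f} ms")
print(f"shock wave arrives after  {t_shock_wave * 1000:.0f} ms")
```

With these assumed numbers the shock wave arrives roughly 75 ms before the muzzle blast, from a point on the bullet's path rather than from the rifle, which is consistent with the localization errors described above.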

