The what and where of synchronous sound perception

2021 ◽  
Author(s):  
Guus C. van Bentum ◽  
John Van Opstal ◽  
Marc Mathijs van Wanrooij

Sound localization and identification are challenging in acoustically rich environments. The relation between these two processes is still poorly understood. As natural sound sources rarely occur exactly simultaneously, we wondered whether the auditory system could identify ('what') and localize ('where') two spatially separated sounds with synchronous onsets. While listeners typically report hearing a single source at an average location, one study found that both sounds may be accurately localized if listeners are explicitly told that two sources exist. Here we tested whether simultaneous source identification (one vs. two) and localization are possible, by letting listeners choose to make either one or two head-orienting saccades to the perceived location(s). Results show that listeners could identify two sounds only when they were presented on different sides of the head, and that identification accuracy increased with their spatial separation. Notably, listeners were unable to accurately localize either sound, irrespective of whether one or two sounds were identified. Instead, the first (or only) response always landed near the average location, while second responses were unrelated to the targets. We conclude that localization of synchronous sounds in the absence of prior information is impossible. We discuss how the putative cortical 'what' pathway may fail to transmit relevant information to the 'where' pathway. We examine how a broadband interaural correlation cue could help to correctly identify the presence of two sounds without enabling their localization. We propose that the persistent averaging behavior reveals that the 'where' system intrinsically assumes that synchronous sounds originate from a single source.
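
The broadband interaural correlation cue invoked here lends itself to a simple numerical illustration. The sketch below is not the authors' code; the sample rate, ITD values, and the pure-delay ear model are illustrative assumptions. It shows that with one source the peak normalized interaural cross-correlation approaches 1, while two synchronous sources at different ITDs decorrelate the ear signals, signaling 'two sounds' without revealing where either one is.

```python
# One broadband source: ear signals are delayed copies, peak IACC ~ 1.
# Two synchronous sources at different ITDs: ear signals decorrelate.
import numpy as np

fs = 44100                       # sample rate (Hz), an arbitrary choice
n = fs // 2                      # 0.5 s of broadband noise
rng = np.random.default_rng(0)

def ear_signals(itds_samples):
    """Sum independent noise sources, each lateralized by its ITD (in samples)."""
    left = np.zeros(n)
    right = np.zeros(n)
    for itd in itds_samples:
        src = rng.standard_normal(n)
        left += src
        right += np.roll(src, itd)   # crude pure-delay model of the ITD
    return left, right

def peak_iacc(left, right, max_lag=40):
    """Peak of the normalized interaural cross-correlation over +/- max_lag samples."""
    corrs = [np.corrcoef(left, np.roll(right, lag))[0, 1]
             for lag in range(-max_lag, max_lag + 1)]
    return max(corrs)

one = ear_signals([+15])           # single source at one ITD
two = ear_signals([+15, -15])      # two synchronous sources, opposite sides
print(f"one source : peak IACC = {peak_iacc(*one):.2f}")   # close to 1.0
print(f"two sources: peak IACC = {peak_iacc(*two):.2f}")   # clearly below 1
```

The reduced peak flags the presence of a second source, but its location within each lag of the correlation function remains ambiguous, consistent with the identification-without-localization result above.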

1979 ◽  
Vol 42 (6) ◽  
pp. 1518-1526 ◽  
Author(s):  
J. L. Cranford

1. A currently unresolved question concerning the effects of auditory decortication on sound localization is whether operated animals retain a normal capacity for discriminating the small interaural differences in phase angle or intensity that result from the spatial separation of sound sources relative to the head. The present experiment was designed to provide data relevant to this question. 2. Four normal and three operated cats (bilateral ablations of AI, AII, Ep, SII, I-T), wearing stereo headsets, were tested with an active avoidance procedure to detect reversals in the interaural phase-angle or intensity relations of binaural 1-kHz tones. For both groups of cats, the detection thresholds for interaural intensity and phase angle were found to be close to 1 dB and 5 degrees, respectively. 3. In addition, we found that both unoperated and operated cats exhibited positive transfer from the original lateralization task, which involved detecting interaural reversals of phase angle or intensity, to a new test that required the cats to identify, in an absolute sense, which ear received the leading or louder signal. 4. Thus, the present investigation provides additional evidence that the neocortex has no primary sensory role in sound localization.
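
The dichotic stimuli in this paradigm are easy to specify precisely. The sketch below generates binaural 1-kHz tone pairs differing only in interaural phase angle or interaural intensity, the two cues whose reversal the cats were trained to detect; the sample rate and duration are assumed values (the original study used analog equipment).

```python
# Dichotic 1-kHz tones with a controlled interaural phase or level difference.
import numpy as np

fs = 48000
t = np.arange(int(0.3 * fs)) / fs       # 300-ms tone, an assumed duration

def binaural_tone(ipd_deg=0.0, ild_db=0.0, freq=1000.0):
    """Left/right 1-kHz tones with given interaural phase (deg) and level (dB) differences."""
    left = np.sin(2 * np.pi * freq * t)
    right = np.sin(2 * np.pi * freq * t + np.radians(ipd_deg)) * 10 ** (ild_db / 20)
    return left, right

# near-threshold stimuli, using the thresholds reported above (~5 deg, ~1 dB)
phase_pair = binaural_tone(ipd_deg=5.0)   # phase-angle cue only
level_pair = binaural_tone(ild_db=1.0)    # intensity cue only
print(phase_pair[0].shape, f"right-ear gain: {level_pair[1].max():.3f}")
```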


Acta Acustica ◽  
2020 ◽  
Vol 5 ◽  
pp. 3
Author(s):  
Aida Hejazi Nooghabi ◽  
Quentin Grimal ◽  
Anthony Herrel ◽  
Michael Reinwald ◽  
Lapo Boschi

We implement a new algorithm to model acoustic wave propagation through and around a dolphin skull, using the k-Wave software package [1]. The equation of motion is integrated numerically in a complex three-dimensional structure via a pseudospectral scheme which, importantly, accounts for lateral heterogeneities in the mechanical properties of bone. Modeling wave propagation in the dolphin skull contributes to our understanding of how dolphin sound localization and echolocation mechanisms work. Dolphins are known to be highly effective at localizing sound sources; in particular, they have been shown to be equally sensitive to changes in the elevation and azimuth of a sound source, whereas other studied species, e.g. humans, are much more sensitive to the latter than to the former. A laboratory experiment conducted by our team on a dry skull [2] showed that sound reverberating in the bones could play an important role in enhancing localization accuracy, and it has been speculated that the dolphin sound localization system could rely on the analysis of this information. We employ our new numerical model to simulate the response of the same skull used by [2] to sound sources at a wide and dense set of locations on the vertical plane. This work is a first step towards a new tool for modeling sound-source (echo)location in dolphins; in future work, it will allow us to efficiently explore a wide variety of emitted signals and anatomical features.
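
For readers unfamiliar with pseudospectral time-domain schemes, the toy sketch below illustrates the core idea that k-Wave builds on: spatial derivatives are evaluated exactly in the Fourier domain while time is advanced with finite differences, and the sound-speed map can vary laterally, as with bone embedded in water or soft tissue. This is a minimal 2-D scalar-wave illustration under simplifying assumptions (grid size, speeds, no absorbing boundaries), not k-Wave itself.

```python
# 2-D pseudospectral solution of p_tt = c(x,y)^2 * Laplacian(p):
# exact spectral spatial derivatives, leapfrog time stepping.
import numpy as np

nx = ny = 256
dx = 1e-3                               # 1-mm grid spacing (illustrative)
c_water, c_bone = 1500.0, 3000.0        # rough sound speeds (m/s)

c = np.full((nx, ny), c_water)
c[100:140, 100:140] = c_bone            # a crude laterally heterogeneous "bone" block

dt = 0.3 * dx / c.max()                 # stable time step for this scheme
kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
k2 = kx[:, None] ** 2 + ky[None, :] ** 2   # |k|^2 on the grid

def laplacian(p):
    """Spectral Laplacian: multiply by -|k|^2 in the Fourier domain."""
    return np.real(np.fft.ifft2(-k2 * np.fft.fft2(p)))

p_prev = np.zeros((nx, ny))
p = np.zeros((nx, ny))
p[nx // 2, ny // 4] = 1.0               # impulsive point source

for _ in range(400):                    # leapfrog update
    p_next = 2 * p - p_prev + (c * dt) ** 2 * laplacian(p)
    p_prev, p = p, p_next

print("peak pressure after 400 steps:", np.abs(p).max())
```

The FFT-based derivatives make the spatial error negligible on smooth fields, which is why such schemes can tolerate coarser grids than finite differences; k-Wave adds, among other things, k-space corrections, absorbing layers, and elastic media on top of this basic machinery.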


2011 ◽  
Vol 7 (6) ◽  
pp. 836-839 ◽  
Author(s):  
Josefin Starkhammar ◽  
Patrick W. Moore ◽  
Lois Talmadge ◽  
Dorian S. Houser

Recent recordings of dolphin echolocation using a dense array of hydrophones suggest that the echolocation beam is dynamic: at times it consists of a single dominant peak, while at other times it consists of forward-projected primary and secondary peaks of similar energy that partially overlap in space and frequency bandwidth. The spatial separation of the peaks creates a region in front of the dolphin where the spectral magnitude drops off quickly for certain frequency bands. This region is potentially used to optimize prey localization by directing the maximum pressure slope of the echolocation beam, rather than the maximum pressure peak, at the target. The dolphin was also able to steer the beam horizontally to a greater extent than previously described. The complex and dynamic sound field generated by the echolocating dolphin may be due to the use of two sets of phonic lips as sound sources, or to an unknown complexity in the sound propagation paths or acoustic properties of the dolphin's forehead tissues.
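
The two-source hypothesis in the last sentence can be made concrete with an idealized interference calculation. In the sketch below, the frequency, emitter separation, and phase offsets are assumed values chosen to make the effect visible, not measured dolphin parameters; it shows that two coherent emitters produce multiple forward lobes whose directions shift with the relative phase between the sources, i.e., a steerable multi-peaked beam.

```python
# Far-field interference pattern of two coherent point emitters at +/- d/2.
import numpy as np

c = 1500.0                   # speed of sound in seawater (m/s)
f = 60e3                     # assumed echolocation frequency (Hz)
d = 0.05                     # assumed emitter separation (m)
k = 2 * np.pi * f / c        # acoustic wavenumber

theta = np.radians(np.linspace(-60, 60, 961))   # forward azimuths
for phase in (0.0, np.pi / 3):                  # in-phase vs. phase-offset emitters
    arg = k * (d / 2) * np.sin(theta)
    pattern = np.abs(np.exp(1j * arg) + np.exp(-1j * arg + 1j * phase))
    # report the directions of local maxima (the beam's lobes)
    lobes = [f"{np.degrees(theta[i]):+.1f}" for i in range(1, len(theta) - 1)
             if pattern[i] > pattern[i - 1] and pattern[i] >= pattern[i + 1]]
    print(f"phase offset {phase:.2f} rad -> lobes near {', '.join(lobes)} deg")
```

With zero phase offset the lobes sit symmetrically about the midline; a phase offset between the emitters shifts every lobe to one side, a simple mechanism by which two phonic-lip sources could steer a multi-peaked beam without any tissue movement.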


2020 ◽  
Author(s):  
Zekun Chen ◽  
Linning Peng ◽  
Aiqun Hu ◽  
Hua Fu

With the rapid development of the internet of things (IoT), security issues such as identity authentication have received serious attention. The radio frequency (RF) fingerprint of an IoT device is an inherent feature that can hardly be imitated. In this paper, we propose a rogue device identification technique via RF fingerprinting using a deep learning-based generative adversarial network (GAN). Unlike traditional classification problems in RF fingerprint identification, this work focuses on recognizing unknown accessing devices without prior information. A differential constellation trace figure (DCTF) generation process is first employed to transform RF fingerprint features from time-domain waveforms into 2-dimensional (2D) figures. Then, using a GAN, which is a kind of unsupervised learning algorithm, we can discriminate rogue devices without any prior information. An experimental verification system is built with 54 ZigBee devices serving as recognized and accessing devices. A USRP receiver is used to capture the signals and identify the accessing devices. Experimental results show that the proposed rogue device identification method can achieve 95% identification accuracy in a real environment.
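
The DCTF step can be sketched compactly: taking the differential of the complex baseband trace cancels the unknown carrier phase, and a 2-D histogram of the result yields the figure the network consumes. The code below is a minimal illustration; the differential interval, histogram range, and the toy QPSK impairment are assumptions, not the paper's exact settings.

```python
# Differential constellation trace figure (DCTF) from complex baseband samples.
import numpy as np

def dctf(iq, interval=1, bins=64):
    """2-D histogram of the differential constellation trace."""
    iq = iq / np.sqrt(np.mean(np.abs(iq) ** 2))       # power-normalize
    diff = iq[:-interval] * np.conj(iq[interval:])    # cancels carrier phase offset
    img, _, _ = np.histogram2d(diff.real, diff.imag,
                               bins=bins, range=[[-2, 2], [-2, 2]])
    return img / img.max()                            # grayscale image in [0, 1]

# toy example: QPSK-like symbols with a small device-specific gain imbalance
rng = np.random.default_rng(1)
symbols = np.exp(1j * (np.pi / 4 + np.pi / 2 * rng.integers(0, 4, 10000)))
impaired = symbols.real * 1.05 + 1j * symbols.imag * 0.95   # hardware impairment
print(dctf(impaired).shape)    # (64, 64) figure, ready for a CNN/GAN discriminator
```

Because the differential is insensitive to the absolute carrier phase, the shape of the histogram is dominated by hardware impairments (IQ imbalance, oscillator drift), which is what makes it usable as a fingerprint.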


2021 ◽  
Vol 17 (11) ◽  
pp. e1009569
Author(s):  
Julia C. Gorman ◽  
Oliver L. Tufte ◽  
Anna V. R. Miller ◽  
William M. DeBello ◽  
José L. Peña ◽  
...  

Emergent response properties of sensory neurons depend on circuit connectivity and somatodendritic processing. Neurons of the barn owl's external nucleus of the inferior colliculus (ICx) display emergent spatial selectivity. These neurons use interaural time difference (ITD) as a cue for the horizontal direction of sound sources. ITD is detected by upstream brainstem neurons with narrow frequency tuning, resulting in spatially ambiguous responses. This spatial ambiguity is resolved by ICx neurons integrating inputs over frequency, a processing step relevant to sound localization across species. Previous models have predicted that ICx neurons function as point neurons that linearly integrate inputs across frequency. However, the complex dendritic trees and spines of ICx neurons raise the question of whether this prediction is accurate. Data from in vivo intracellular recordings of ICx neurons were used to address this question. Results revealed diverse frequency integration properties: some ICx neurons showed responses consistent with the point-neuron hypothesis and others with nonlinear dendritic integration. Modeling showed that varied connectivity patterns and forms of dendritic processing may underlie the observed frequency integration in ICx neurons. These results corroborate the ability of neurons with complex dendritic trees to implement diverse linear and nonlinear integration of synaptic inputs, which is relevant for adaptive coding and learning and supports a fundamental mechanism in sound localization.
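
The distinction between the two hypotheses can be stated in a few lines. In the schematic sketch below (sigmoid parameters and input drives are arbitrary illustrative choices, not fitted to the recordings), a point neuron sums its frequency-channel inputs linearly before a single output nonlinearity, while dendritic integration applies a saturating nonlinearity per branch before summation; the two diverge most when drive is unevenly distributed across frequency channels.

```python
# Point-neuron vs. per-branch (dendritic) integration of frequency channels.
import numpy as np

def sigmoid(x, gain=8.0, thresh=0.8):
    """Saturating nonlinearity standing in for spike threshold / dendritic spikes."""
    return 1.0 / (1.0 + np.exp(-gain * (x - thresh)))

# drive from four narrowband (frequency-channel) inputs, strongly uneven
freq_inputs = np.array([1.0, 1.0, 0.0, 0.0])

point_neuron = sigmoid(freq_inputs.mean())   # linear summation, then one nonlinearity
dendritic = sigmoid(freq_inputs).mean()      # nonlinearity per branch, then summation

print(f"point-neuron output: {point_neuron:.2f}")   # ~0.08: the average is subthreshold
print(f"dendritic output:    {dendritic:.2f}")      # ~0.42: strong branches saturate locally
```

Intracellular recordings can separate these regimes because the membrane-potential response to multi-frequency stimulation is predicted by the sum of single-frequency responses only in the linear (point-neuron) case.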


2006 ◽  
Vol 95 (6) ◽  
pp. 3571-3584 ◽  
Author(s):  
Matthew W. Spitzer ◽  
Terry T. Takahashi

We examined the accuracy and precision with which the barn owl (Tyto alba) turns its head toward sound sources under conditions that evoke the precedence effect (PE) in humans. Stimuli consisted of 25-ms noise bursts emitted from two sources, separated horizontally by 40° and temporally by 3–50 ms. At delays of 3 to 10 ms, head turns were always directed at the leading source and were nearly as accurate and precise as turns toward single sources, indicating that the leading source dominates perception. This lead dominance is particularly remarkable, first, because on some trials the lagging source was significantly higher in amplitude than the lead, a consequence of the directionality of the owl's ears, and second, because the temporal overlap of the two sounds can degrade the binaural cues with which the owl localizes sounds. With increasing delays, the influence of the lagging source became apparent as the head saccades became increasingly biased toward it. Furthermore, on some of the trials at delays ≥20 ms, the owl turned its head first in the direction of one source and then the other, suggesting that it was able to resolve two separately localizable sources. At all delays <50 ms, response latencies were longer for paired sources than for single sources. With the possible exception of response latency, these findings demonstrate that the owl exhibits precedence phenomena in sound localization similar to those in humans and cats, and they provide a basis for comparison with neurophysiological data.
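
A toy computation helps show why onset weighting alone can produce the lead dominance reported here. In the sketch below, the sample rate, ITD magnitudes, and window lengths are assumed values: the first few milliseconds of the mixture contain only the leading source, so an ITD estimate restricted to the onset recovers the lead's direction, whereas a cross-correlation over the whole burst contains competing peaks from both sources.

```python
# Lead-lag noise bursts: onset-restricted ITD estimation recovers the lead.
import numpy as np

fs = 50000
burst = np.random.default_rng(2).standard_normal(int(0.025 * fs))  # 25-ms noise burst

def place(x, start, total):
    """Place signal x at a sample offset inside a silent buffer of length total."""
    out = np.zeros(total)
    out[start:start + len(x)] = x
    return out

def itd_estimate(left, right, max_lag=30):
    """Lag (in samples) of the peak interaural cross-correlation."""
    lags = np.arange(-max_lag, max_lag + 1)
    corr = [np.dot(left, np.roll(right, -l)) for l in lags]
    return int(lags[int(np.argmax(corr))])

itd_lead, itd_lag = 10, -10           # ~ +/-0.2-ms ITDs: sources on opposite sides
for delay_ms in (5, 20):
    d = int(delay_ms / 1000 * fs)
    total = d + len(burst) + itd_lead
    left = place(burst, 0, total) + place(burst, d, total)
    right = place(burst, itd_lead, total) + place(burst, d + itd_lag, total)
    onset = slice(0, int(0.004 * fs))             # first 4 ms: lead source only
    print(f"lag delay {delay_ms:2d} ms: onset-only ITD = "
          f"{itd_estimate(left[onset], right[onset])} samples (lead ITD = {itd_lead}); "
          f"full-burst ITD = {itd_estimate(left, right)} samples (two near-equal peaks)")
```

The onset-only estimate matches the lead at every delay, while the full-burst correlation is ambiguous between the two sources; the owl's growing bias toward the lag at longer delays suggests its binaural system weighs the lag's own resolvable onset once the sounds no longer fuse.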


1975 ◽  
Vol 63 (3) ◽  
pp. 569-585 ◽  
Author(s):  
D. L. Renaud ◽  
A. N. Popper

1. Sound localization was measured behaviourally for the Atlantic bottlenose porpoise (Tursiops truncatus) using a wide range of pure-tone pulses as well as clicks simulating the species' echolocation click. 2. Measurements of the minimum audible angle (MAA) on the horizontal plane give localization discrimination thresholds of between 2 and 3 degrees for sounds from 20 to 90 kHz and thresholds of 2.8 to 4 degrees at 6, 10 and 100 kHz. With the azimuth of the animal changed relative to the speakers, the MAAs were 1.3–1.5 degrees at an azimuth of 15 degrees and about 5 degrees at an azimuth of 30 degrees. 3. MAAs for clicks were 0.7–0.8 degrees. 4. The animal was able to determine the position of vertical sound sources almost as well as it could for horizontal localization. 5. The data indicate that at low frequencies the animal may have been localizing by using the region around the external auditory meatus as a detector, but at frequencies above 20 kHz it is likely that the animal was detecting sounds through the lateral sides of the lower jaw. 6. Above 20 kHz, it is likely that the animal was localizing using binaural intensity cues. 7. Our data support evidence that the lower jaw is an important channel for sound detection in Tursiops.


2021 ◽  
Vol 105 ◽  
pp. 291-301
Author(s):  
Wei Wang ◽  
Cheng Sheng Sun ◽  
Jia Ning Ye

With more and more malicious traffic using TLS encryption, efficient identification of TLS malicious traffic has become an increasingly important task in network security management, in order to ensure communication security and privacy. Most traditional methods for identifying TLS-encrypted malicious traffic rely only on features shared with ordinary traffic, which increases coupling among features and lowers identification accuracy. In addition, most previous work on malicious traffic identification extracted features directly from the data flow without recording the extraction process, making subsequent traceability difficult. Therefore, this paper implements an efficient feature extraction method with structural correlation for TLS malicious encrypted traffic. The feature extraction process is logged in modules, and an index is used to establish links between related information, so that context can be analysed and subsequent feature analysis and problem traceability are facilitated. Finally, a Random Forest classifier is used to realize efficient TLS malicious traffic identification, with an accuracy of up to 99.38%.
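
As a concrete illustration of the final stage, the sketch below trains a Random Forest on synthetic per-flow features. The feature set (packet-length statistics, flow duration, number of offered cipher suites) and the synthetic distributions are assumptions for illustration only; the paper's structurally correlated feature set and logging modules are not reproduced here.

```python
# Random Forest over per-flow TLS features (synthetic stand-in data).
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 2000
# assumed features: [mean packet len, std packet len, flow duration (s), cipher suites offered]
benign = rng.normal([900, 200, 2.0, 30], [150, 50, 0.8, 5], size=(n, 4))
malicious = rng.normal([400, 350, 6.0, 12], [150, 80, 2.0, 4], size=(n, 4))

X = np.vstack([benign, malicious])
y = np.array([0] * n + [1] * n)        # 0 = benign, 1 = malicious
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
print(f"accuracy on synthetic flows: {accuracy_score(y_te, clf.predict(X_te)):.3f}")
```

A tree ensemble is a natural fit here because flow features live on very different scales and interact nonlinearly, and no feature scaling is required.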


2011 ◽  
Vol 22 (06) ◽  
pp. 313-331 ◽  
Author(s):  
Véronique Vaillancourt ◽  
Chantal Laroche ◽  
Christian Giguère ◽  
Marc-André Beaulieu ◽  
Jean-Pierre Legault

Background: Auditory fitness for duty (AFFD) testing is an important element in an assessment of workers' ability to perform job tasks safely and effectively. Functional hearing is particularly critical to job performance in law enforcement. Most often, assessment is based on pure-tone detection thresholds; however, its validity can be questioned and challenged in court. In an attempt to move beyond the pure-tone audiogram, some organizations like the Royal Canadian Mounted Police (RCMP) are incorporating additional testing to supplement audiometric data in their AFFD protocols, such as measurements of speech recognition in quiet and/or in noise, and sound localization. Purpose: This article reports on the assessment of RCMP officers wearing hearing aids in speech recognition and sound localization tasks. The purpose was to quantify individual performance in different domains of hearing identified as necessary components of fitness for duty, and to document the type of hearing aids prescribed in the field and their benefit for functional hearing. The data are intended to help the RCMP make more informed decisions regarding AFFD in officers wearing hearing aids. Research Design: The proposed new AFFD protocol included unaided and aided measures of speech recognition in quiet and in noise using the Hearing in Noise Test (HINT) and sound localization in the left/right (L/R) and front/back (F/B) horizontal planes. Sixty-four officers were identified and selected by the RCMP to take part in this study on the basis of hearing thresholds exceeding current audiometrically based criteria. This article reports the results of 57 officers wearing hearing aids. Results: Based on individual results, 49% of officers were reclassified from nonoperational status to operational with limitations on fine hearing duties, given their unaided and/or aided performance. Group data revealed that hearing aids (1) improved speech recognition thresholds on the HINT, the effects being most prominent in Quiet and in conditions of spatial separation between target and noise (Noise Right and Noise Left) and smallest in Noise Front; (2) neither significantly improved nor impeded L/R localization; and (3) substantially increased F/B localization errors in a number of cases. Additional analyses also pointed to the poor ability of threshold data to predict functional abilities for speech in noise (r² = 0.26 to 0.33) and sound localization (r² = 0.03 to 0.28). Only speech in quiet (r² = 0.68 to 0.85) is predicted adequately from threshold data. Conclusions: Combined with previous findings, results indicate that the use of hearing aids can considerably affect F/B localization abilities in a number of individuals. Moreover, speech understanding in noise and sound localization abilities were poorly predicted from pure-tone thresholds, demonstrating the need to specifically test these abilities, both unaided and aided, when assessing AFFD. Finally, further work is needed to develop empirically based hearing criteria for the RCMP and identify best practices in hearing aid fittings for optimal functional hearing abilities.

