sound target
Recently Published Documents


TOTAL DOCUMENTS: 9 (FIVE YEARS: 3)
H-INDEX: 2 (FIVE YEARS: 0)

2021 ◽  
pp. 100057
Author(s):  
O. Sorg ◽  
T. Nocera ◽  
F. Fontao ◽  
N. Castex Rizzi ◽  
L. Garidou ◽  
...  

2020 ◽  
Author(s):  
Jan-Willem Wasmann ◽  
Arno Janssen ◽  
Martijn Agterberg

We present a mobile sound localization setup suitable for measuring horizontal and vertical sound localization in children and adult patients in the convenience of their own environment. The setup measures a person's localization performance in a sophisticated way: researchers can travel to subjects, so studies are not limited by the willingness of participants to visit the clinic. Sounds are presented within a partial sphere in both the horizontal (−70° to +70° azimuth) and vertical (−35° to +40° elevation) planes. Participants indicate the perceived sound origin by pointing with a head-mounted LED. Head movements are recorded and instantly visualized (i.e., online target-response plots). Depending on the research question, the setup can be adjusted for more advanced or simplified measurements, making it suitable for a wide range of studies. The rationale for building this mobile setup was to test the horizontal (binaural hearing) and vertical (monaural hearing) sound localization abilities of children and patients who were otherwise not accessible for testing. The loudspeakers are not visible, and subjects indicate the perceived sound direction with a natural head-pointing response towards the perceived location. An advantage of the implemented pointing method is the playful manner in which children are tested: they are 'shooting' at the perceived sound target location with a head-mounted LED and have fun while performing the test.
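Target-response plots like the ones this setup produces are commonly summarized by regressing response angle on target angle and reporting the slope (gain) and intercept (bias). The Python sketch below is a minimal illustration of that standard analysis, not the authors' software; it assumes head-pointing responses have already been converted to angles in degrees, and the function name and simulated listener are hypothetical.

```python
import numpy as np

def localization_gain(targets_deg, responses_deg):
    """Fit responses = gain * targets + bias by least squares."""
    gain, bias = np.polyfit(targets_deg, responses_deg, deg=1)
    residuals = responses_deg - (gain * np.asarray(targets_deg) + bias)
    mae = np.mean(np.abs(residuals))  # mean absolute error in degrees
    return gain, bias, mae

# Hypothetical data within the setup's azimuth range (-70 to +70 degrees)
rng = np.random.default_rng(0)
targets = rng.uniform(-70, 70, size=50)
responses = 0.9 * targets + rng.normal(0, 8, size=50)  # simulated listener
gain, bias, mae = localization_gain(targets, responses)
print(f"gain={gain:.2f}, bias={bias:.1f} deg, MAE={mae:.1f} deg")
```

A gain near 1 with a small bias indicates accurate localization, while a gain near 0 suggests the listener cannot use the relevant cue (for elevation, the monaural spectral cues).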


AIP Advances ◽  
2019 ◽  
Vol 9 (10) ◽  
pp. 105120
Author(s):  
Mengran Liu ◽  
Lei Nie ◽  
Shanqiang Li ◽  
Wen Jia

Author(s):  
Sajad Gholamrezaei ◽  
Shahpour Alirezaee ◽  
Arash Ahmadi ◽  
Majid Ahmadi ◽  
Shervin Erfani

2011 ◽  
Vol 225-226 ◽  
pp. 725-728 ◽  
Author(s):  
Yan Wang ◽  
Zhi Li

The present work contributes to sound target classification for border and coastal surveillance. A new feature extraction method is proposed based on optimum wavelet packet decomposition (OWPD). According to the frequency characteristics of border and coastal surveillance sound signals, each signal is decomposed by a selective multi-scale wavelet packet decomposition (WPD) to obtain the OWPD tree. From the high-dimensional OWPD coefficients, meaningful and compact energy feature vectors are built and used as input vectors to a BP (backpropagation) neural network that classifies the surveillance sound types. Extensive experimental results show a classification accuracy of up to 94% with this feature extraction method, a 6% improvement over the method based on plain WPD.
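As a rough sketch of the general pipeline, energy features from a wavelet packet decomposition feeding a BP network, the Python example below computes normalized subband energies with PyWavelets and trains a small backpropagation network from scikit-learn. It is not the authors' OWPD implementation (their selective, optimized decomposition tree is not reproduced here); the wavelet, depth, network size, labels, and data are all assumptions.

```python
import numpy as np
import pywt
from sklearn.neural_network import MLPClassifier

def wpd_energy_features(signal, wavelet="db4", level=3):
    """Normalized subband energies from a level-`level` wavelet
    packet decomposition (2**level subbands per signal)."""
    wp = pywt.WaveletPacket(data=signal, wavelet=wavelet, maxlevel=level)
    nodes = wp.get_level(level, order="freq")  # frequency-ordered subbands
    energies = np.array([np.sum(node.data ** 2) for node in nodes])
    return energies / (energies.sum() + 1e-12)  # energy distribution

# Hypothetical training data: raw sound frames with class labels
rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 1024))  # stand-in for recorded frames
labels = rng.integers(0, 3, size=200)  # e.g., vehicle / footsteps / noise

X = np.stack([wpd_energy_features(f) for f in frames])
clf = MLPClassifier(hidden_layer_sizes=(16,), max_iter=1000, random_state=0)
clf.fit(X, labels)  # backpropagation network on the energy features
print(clf.score(X, labels))
```

Normalizing the subband energies to sum to one makes the feature vector insensitive to overall signal level, which matters when the same source is recorded at different distances.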


2009 ◽  
Vol 26 (5-6) ◽  
pp. 477-486 ◽  
Author(s):  
PHILIP M. JAEKL ◽  
LAURENCE R. HARRIS

We investigated the effect of auditory–visual sensory integration on visual tasks that were predominantly dependent on parvocellular processing. These tasks were (i) detecting metacontrast-masked targets and (ii) discriminating orientation differences between high spatial frequency Gabor patch stimuli. Sounds that contained no information relevant to either task were presented before, synchronized with, or after the visual targets, and the results were compared to conditions with no sound. Both tasks used a two-alternative forced choice technique. For detecting metacontrast-masked targets, one interval contained the visual target and both (or neither) intervals contained a sound. Sound–target synchrony within 50 ms lowered luminance thresholds for detecting the presence of a target compared to when no sound occurred or when sound onset preceded target onset. Threshold angles for discriminating the orientation of a Gabor patch consistently increased in the presence of a sound. These results are compatible with sound-induced activity in the parvocellular visual pathway increasing the visibility of flashed targets and hindering orientation discrimination.
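Thresholds in two-alternative forced choice tasks like these are often estimated with an adaptive staircase. The Python sketch below shows a generic 2-down/1-up staircase, which converges near the 70.7%-correct point of the psychometric function (Levitt, 1971); it illustrates the general technique, not the authors' exact procedure, and the simulated observer is an assumption.

```python
import numpy as np

def staircase_2afc(respond, start=1.0, step=0.1, n_reversals=8):
    """Generic 2-down/1-up adaptive staircase for a 2AFC task.

    `respond(level)` returns True for a correct trial. Two correct
    responses in a row lower the stimulus level, one error raises it;
    the threshold estimate is the mean level at the last reversals.
    """
    level, correct_run, direction = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if respond(level):
            correct_run += 1
            if correct_run == 2:  # two correct -> make the task harder
                correct_run = 0
                if direction == +1:
                    reversals.append(level)  # direction changed: reversal
                direction = -1
                level = max(level - step, step)
        else:  # one error -> make the task easier
            correct_run = 0
            if direction == -1:
                reversals.append(level)  # direction changed: reversal
            direction = +1
            level += step
    return np.mean(reversals[-6:])

# Simulated observer whose accuracy rises with stimulus level (assumption)
rng = np.random.default_rng(0)
p_correct = lambda level: 0.5 + 0.5 / (1 + np.exp(-(level - 0.5) / 0.1))
print(staircase_2afc(lambda lvl: rng.random() < p_correct(lvl)))
```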


Author(s):  
Masato Kawanishi ◽  
Ryousuke Maruta ◽  
Norikazu Ikoma ◽  
Hideaki Kawano ◽  
Hiroshi Maeda
