Auditory Subjective-Straight-Ahead Blurs during Significantly Slow Passive Body Rotation

i-Perception ◽  
2022 ◽  
Vol 13 (1) ◽  
pp. 204166952110706
Author(s):  
Akio Honda ◽  
Sayaka Tsunokake ◽  
Yôiti Suzuki ◽  
Shuichi Sakamoto

This paper reports the deterioration of sound-localization accuracy during listeners' head and body movements. We investigated sound-localization accuracy during passive body rotation at speeds in the range of 0.625–5°/s. Participants were asked to judge the position of a 30-ms noise stimulus relative to their subjective-straight-ahead reference. The results indicated that sound-localization resolution degraded during passive rotation irrespective of rotation speed, even at 0.625°/s.
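A left/right judgment task of this kind is typically summarized by the proportion of trials on which the reported side matches the true stimulus side. The sketch below is illustrative only; the coding of sides and the trial data are assumptions, not values from the paper:

```python
import numpy as np

def proportion_correct(judgments, true_sides):
    """Fraction of trials where the reported side (left/right of the
    subjective straight ahead) matches the true stimulus side."""
    judgments = np.asarray(judgments)
    true_sides = np.asarray(true_sides)
    return float(np.mean(judgments == true_sides))

# Hypothetical trial data for one rotation speed: +1 = right, -1 = left.
reported = [+1, +1, -1, -1, +1, -1, +1, -1]
actual   = [+1, +1, -1, +1, +1, -1, -1, -1]
acc = proportion_correct(reported, actual)  # 6 of 8 correct -> 0.75
```

Comparing this proportion across rotation speeds is one simple way to express how localization resolution changes with rotation.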

2020 ◽  
Vol 41 (1) ◽  
pp. 249-252
Author(s):  
Akio Honda ◽  
Yoji Masumi ◽  
Yôiti Suzuki ◽  
Shuichi Sakamoto

2017 ◽  
Vol 142 (4) ◽  
pp. 2676-2676
Author(s):  
Akio Honda ◽  
Sayaka Tsunokake ◽  
Yôiti Suzuki ◽  
Shuichi Sakamoto

2018 ◽  
Vol 39 (4) ◽  
pp. 305-307 ◽  
Author(s):  
Akio Honda ◽  
Sayaka Tsunokake ◽  
Yôiti Suzuki ◽  
Shuichi Sakamoto

2022 ◽  
Vol 2022 ◽  
pp. 1-11
Author(s):  
Jinfeng Zhang ◽  
Yilei Zhu ◽  
Yalin Li ◽  
Ping Huang ◽  
Hui Xu ◽  
...  

Through numerical simulations, this work analyzes the unsteady pressure pulsation characteristics of a new type of dishwasher pump with a double-tongue volute and a single-tongue volute, under static and rotating volute conditions. Performance tests were also carried out to verify the numerical results. Multiple monitoring points were set at various positions of the new dishwasher pump to collect pressure pulsation signals, and the relevant frequency content was obtained via the Fast Fourier Transform to analyze the influence of the volute tongue and its passive rotation speed on pump performance. The results reveal that when the double-tongue volute is stationary, the pressure pulsation amplitudes increase from the impeller inlet to the impeller outlet. Under the influence of the shedding vortex, the pressure pulsation in the lateral region of the tongue becomes disorganized, and the dominant frequency of the pressure pulsation shifts from the blade-passing frequency to the shaft frequency. In addition, compared with the static volute, the double-tongue volute can effectively guide the water flow away from the tongue during rotation, ensuring good periodicity of the pressure pulsation in the tongue region. Accordingly, a volute scheme with passive rotation speed is proposed in this study, which effectively improves the pressure pulsation at the tongue position and provides a new approach to rotor-stator interaction for guiding dishwasher innovation.
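The frequency analysis described above can be sketched as follows. The sampling rate, blade count, and shaft speed are illustrative assumptions, not values from the study; the point is only how an FFT of a monitoring-point signal identifies the dominant pulsation frequency (e.g., blade-passing vs. shaft frequency):

```python
import numpy as np

def dominant_frequency(signal, fs):
    """Return the frequency (Hz) of the largest non-DC peak in the
    amplitude spectrum, obtained via the Fast Fourier Transform."""
    spectrum = np.abs(np.fft.rfft(signal))
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    spectrum[0] = 0.0  # ignore the DC component
    return freqs[np.argmax(spectrum)]

# Hypothetical monitoring-point signal: shaft frequency 50 Hz, 6 blades,
# so the blade-passing frequency is 6 * 50 = 300 Hz.
fs = 8192.0
t = np.arange(0, 1.0, 1.0 / fs)
signal = 0.2 * np.sin(2 * np.pi * 50 * t) + 1.0 * np.sin(2 * np.pi * 300 * t)
peak = dominant_frequency(signal, fs)  # -> 300.0 (blade-passing frequency)
```

A shift of this peak from 300 Hz to 50 Hz in measured data would correspond to the blade-to-shaft frequency transition the abstract describes.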


Acta Acustica ◽  
2020 ◽  
Vol 5 ◽  
pp. 3
Author(s):  
Aida Hejazi Nooghabi ◽  
Quentin Grimal ◽  
Anthony Herrel ◽  
Michael Reinwald ◽  
Lapo Boschi

We implement a new algorithm to model acoustic wave propagation through and around a dolphin skull, using the k-Wave software package [1]. The equation of motion is integrated numerically in a complex three-dimensional structure via a pseudospectral scheme which, importantly, accounts for lateral heterogeneities in the mechanical properties of bone. Modeling wave propagation in the skull of dolphins contributes to our understanding of how their sound localization and echolocation mechanisms work. Dolphins are known to be highly effective at localizing sound sources; in particular, they have been shown to be equally sensitive to changes in the elevation and azimuth of the sound source, while other studied species, e.g. humans, are much more sensitive to the latter than to the former. A laboratory experiment conducted by our team on a dry skull [2] has shown that sound reverberated in bones could possibly play an important role in enhancing localization accuracy, and it has been speculated that the dolphin sound localization system could somehow rely on the analysis of this information. We employ our new numerical model to simulate the response of the same skull used by [2] to sound sources at a wide and dense set of locations on the vertical plane. This work is the first step towards the implementation of a new tool for modeling source (echo)location in dolphins; in future work, this will allow us to effectively explore a wide variety of emitted signals and anatomical features.
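As a hedged illustration of the pseudospectral idea (spatial derivatives computed in the Fourier domain, time stepping done explicitly), here is a minimal one-dimensional acoustic example. It is not the k-Wave implementation, it ignores heterogeneous bone properties, and all parameters are invented:

```python
import numpy as np

def spectral_second_derivative(p, dx):
    """d^2 p / dx^2 via FFT: differentiation in x becomes
    multiplication by (ik)^2 in the wavenumber domain."""
    k = 2 * np.pi * np.fft.fftfreq(len(p), d=dx)
    return np.real(np.fft.ifft(-(k ** 2) * np.fft.fft(p)))

# Minimal 1-D wave equation p_tt = c^2 * p_xx, leapfrog in time.
n, dx, c, dt = 256, 1.0, 1.0, 0.2
x = np.arange(n) * dx
p = np.exp(-((x - n * dx / 2) ** 2) / 20.0)  # Gaussian initial pulse
p_prev = p.copy()                            # zero initial velocity
for _ in range(200):
    lap = spectral_second_derivative(p, dx)
    p, p_prev = 2 * p - p_prev + (c * dt) ** 2 * lap, p
```

The appeal of the pseudospectral approach, as in k-Wave, is that spatial derivatives are spectrally accurate, allowing far fewer grid points per wavelength than finite differences.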


2016 ◽  
Vol 140 (4) ◽  
pp. 3269-3269
Author(s):  
Sayaka Tsunokake ◽  
Akio Honda ◽  
Yôiti Suzuki ◽  
Shuichi Sakamoto

2019 ◽  
Vol 23 ◽  
pp. 233121651984387 ◽  
Author(s):  
Stefan Zirn ◽  
Julian Angermeier ◽  
Susan Arndt ◽  
Antje Aschendorff ◽  
Thomas Wesarg

In users of a cochlear implant (CI) together with a contralateral hearing aid (HA), so-called bimodal listeners, differences in processing latency of up to 9 ms between the digital HA and the CI are constantly superimposed on interaural time differences. In the present study, the effect of this device delay mismatch on sound localization accuracy was investigated. For this purpose, localization accuracy in the frontal horizontal plane was measured with the original and with a minimized device delay mismatch. The reduction was achieved by delaying the CI stimulation according to the delay of the individually worn HA. For this, a portable, programmable, battery-powered delay line based on a ring buffer running on a microcontroller was designed and assembled. After a 1-hr acclimatization period to the delayed CI stimulation, the nine bimodal study participants showed a highly significant improvement in localization accuracy of 11.6% compared with the everyday situation without the delay line (p < .01). In conclusion, delaying CI stimulation to minimize the device delay mismatch seems to be a promising method to increase sound localization accuracy in bimodal listeners.
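A ring-buffer delay line of the kind described above can be sketched as follows. This Python version only mirrors the logic of the microcontroller implementation, and the sample rate and delay value are illustrative assumptions:

```python
class RingBufferDelay:
    """Fixed delay line: each pushed sample comes back out
    `delay_samples` samples later."""

    def __init__(self, delay_samples):
        self.buffer = [0.0] * (delay_samples + 1)
        self.write = 0

    def process(self, sample):
        read = (self.write + 1) % len(self.buffer)  # oldest stored sample
        out = self.buffer[read]
        self.buffer[self.write] = sample
        self.write = read
        return out

# Delaying CI stimulation by ~9 ms at a hypothetical 16 kHz sample rate:
fs = 16000
ci_delay = RingBufferDelay(round(0.009 * fs))  # 144-sample delay
```

Because read and write positions only advance modulo the buffer length, memory use is fixed and per-sample cost is constant, which is why this structure suits a small microcontroller.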


2003 ◽  
Vol 90 (1) ◽  
pp. 271-290 ◽  
Author(s):  
Jefferson E. Roy ◽  
Kathleen E. Cullen

Eye-head (EH) neurons within the medial vestibular nuclei are thought to be the primary input to the extraocular motoneurons during smooth pursuit: they receive direct projections from the cerebellar flocculus/ventral paraflocculus, and in turn, project to the abducens motor nucleus. Here, we recorded from EH neurons during head-restrained smooth pursuit and head-unrestrained combined eye-head pursuit (gaze pursuit). During head-restrained smooth pursuit of sinusoidal and step-ramp target motion, each neuron's response was well described by a simple model that included resting discharge (bias), eye position, and velocity terms. Moreover, eye acceleration, as well as eye position, velocity, and acceleration error (error = target movement – eye movement) signals played no role in shaping neuronal discharges. During head-unrestrained gaze pursuit, EH neuron responses reflected the summation of their head-movement sensitivity during passive whole-body rotation in the dark and gaze-movement sensitivity during smooth pursuit. Indeed, EH neuron responses were well predicted by their head- and gaze-movement sensitivity during these two paradigms across conditions (e.g., combined eye-head gaze pursuit, smooth pursuit, whole-body rotation in the dark, whole-body rotation while viewing a target moving with the head (i.e., cancellation), and passive rotation of the head-on-body). Thus our results imply that vestibular inputs, but not the activation of neck proprioceptors, influence EH neuron responses during head-on-body movements. This latter proposal was confirmed by demonstrating a complete absence of modulation in the same neurons during passive rotation of the monkey's body beneath its neck. Taken together our results show that during gaze pursuit EH neurons carry vestibular- as well as gaze-related information to extraocular motoneurons. 
We propose that this vestibular-related modulation is offset by inputs from other premotor inputs, and that the responses of vestibuloocular reflex interneurons (i.e., position-vestibular-pause neurons) are consistent with such a proposal.
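The simple model described above (a resting-discharge bias plus eye position and velocity terms) amounts to a linear regression. The sketch below fits such a model by least squares; the coefficient values and synthetic data are invented for illustration, not taken from the recordings:

```python
import numpy as np

def fit_eh_model(firing_rate, eye_position, eye_velocity):
    """Fit FR = bias + k * position + r * velocity by least squares,
    returning (bias, k, r)."""
    X = np.column_stack([np.ones_like(eye_position), eye_position, eye_velocity])
    coeffs, *_ = np.linalg.lstsq(X, firing_rate, rcond=None)
    return coeffs

# Synthetic pursuit data generated with bias=50 sp/s, k=1.5, r=0.8.
t = np.linspace(0, 2, 400)
pos = 10 * np.sin(2 * np.pi * t)           # eye position (deg)
vel = 20 * np.pi * np.cos(2 * np.pi * t)   # eye velocity (deg/s)
fr = 50 + 1.5 * pos + 0.8 * vel
bias, k, r = fit_eh_model(fr, pos, vel)    # recovers ~ (50, 1.5, 0.8)
```

In practice, the claim that acceleration and error terms "played no role" corresponds to such additional regressors failing to improve this fit.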


2001 ◽  
Vol 85 (6) ◽  
pp. 2455-2460 ◽  
Author(s):  
Paul DiZio ◽  
Richard Held ◽  
James R. Lackner ◽  
Barbara Shinn-Cunningham ◽  
Nathaniel Durlach

We measured the influence of gravitoinertial force (GIF) magnitude and direction on head-centric auditory localization to determine whether a true audiogravic illusion exists. In experiment 1, supine subjects adjusted computer-generated dichotic stimuli until they heard a fused sound straight ahead in the midsagittal plane of the head under a variety of GIF conditions generated in a slow-rotation room. The dichotic stimuli were constructed by convolving broadband noise with head-related transfer function pairs that model the acoustic filtering at the listener's ears. These stimuli give rise to the perception of externally localized sounds. When the GIF was increased from 1 to 2 g and rotated 60° rightward relative to the head and body, subjects on average set an acoustic stimulus 7.3° right of their head's median plane to hear it as straight ahead. When the GIF was doubled and rotated 60° leftward, subjects set the sound 6.8° leftward of baseline values to hear it as centered. In experiment 2, increasing the GIF in the median plane of the supine body to 2 g did not influence auditory localization. In experiment 3, tilts up to 75° of the supine body relative to the normal 1 g GIF led to small shifts, 1–2°, of auditory setting toward the up ear to maintain a head-centered sound localization. These results show that head-centric auditory localization is affected by azimuthal rotation and increase in magnitude of the GIF and demonstrate that an audiogravic illusion exists. Sound localization is shifted in the direction opposite GIF rotation by an amount related to the magnitude of the GIF and its angular deviation relative to the median plane.
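Dichotic stimuli of the kind described above are constructed by convolving one broadband noise token with a left/right head-related impulse-response (HRIR) pair. The sketch below uses toy impulse responses, not measured HRTFs, purely to show the construction:

```python
import numpy as np

def make_dichotic_stimulus(noise, hrir_left, hrir_right):
    """Convolve one broadband noise token with a left/right head-related
    impulse-response pair to produce an externally localized binaural pair."""
    return np.convolve(noise, hrir_left), np.convolve(noise, hrir_right)

rng = np.random.default_rng(0)
noise = rng.standard_normal(1024)
# Toy HRIR pair: the right ear receives the sound two samples later and
# attenuated, as if the source were on the listener's left.
hrir_l = np.array([1.0, 0.0, 0.0, 0.0])
hrir_r = np.array([0.0, 0.0, 0.7, 0.0])
left, right = make_dichotic_stimulus(noise, hrir_l, hrir_r)
```

Because both channels derive from the same noise token, the interaural time and level differences are carried entirely by the impulse-response pair, which is what lets such stimuli be adjusted until they are heard as straight ahead.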


2013 ◽  
Vol 109 (6) ◽  
pp. 1658-1668 ◽  
Author(s):  
Daniel J. Tollin ◽  
Janet L. Ruhland ◽  
Tom C. T. Yin

Sound localization along the azimuthal dimension depends on interaural time and level disparities, whereas localization in elevation depends on broadband power spectra resulting from the filtering properties of the head and pinnae. We trained cats with their heads unrestrained, using operant conditioning to indicate the apparent locations of sounds via gaze shift. Targets consisted of broadband (BB), high-pass (HP), or low-pass (LP) noise, tones from 0.5 to 14 kHz, and 1/6 octave narrow-band (NB) noise with center frequencies ranging from 6 to 16 kHz. For each sound type, localization performance was summarized by the slope of the regression relating actual gaze shift to desired gaze shift. Overall localization accuracy for BB noise was comparable in azimuth and in elevation but was markedly better in azimuth than in elevation for sounds with limited spectra. Gaze shifts to targets in azimuth were most accurate to BB, less accurate for HP, LP, and NB sounds, and considerably less accurate for tones. In elevation, cats were most accurate in localizing BB, somewhat less accurate to HP, and less yet to LP noise (although still with slopes ∼0.60), but they localized NB noise much worse and were unable to localize tones. Deterioration of localization as bandwidth narrows is consistent with the hypothesis that spectral information is critical for sound localization in elevation. For NB noise or tones in elevation, unlike humans, most cats did not have unique responses at different frequencies, and some appeared to respond with a “default” location at all frequencies.
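The performance summary described above, the slope of the regression relating actual gaze shift to desired gaze shift, can be sketched as follows; the gaze-shift numbers are invented for illustration:

```python
import numpy as np

def localization_slope(desired, actual):
    """Slope of the regression of actual gaze shift on desired gaze shift;
    a slope of 1.0 means accurate localization on average."""
    slope, _intercept = np.polyfit(desired, actual, 1)
    return slope

# Hypothetical trials in which the animal undershoots targets by ~40%,
# comparable to the ~0.60 slopes reported for low-pass noise in elevation.
desired = np.array([-30.0, -15.0, 0.0, 15.0, 30.0])
actual = 0.6 * desired
slope = localization_slope(desired, actual)  # -> 0.6
```

A single slope per sound type makes it easy to compare accuracy across bandwidths (BB vs. HP, LP, NB, and tones) and between azimuth and elevation.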

