Bats use a neuronally implemented computational acoustic model to form sonar images

2012 ◽  
Vol 22 (2) ◽  
pp. 311-319 ◽  
Author(s):  
James A Simmons
2012 ◽  
Vol 71 (17) ◽  
pp. 1589-1597 ◽  
Author(s):  
I.Sh. Nevlyudov ◽  
A.M. Tsimbal ◽  
S.S. Milyutina ◽  
V.Y. Sharkovsky

2019 ◽  
Author(s):  
Masashi Aso ◽  
Shinnosuke Takamichi ◽  
Norihiro Takamune ◽  
Hiroshi Saruwatari

Energies ◽  
2020 ◽  
Vol 13 (8) ◽  
pp. 2048
Author(s):  
Jianfeng Zhu ◽  
Wenguo Luo ◽  
Yuqing Wei ◽  
Cheng Yan ◽  
Yancheng You

The buzz phenomenon of a typical supersonic inlet is analyzed on the basis of numerical simulations and duct acoustic theory. Since a choked inlet can be treated as a duct with one end closed, a one-dimensional (1D) mathematical model based on duct acoustic theory is proposed to describe the periodic pressure oscillations of the little buzz and the big buzz. The results of the acoustic model agree well with those of the numerical simulations and the experimental data, verifying that the dominant oscillation patterns of the little buzz and the big buzz are closely related to the first and second resonant modes of the standing wave, respectively. The discrepancies between the numerical simulation and the ideal acoustic model may be attributed to viscous damping in the fluid oscillation system. To explore this damping, a small perturbation jet is introduced to trigger resonance of the buzz system, and the nonlinear amplification effect of resonance can help estimate the damping. Comparing the linear acoustic model with the nonlinear simulation yields pressure-oscillation damping values of 0.33 for the little buzz and 0.16 for the big buzz, which can be regarded as estimates of the real damping.
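The standing-wave picture the abstract invokes can be sketched with the textbook quarter-wave relation for a duct closed at one end, where the n-th resonant frequency is f_n = (2n − 1)·c / (4L). This is a generic illustration, not the paper's model: the duct length, speed of sound, and function name below are illustrative assumptions.

```python
# Illustrative sketch only: resonances of a duct with one closed end
# (quarter-wave resonator), f_n = (2n - 1) * c / (4 * L).
# Values are assumed, not taken from the paper.

def closed_duct_resonances(length_m, speed_of_sound=343.0, n_modes=2):
    """Return the first n_modes resonant frequencies (Hz)."""
    return [(2 * n - 1) * speed_of_sound / (4.0 * length_m)
            for n in range(1, n_modes + 1)]

# Example: a 1 m duct. Note the second mode is three times the first,
# which is the kind of mode spacing that distinguishes the oscillation
# patterns of the little buzz and the big buzz in the abstract.
f1, f2 = closed_duct_resonances(1.0)
print(f1, f2)  # → 85.75 257.25
```

The closed end imposes a velocity node and the open end a pressure node, which is why only odd multiples of the quarter-wave fundamental appear.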


IEEE Access ◽  
2021 ◽  
Vol 9 ◽  
pp. 29416-29428
Author(s):  
Xiaoming Qin ◽  
Xiaowen Luo ◽  
Ziyin Wu ◽  
Jihong Shang

Author(s):  
Ryo Nishikimi ◽  
Eita Nakamura ◽  
Masataka Goto ◽  
Kazuyoshi Yoshii

This paper describes an automatic singing transcription (AST) method that estimates a human-readable musical score of a sung melody from an input music signal. Because of the considerable pitch and temporal variation of a singing voice, a naive cascading approach that estimates an F0 contour and quantizes it with estimated tatum times cannot avoid many pitch and rhythm errors. To solve this problem, we formulate a unified generative model of a music signal that consists of a semi-Markov language model representing the generative process of latent musical notes conditioned on musical keys, and an acoustic model based on a convolutional recurrent neural network (CRNN) representing the generative process of an observed music signal from the notes. The resulting CRNN-HSMM hybrid model enables us to estimate the most likely musical notes from a music signal with the Viterbi algorithm, while leveraging both grammatical knowledge about musical notes and the expressive power of the CRNN. The experimental results showed that the proposed method outperformed the conventional state-of-the-art method and that integrating the musical language model with the acoustic model has a positive effect on AST performance.
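The decoding step the abstract mentions can be illustrated with a plain Viterbi pass over log-scores. This is a minimal generic sketch, not the authors' CRNN-HSMM: the semi-Markov duration modeling is omitted, and the toy initial/transition/emission scores are assumptions for illustration only.

```python
# Minimal generic Viterbi sketch (HMM form). The paper's CRNN-HSMM adds
# explicit note-duration (semi-Markov) modeling, omitted here for brevity.
import math

def viterbi(log_init, log_trans, log_emit):
    """Return the most likely state path.

    log_init[s]     -- log-score of starting in state s
    log_trans[p][s] -- log-score of moving from state p to state s
    log_emit[t][s]  -- log-score of state s at frame t (e.g. CRNN output)
    """
    n_states = len(log_init)
    delta = [log_init[s] + log_emit[0][s] for s in range(n_states)]
    back = []
    for frame in log_emit[1:]:
        prev, delta, ptr = delta, [], []
        for s in range(n_states):
            best = max(range(n_states), key=lambda p: prev[p] + log_trans[p][s])
            ptr.append(best)
            delta.append(prev[best] + log_trans[best][s] + frame[s])
        back.append(ptr)
    # Trace the best path backwards through the stored pointers.
    path = [max(range(n_states), key=lambda s: delta[s])]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return path[::-1]

# Toy two-state example with assumed scores: emissions favor state 0 at
# frame 0 and state 1 at frame 1, so the decoded path is [0, 1].
L = math.log
path = viterbi([L(0.5), L(0.5)],
               [[L(0.5), L(0.5)], [L(0.5), L(0.5)]],
               [[L(0.9), L(0.1)], [L(0.1), L(0.9)]])
print(path)  # → [0, 1]
```

In the paper's setting the emission scores would come from the CRNN acoustic model and the transition scores from the key-conditioned note language model, which is how the decoder combines both sources of knowledge in a single pass.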

