sound generation
Recently Published Documents


TOTAL DOCUMENTS

659
(FIVE YEARS 79)

H-INDEX

37
(FIVE YEARS 4)

2021 ◽  
Vol 933 ◽  
Author(s):  
Hamid Daryan ◽  
Fazle Hussain ◽  
Jean-Pierre Hickey

We study the sound generation mechanism of initially subsonic viscous vortex reconnection at vortex Reynolds number $Re~(\equiv \text {circulation}/\text {kinematic viscosity})=1500$ through decomposition of Lighthill's acoustic source term. The Laplacian of the kinetic energy, flexion product, enstrophy and deviation from the isentropic condition provide the dominant contributions to the acoustic source term. The overall (all time) extrema of the total source term and its dominant hydrodynamic components scale linearly with the reference Mach number $M_o$ ; the deviation from the isentropic condition shows a quadratic scaling. The significant sound arising from the flexion product occurs due to the coiling and uncoiling of the twisted vortex filaments wrapping around the bridges, when a rapid strain is induced on the filaments by the repulsion of the bridges. The spatial distributions of the various acoustic source terms reveal the importance of mutual cancellations among most of the terms; this also highlights the importance of symmetry breaking in the sound generation during reconnection. Compressibility acts to delay the start of the sequence of reconnection events, as long as shocklets, if formed, are sufficiently weak to not affect the reconnection. The delayed onset has direct ramifications for the sound generation by enhancing the velocity of the entrained jet between the vortices and increasing the spatial gradients of the acoustic source terms. Consistent with the near-field pressure, the overall maximum instantaneous sound pressure level in the far field has a quadratic dependence on $M_o$ . Thus, reconnection becomes an even more dominant sound-generating event at higher $M_o$ .
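For orientation, the decomposition referred to above builds on Lighthill's acoustic analogy; the following is a sketch of the standard form (notation assumed here, not quoted from the paper):

$$\frac{\partial^2 \rho'}{\partial t^2} - c_o^2\,\nabla^2 \rho' = \frac{\partial^2 T_{ij}}{\partial x_i\,\partial x_j},\qquad T_{ij} = \rho u_i u_j + \left(p' - c_o^2 \rho'\right)\delta_{ij} - \tau_{ij}.$$

Applying the standard vector identity to the momentum-flux part yields the hydrodynamic contributions named in the abstract:

$$\nabla \cdot \left[(\boldsymbol{u}\cdot\nabla)\,\boldsymbol{u}\right] = \nabla^2\!\left(\tfrac{1}{2}|\boldsymbol{u}|^2\right) + \boldsymbol{u}\cdot(\nabla\times\boldsymbol{\omega}) - |\boldsymbol{\omega}|^2,$$

with the flexion product $\boldsymbol{u}\cdot(\nabla\times\boldsymbol{\omega})$, the enstrophy $|\boldsymbol{\omega}|^2$, and $(p' - c_o^2\rho')$ measuring the deviation from the isentropic condition.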


2021 ◽  
Vol 4 ◽  
pp. 48-51
Author(s):  
Semen Gorokhovskyi ◽  
Artem Laiko

The Euclidean algorithm has been known to humanity for more than two thousand years, and over that period it has found applications across many disciplines, music among them. Its first musical application appeared in 2005, when researchers observed a correlation between the rhythms of world music and the output of the Euclidean algorithm, introducing the concept of Euclidean rhythms.

In the modern world, music can be created using many approaches. The simplest is purely analogue: an analogue signal is a sound wave emitted by the vibration of some medium. Once recorded onto a computer hard drive or other storage, the signal is called digital, and methods of digital signal processing can be applied to it. The ability to convert an analogue signal, or to create and modulate sounds digitally, opens many possibilities for sound design and production: sonic characteristics that were previously inaccessible because of the limitations of analogue devices and instruments are now attainable. The sound generation process usually consists of modulating a waveform and its frequency, and it can be shaped by many factors such as oscillation and an FX pipeline. Programs that process a synthesised or recorded signal are called VST plugins, and they rely on the concepts of digital signal processing.

This paper investigates the application of Euclidean rhythms to the sound generation process by creating a VST plugin that modulates the incoming signal with one of four basic wave shapes (sine, triangle, square, and sawtooth), selected according to the value received from a Euclidean rhythm generator, in order to achieve unique sonic qualities. Switching between modulating functions introduces subharmonics, yielding a richer, tighter sound, as can be seen in the spectrograms provided in the publication.
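A Euclidean rhythm E(k, n) distributes k onsets as evenly as possible over n steps. The abstract does not show the plugin's actual generator, so the function below is an illustrative sketch of one common accumulator formulation of the idea:

```python
def euclidean_rhythm(pulses, steps):
    """Distribute `pulses` onsets as evenly as possible over `steps` slots.

    Returns a list of 0/1 values, 1 marking an onset. Uses the
    accumulator ("bucket") formulation: each step adds `pulses` to the
    bucket, and an onset fires whenever the bucket overflows `steps`.
    """
    if pulses <= 0 or steps <= 0 or pulses > steps:
        raise ValueError("need 0 < pulses <= steps")
    pattern = []
    bucket = 0
    for _ in range(steps):
        bucket += pulses
        if bucket >= steps:
            bucket -= steps
            pattern.append(1)   # onset
        else:
            pattern.append(0)   # rest
    return pattern

# E(3, 8) -> [0, 0, 1, 0, 0, 1, 0, 1], a rotation of the "tresillo"
print(euclidean_rhythm(3, 8))
```

In a plugin, such a pattern would be read step by step at the host tempo, with each value selecting which of the four modulating wave shapes is applied.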


Author(s):  
Dominik Mayrhofer ◽  
Manfred Kaltenbacher

In this paper, we consider the general idea of Digital Sound Reconstruction (DSR) and analyze its inherent limitations. Based on this discussion, a new method, which we call Advanced Digital Sound Reconstruction (ADSR), is introduced and analyzed in detail. This method aims to overcome the problems of classical DSR by introducing shutter gates, and it focuses on sound generation in the low-frequency domain. Combining the idea of classical DSR with a redirection mechanism yields a gain of 20 dB per decade in sound pressure as the frequency decreases. We present multiple array designs and possible embodiments for ADSR, as well as an in-depth view of excitation and optimization approaches. Finally, numerical investigations demonstrate the potential of ADSR, especially in the mid- to low-frequency range.


2021 ◽  
Vol 33 (11) ◽  
pp. 4057
Author(s):  
Tias Kurniati ◽  
Chuan-Kai Yang ◽  
Tzer-Shyong Chen ◽  
Yu-Fang Chung ◽  
Yu-Min Huang ◽  
...  

2021 ◽  
Vol 150 (5) ◽  
pp. 3485-3499
Author(s):  
Alexander Lodermeyer ◽  
Eman Bagheri ◽  
Stefan Kniesburges ◽  
Christoph Näger ◽  
Judith Probst ◽  
...  
Keyword(s):  

2021 ◽  
Author(s):  
Xubo Liu ◽  
Turab Iqbal ◽  
Jinzheng Zhao ◽  
Qiushi Huang ◽  
Mark D. Plumbley ◽  
...  

Author(s):  
Xuliang Liu ◽  
Yong Luo ◽  
Shuhai Zhang ◽  
Hu Li ◽  
Zhaolin Fan ◽  
...  

2021 ◽  
Vol 17 (9) ◽  
pp. e1009361
Author(s):  
Mehrdad Shahmohammadi ◽  
Hongxing Luo ◽  
Philip Westphal ◽  
Richard N. Cornelussen ◽  
Frits W. Prinzen ◽  
...  

We propose a novel, two-degree-of-freedom mathematical model of mechanical vibrations of the heart that generates heart sounds in CircAdapt, a complete real-time model of the cardiovascular system. Heart sounds during rest, exercise, biventricular (BiVHF), left ventricular (LVHF), and right ventricular heart failure (RVHF) were simulated to examine model functionality under various conditions. Simulated and experimental heart sound components showed both qualitative and quantitative agreement in terms of heart sound morphology, frequency, and timing. The maximum rate of left ventricular pressure rise (LV dp/dt_max) and the first heart sound (S1) amplitude were proportional to exercise level; the relation of the second heart sound (S2) amplitude to exercise level was less pronounced. BiVHF resulted in an amplitude reduction of S1. LVHF resulted in reverse splitting of S2 and an amplitude reduction of only the left-sided heart sound components, whereas RVHF resulted in prolonged splitting of S2 and only a mild amplitude reduction of the right-sided heart sound components. In conclusion, our hemodynamics-driven mathematical model provides fast and realistic simulations of heart sounds under various conditions and may help identify new indicators for the diagnosis and prognosis of cardiac diseases.

New & noteworthy: To the best of our knowledge, this is the first hemodynamics-based heart sound generation model embedded in a complete real-time computational model of the cardiovascular system. Simulated heart sounds are similar to experimental and clinical measurements, both quantitatively and qualitatively. Our model can be used to investigate the relationships between heart sound acoustic features and hemodynamic factors and anatomical parameters.
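The two-degree-of-freedom idea can be illustrated with a toy model: two damped oscillators, standing in for two vibration modes, are excited by an impulse (analogous to a rapid valve-closure transient) and their outputs summed. All parameter values below are arbitrary assumptions for illustration and are not CircAdapt's equations:

```python
import numpy as np

def simulate(f1=50.0, f2=120.0, zeta=0.05, fs=4000, dur=0.2):
    """Sum of two damped modes excited by an initial velocity impulse.

    f1, f2: modal frequencies in Hz (illustrative values);
    zeta: damping ratio; fs: sample rate; dur: duration in seconds.
    """
    w1, w2 = 2 * np.pi * f1, 2 * np.pi * f2
    n = int(fs * dur)
    dt = 1.0 / fs
    x = np.zeros((2, n))          # displacements of the two modes
    v = np.array([1.0, 0.5])      # impulse: initial modal velocities
    for i in range(1, n):
        # damped-oscillator accelerations, one per mode
        a = np.array([
            -2 * zeta * w1 * v[0] - w1**2 * x[0, i - 1],
            -2 * zeta * w2 * v[1] - w2**2 * x[1, i - 1],
        ])
        v = v + dt * a            # semi-implicit Euler step
        x[:, i] = x[:, i - 1] + dt * v
    return x.sum(axis=0)          # superposed vibration trace

sound = simulate()                # decaying two-mode transient
```

The decaying superposition of two frequencies is what gives heart-sound-like transients their characteristic morphology; a full model additionally couples the modes to simulated hemodynamic pressures.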


2021 ◽  
Vol 11 (16) ◽  
pp. 7546
Author(s):  
Katashi Nagao ◽  
Kaho Kumon ◽  
Kodai Hattori

In building-scale VR, where the entire interior of a large building is a virtual space that users can walk around in, it is very important to handle movable objects that exist in the real world but not in the virtual space. We propose a mechanism to dynamically detect such objects (those not embedded in the virtual space) in advance and then generate a sound when one is hit with a virtual stick. Moreover, in a large indoor virtual environment there may be multiple users at the same time, and their presence may be perceived by hearing as well as by sight, e.g., through sounds such as footsteps. We therefore use a GAN-based deep learning system to generate the impact sound of any object. First, to display a real-world object visually in virtual space, its 3D data is captured using an RGB-D camera and saved along with its position information. At the same time, we take an image of the object, break it down into parts, estimate the material of each part, generate the corresponding sound, and associate the sound with that part. When a VR user hits the object virtually (e.g., with a virtual stick), the sound is played. We demonstrate that users can judge the material from the sound, confirming the effectiveness of the proposed method.

