sound direction
Recently Published Documents


TOTAL DOCUMENTS: 71 (five years: 9)
H-INDEX: 14 (five years: 1)

Sensors ◽  
2021 ◽  
Vol 22 (1) ◽  
pp. 181
Author(s):  
Chen-Jun She ◽  
Xie-Feng Cheng ◽  
Kai Wang

In this paper, a graphic representation method is used to study multiple characteristics of heart sounds, from a resting state to a state of motion, based on single- and four-channel heart-sound signals. Building on the concept of integration, we explore a representation method for heart sound and blood pressure during motion. We developed a single- and four-channel heart-sound collector and propose new concepts such as the sound-direction vector of a heart sound, the motion-response curve of a heart sound, the difference value, and the state-change-trend diagram. Based on acoustic principles, the reasons for the differences between multichannel heart-sound signals are analyzed. Through a comparative analysis of four-channel motion and resting heart sounds, from a resting state to a state of motion, the maximum and minimum similarity distances in the corresponding state-change-trend diagrams were found to be 0.0038 and 0.0006, respectively. In addition, we identify several characteristic parameters that are either sensitive (heart-sound amplitude, blood pressure, systolic duration, and diastolic duration) or insensitive (sound-direction vector, state-change-trend diagram, and difference value) to motion, thus providing a new technique for the diverse analysis of heart sounds in motion.


2021 ◽  
Vol 263 (2) ◽  
pp. 4581-4591
Author(s):  
Keishi Sakoda ◽  
Ichro Yamada ◽  
Kenji Shinohara

The authors have developed a sound-direction detection method based on cross-correlation and applied it to the automatic monitoring of aircraft noise and the identification of sound sources. As aircraft performance improves and noise levels decrease, people remain sensitive to, and dissatisfied with, even low-level aircraft noise, especially in urban areas, where environmental noise and aircraft noise combine to complicate the acoustic environment. It is therefore necessary to monitor and measure not only aircraft noise but also environmental noise. Since the target of our monitoring is aircraft noise, it is important to analyze noise exposure from acoustic information rather than from tracks or images. In this report, we look back on the development of this sound-direction detection technology, show examples involving helicopters and the application of acoustic scene analysis to high-altitude aircraft, and consider the latest situation realized in acoustic environment monitoring. We believe this analysis makes it easier to understand the noise-exposure situation at a noise monitoring station. The future outlook for the method is also described.
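The core of such a cross-correlation-based detector can be sketched in a few lines: the lag that maximizes the cross-correlation between two microphone signals gives the time difference of arrival (TDOA), which a far-field model converts to an arrival angle. This is an illustrative sketch, not the authors' implementation; the function name, sampling rate, and microphone spacing are assumptions, and the sign of the angle (measured from the array axis) depends on how the microphones are labeled.

```python
import numpy as np

def estimate_direction(x, y, fs, mic_distance, c=343.0):
    """Estimate the arrival angle (degrees) from two microphone signals.

    x, y: equal-length 1-D signals; fs: sample rate (Hz);
    mic_distance: spacing between the microphones (m); c: speed of sound (m/s).
    """
    n = len(x)
    corr = np.correlate(x, y, mode="full")   # lags -(n-1) .. (n-1)
    lag = np.argmax(corr) - (n - 1)          # lag of maximum correlation
    tdoa = lag / fs                          # time difference of arrival (s)
    # Far-field model: tdoa = d * cos(theta) / c, clipped to a valid cosine.
    cos_theta = np.clip(tdoa * c / mic_distance, -1.0, 1.0)
    return np.degrees(np.arccos(cos_theta))
```

A source at broadside (zero delay) maps to 90°; larger inter-channel delays move the estimate toward the array axis.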


Author(s):  
Jakob Christensen-Dalsgaard ◽  
Paula T. Kuokkanen ◽  
Jamie Emoto Matthews ◽  
Catherine E. Carr

The configuration of lizard ears, in which sound can reach both surfaces of the eardrum, produces a strongly directional ear, but the subsequent processing of sound direction by the auditory pathway is unknown. We report here on directional responses from the first stage, the auditory nerve. We used laser vibrometry to measure eardrum responses in Tokay geckos, and in the same animals recorded 117 auditory nerve single-fiber responses to free-field sound from radially distributed speakers. Responses from all fibers showed strongly lateralized activity at all frequencies, with an ovoidal directivity that resembled the eardrum directivity. Geckos are vocal, and nerve fibers showed pronounced directionality to components of the call. To estimate the accuracy with which a gecko could discriminate between sound sources, we computed the Fisher information (FI) for each neuron. FI was highest just contralateral to the midline, front and back. Thus, the auditory nerve could provide a population code for sound-source direction, and geckos should have a high capacity to differentiate between midline sound sources. In the brain, binaural comparisons, for example by IE neurons, should sharpen the lateralized responses and extend the dynamic range of directionality.
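The Fisher-information calculation can be illustrated under a standard assumption the abstract does not spell out: for Poisson spike counts with tuning curve f(θ), FI(θ) = f′(θ)² / f(θ), so information peaks where the tuning curve is steepest, not where firing is maximal — consistent with FI being highest just off the midline. A minimal sketch (the function name and the cosine tuning curve used below are illustrative, not the paper's data):

```python
import numpy as np

def fisher_information(rates, thetas):
    """Fisher information of a Poisson neuron about direction theta.

    rates: mean firing rates f(theta) sampled at directions thetas (radians).
    For Poisson spiking, FI(theta) = f'(theta)**2 / f(theta).
    """
    df = np.gradient(rates, thetas)          # numerical derivative f'(theta)
    return df ** 2 / np.maximum(rates, 1e-12)
```

For a cosine-tuned neuron, FI vanishes at the preferred direction and peaks on the flanks, which is why a population read-out is sharpest between, not at, the tuning peaks.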


AI ◽  
2020 ◽  
Vol 1 (4) ◽  
pp. 487-509
Author(s):  
Sudarshan Ramenahalli

The natural environment and our interaction with it are essentially multisensory, where we may deploy visual, tactile and/or auditory senses to perceive, learn and interact with our environment. Our objective in this study is to develop a scene analysis algorithm using multisensory information, specifically vision and audio. We develop a proto-object-based audiovisual saliency map (AVSM) for the analysis of dynamic natural scenes. A specialized audiovisual camera with a 360° field of view, capable of locating sound direction, is used to collect spatiotemporally aligned audiovisual data. We demonstrate that the performance of a proto-object-based audiovisual saliency map in detecting and localizing salient objects/events is in agreement with human judgment. In addition, the proto-object-based AVSM that we compute as a linear combination of visual and auditory feature conspicuity maps captures a higher number of valid salient events compared to unisensory saliency maps. Such an algorithm can be useful in surveillance, robotic navigation, video compression and related applications.
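The linear combination of conspicuity maps mentioned above can be sketched as follows; the per-map normalization step and the equal weights are illustrative assumptions, not the paper's exact formulation:

```python
import numpy as np

def audiovisual_saliency(visual_map, auditory_map, w_v=0.5, w_a=0.5):
    """Combine visual and auditory conspicuity maps into one saliency map.

    Each map is rescaled to [0, 1] before the weighted sum so that neither
    modality dominates purely through its numeric range.
    """
    def normalize(m):
        m = m.astype(float)
        span = m.max() - m.min()
        return (m - m.min()) / span if span > 0 else np.zeros_like(m)
    return w_v * normalize(visual_map) + w_a * normalize(auditory_map)
```

With equal weights, a location salient in only one modality scores at most 0.5, while a location salient in both can reach 1.0, which is one simple way a multisensory map can surface events that unisensory maps miss.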


2020 ◽  
Vol 30 (1) ◽  
pp. 209-223
Author(s):  
Zhuhe Wang ◽  
Nan Li ◽  
Tao Wu ◽  
Haoxuan Zhang ◽  
Tao Feng

In recent years, more and more researchers have applied convolutional neural networks to the study of sound signals, mainly because of the translational invariance of convolution in time and space, which helps to cope with the diversity of sound signals. In sound-direction recognition, however, problems remain, such as microphone arrays being too large and the difficulty of feature selection. This paper proposes sound-direction recognition using a simulated human head with a microphone at each ear. In theory, two microphones cannot distinguish the front and rear directions; however, using the raw two-channel data as the input to a convolutional neural network, the recognition accuracy can exceed 0.9. For comparison, we also used the time-delay feature (GCC) for sound-direction recognition. Finally, we conducted experiments that used probability distributions to identify more directions.
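The GCC delay feature used for comparison is typically computed with the phase transform (GCC-PHAT), which whitens the cross-spectrum so that the correlation peak reflects only the inter-channel time delay. A minimal sketch, assuming two equal-rate channels (the function name and padding choices are illustrative):

```python
import numpy as np

def gcc_phat(x, y, fs):
    """Estimate the delay (s) of y relative to x with GCC-PHAT.

    The phase transform normalizes the cross-spectrum magnitude, so the
    inverse transform peaks at the lag implied by the phase alone.
    """
    n = len(x) + len(y)                    # zero-pad to avoid circular wrap
    X = np.fft.rfft(x, n=n)
    Y = np.fft.rfft(y, n=n)
    cross = Y * np.conj(X)
    cross /= np.abs(cross) + 1e-12         # PHAT weighting
    cc = np.fft.irfft(cross, n=n)
    max_shift = min(len(x), len(y))        # restrict to plausible lags
    cc = np.concatenate([cc[-max_shift:], cc[:max_shift + 1]])
    lag = np.argmax(np.abs(cc)) - max_shift
    return lag / fs
```

A positive return value means the second channel lags the first; front/back ambiguity remains, since both directions can produce the same delay, which is the limitation the CNN on raw two-channel data is meant to overcome.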


2019 ◽  
Vol 237 (12) ◽  
pp. 3221-3231 ◽  
Author(s):  
Takumi Mieda ◽  
Masahiro Kokubu ◽  
Mayumi Saito

Sensors ◽  
2019 ◽  
Vol 19 (16) ◽  
pp. 3469
Author(s):  
Chien-Chang Huang ◽  
Chien-Hao Liu

In this research, we propose a miniaturized two-element sensor array inspired by Ormia ochracea for sound-direction-finding applications. In contrast to the conventional approach of using mechanical coupling structures to enlarge intensity differences, we exploit an electrical coupling network composed of lumped elements to enhance the phase differences and extract the optimized output power for a good signal-to-noise ratio. The separation distance between the two sensors could be reduced from 0.5 wavelength to 0.1 wavelength (3.43 mm at the operating frequency of 10 kHz) for determining the angle of arrival. The main advantages of the proposed device include low power losses, flexible design, and wide operating bandwidth. A prototype was designed, fabricated, and experimentally examined in an anechoic chamber. The proposed device demonstrated a phase enhancement of 110° at an incident angle of 90° and a normalized power level of −2.16 dB at both output ports. The received power levels of our device were 3 dB higher than those of a transformer-type direction-finding system. In addition, the proposed device can operate over the frequency range from 8 kHz to 12 kHz with a tunable capacitor. These results are expected to benefit compact sonar and radar systems.
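The reported 110° phase enhancement can be put in context against the far-field baseline: for two uncoupled omnidirectional sensors, the acoustic phase difference is Δφ = 2π(d/λ)·sin θ, which is only 36° at d = 0.1λ and θ = 90°. A quick check (the function name and the broadside angle convention are assumptions, not from the paper):

```python
import math

def acoustic_phase_difference_deg(spacing_wavelengths, incidence_deg):
    """Far-field phase difference (degrees) between two omni sensors.

    delta_phi = 2 * pi * (d / lambda) * sin(theta),
    with theta measured from broadside.
    """
    return math.degrees(2 * math.pi * spacing_wavelengths
                        * math.sin(math.radians(incidence_deg)))
```

At 0.1-wavelength spacing the uncoupled baseline is 36°, versus 180° at the conventional half-wavelength spacing, which is why a phase-enhancing coupling network is needed for such a compact aperture.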


2018 ◽  
Vol 1107 ◽  
pp. 072002
Author(s):  
Wenhui Dong ◽  
Chunyu Yu ◽  
Mei Zhibin
