sound sources
Recently Published Documents

TOTAL DOCUMENTS: 1386 (five years: 330)
H-INDEX: 50 (five years: 7)

2022 ◽ Author(s): Vladimir Popov ◽ Dmitry Nechaev ◽ Alexander Ya. Supin ◽ Evgeniya Sysueva

Forward masking was investigated with the auditory evoked potential (AEP) method in a bottlenose dolphin (Tursiops truncatus), using stimulation by two successive acoustic pulses (the masker and the test) projected from spatially separated sources. The two sound sources were positioned either at the same azimuth or symmetrically relative to the head axis, at azimuths from 0 to ±90°. AEPs were recorded either from the vertex or from the lateral head surface next to the auditory meatus. In the latter case, the test source was ipsilateral to the recording side, whereas the masker source was either ipsi- or contralateral. For lateral recording, AEP release from masking (recovery) was slower for the ipsilateral than for the contralateral masker source position. For vertex recording, AEP recovery was equal both for coinciding positions of the masker and test sources and for positions symmetrical relative to the head axis. The data indicate that at higher levels of the dolphin's auditory system, binaural convergence makes forward masking nearly equal for ipsi- and contralateral positions of the masker and test.


Acoustics ◽ 2022 ◽ Vol 4 (1) ◽ pp. 14-25 ◽ Author(s): Hsiao Mun Lee ◽ Heow Pueh Lee ◽ Zhiyang Liu

The quality of the acoustic environments at Xi'an Jiaotong-Liverpool University (XJTLU) and Soochow University (Dushuhu Campus, SUDC) in Suzhou City was investigated in the present work through real-time noise level measurements and questionnaire surveys. Before commencing the measurements and surveys, the sound sources on these two campuses were summarized and classified into four categories through on-site observation: human-made, machinery, living creatures, and natural physical sounds. For the zones near the main traffic road, with a high volume of crowds, and surrounded by a park, interviewees selected sound from road vehicles, human conversation, and birds/insects, respectively, as the major sound sources. Only zone 3 (near a park) at XJTLU could be classified as an A zone (noise level < 55 dBA), with an excellent-quality acoustic environment. All other zones had either good- or average-quality acoustic environments, except zone 1 (near the main traffic road) at XJTLU, which had a fair-quality acoustic environment.


2022 ◽ Vol 185 ◽ pp. 108375 ◽ Author(s): Bartłomiej Kukulski ◽ Tadeusz Wszołek

2021 ◽ Author(s): Bi-Chun Dong ◽ Run-Mei Zhang ◽ Bin Yuan ◽ Chuan-Yang Yu

Abstract: Nearfield acoustic holography in a moving medium is a technique typically suited to sound source identification in a flow. In the process of sound field reconstruction, sound pressure is usually used as the input, but it may contain considerable background noise due to the interaction between the microphones and flow moving at high velocity. To avoid this problem, particle velocity is an alternative input, which can be obtained non-intrusively using laser Doppler velocimetry. However, the conventional propagator relating the particle velocity to the pressure contains a singularity, which can lead to significant errors or even false results. In view of this, this paper derives nonsingular propagators that enable accurate reconstruction in both cases: when the hologram is parallel to the flow direction and when it is perpendicular to it. The advantages of the proposed method are analyzed, and simulations are conducted to verify its validity. The results show that the method overcomes the singularity problem effectively, and that the reconstruction errors remain low across different flow velocities, frequencies, and signal-to-noise ratios.
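For context, the singularity at issue appears already in the standard quiescent-medium (no-flow) form of planar nearfield acoustic holography. The following sketch, in LaTeX notation, shows that conventional velocity-to-pressure propagator; it is given as background only, not as the paper's moving-medium formulation, whose dispersion relation differs.

% Wavenumber-domain propagator from normal particle velocity V_n
% to pressure P on a plane at height z (quiescent medium):
P(k_x, k_y, z) = \frac{\rho_0 \omega}{k_z}\, V_n(k_x, k_y, z),
\qquad k_z = \sqrt{k^2 - k_x^2 - k_y^2}.
% The 1/k_z factor diverges as k_x^2 + k_y^2 \to k^2 (the radiation
% circle); this is the singularity that a nonsingular propagator
% must avoid.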


2021 ◽ Author(s): Guus C. van Bentum ◽ John Van Opstal ◽ Marc Mathijs van Wanrooij

Sound localization and identification are challenging in acoustically rich environments, and the relation between these two processes is still poorly understood. As natural sound sources rarely start exactly simultaneously, we wondered whether the auditory system can identify ('what') and localize ('where') two spatially separated sounds with synchronous onsets. While listeners typically report hearing a single source at an average location, one study found that both sounds may be accurately localized if listeners are explicitly told that two sources exist. Here we tested whether simultaneous source identification (one vs. two) and localization are possible by letting listeners choose to make either one or two head-orienting saccades to the perceived location(s). Results show that listeners could identify two sounds only when they were presented on different sides of the head, and that identification accuracy increased with their spatial separation. Notably, listeners were unable to accurately localize either sound, irrespective of whether one or two sounds were identified. Instead, the first (or only) response always landed near the average location, while second responses were unrelated to the targets. We conclude that localization of synchronous sounds in the absence of prior information is impossible. We discuss the possibility that the putative cortical 'what' pathway does not transmit relevant information to the 'where' pathway. We examine how a broadband interaural correlation cue could help to correctly identify the presence of two sounds without enabling their localization. We propose that the persistent averaging behavior reveals that the 'where' system intrinsically assumes that synchronous sounds originate from a single source.
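To illustrate the kind of broadband interaural correlation cue discussed above, here is a minimal Python sketch, not the authors' analysis: it computes the peak normalized cross-correlation between the two ear signals within physiologically plausible interaural-time-difference lags. The lag range and decision threshold are illustrative assumptions.

import numpy as np

def broadband_iac(left, right, fs, max_itd=800e-6):
    # Peak normalized interaural cross-correlation within +/- max_itd.
    # Circular shifts are used for simplicity; edge effects are small
    # for signals much longer than the maximum lag.
    max_lag = int(round(max_itd * fs))
    left = left - left.mean()
    right = right - right.mean()
    denom = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    corrs = [np.sum(left * np.roll(right, lag)) / denom
             for lag in range(-max_lag, max_lag + 1)]
    return max(corrs)

def guess_num_sources(left, right, fs, threshold=0.9):
    # One coherent source keeps the ear signals highly correlated;
    # two independent, separated sources lower the correlation peak.
    # The 0.9 threshold is an illustrative assumption.
    return 1 if broadband_iac(left, right, fs) >= threshold else 2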


2021 ◽ Vol 15 ◽ Author(s): Xuexin Tian ◽ Yimeng Liu ◽ Zengzhi Guo ◽ Jieqing Cai ◽ Jie Tang ◽ ...

Sound localization is an essential part of auditory processing. However, the cortical representation of the direction of sound sources presented in the sound field, as measured using functional near-infrared spectroscopy (fNIRS), is currently unknown. Therefore, in this study, we used fNIRS to investigate the cerebral representation of different sound source directions. Twenty-five normal-hearing subjects (aged 26 ± 2.7 years; 11 male, 14 female) were included and actively took part in a block-design task. The test setup for sound localization was composed of a seven-speaker array spanning a horizontal arc of 180° in front of the participants. Pink noise bursts at two intensity levels (48 dB/58 dB) were randomly presented via five loudspeakers (−90°/−30°/0°/+30°/+90°). Sound localization task performance was collected, and simultaneous signals from auditory processing cortical fields were recorded and analyzed using a support vector machine (SVM). The results showed average classification accuracies of 73.6%, 75.6%, and 77.4% for the −90°/0°, 0°/+90°, and −90°/+90° contrasts at the high intensity, and 70.6%, 73.6%, and 78.6% at the low intensity. An increase in oxyhemoglobin (oxy-Hb) was observed in the bilateral non-primary auditory cortex (AC) and dorsolateral prefrontal cortex (dlPFC). In conclusion, the oxy-Hb response showed different neural activity patterns between lateral and frontal sources in the AC and dlPFC. Our results may serve as a basic contribution for further research on the use of fNIRS in spatial auditory studies.
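As a rough illustration of the SVM decoding step described above (not the study's exact pipeline), the following Python sketch uses scikit-learn; the synthetic data, feature layout, and cross-validation settings are assumptions.

import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Synthetic stand-in data: one row per trial, one feature per fNIRS
# channel (e.g. mean oxy-Hb change during the block). Shapes and the
# feature choice are illustrative, not the study's setup.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 24))    # 100 trials x 24 channels
y = rng.integers(0, 2, size=100)  # 0 = -90 deg source, 1 = +90 deg

# Linear SVM with feature standardization, 5-fold cross-validation.
clf = make_pipeline(StandardScaler(), SVC(kernel="linear"))
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean decoding accuracy: {scores.mean():.2%}")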


Author(s): Henri Pöntynen ◽ Nelli Salminen

Abstract: Spatial hearing facilitates the perceptual organization of complex soundscapes into accurate mental representations of sound sources in the environment. Yet, the role of binaural cues in auditory scene analysis (ASA) has received relatively little attention in recent neuroscientific studies employing novel, spectro-temporally complex stimuli. This may be because a stimulation paradigm that provides binaurally derived grouping cues of sufficient spectro-temporal complexity has not yet been established for neuroscientific ASA experiments. Random-chord stereograms (RCS) are a class of auditory stimuli that exploit spectro-temporal variations in the interaural envelope correlation of noise-like sounds with interaurally coherent fine structure; they evoke salient auditory percepts that emerge only under binaural listening. Here, our aim was to assess the usability of the RCS paradigm for indexing binaural processing in the human brain. To this end, we recorded EEG responses to RCS stimuli from 12 normal-hearing subjects. The stimuli consisted of an initial 3-s noise segment with interaurally uncorrelated envelopes, followed by another 3-s segment where the envelope correlation was modulated periodically according to the RCS paradigm. Modulations were applied either across the entire stimulus bandwidth (wideband stimuli) or in temporally shifting frequency bands (ripple stimulus). Event-related potential and inter-trial phase coherence analyses of the EEG responses showed that the introduction of the 3- or 5-Hz wideband modulations produced a prominent change-onset complex and ongoing synchronized responses to the RCS modulations. In contrast, the ripple stimulus elicited a change-onset response but no response to the ongoing RCS modulation. Frequency-domain analyses revealed increased spectral power at the fundamental frequency and the first harmonic of the wideband RCS modulations. RCS stimulation thus yields robust EEG measures of binaurally driven auditory reorganization and has the potential to provide a flexible stimulation paradigm suitable for isolating binaural effects in ASA experiments.
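For readers unfamiliar with the inter-trial phase coherence (ITPC) measure used here, the following is a minimal Python sketch of the standard computation; the band-pass pre-filtering step and array layout are assumptions, and the paper's exact pipeline may differ.

import numpy as np
from scipy.signal import hilbert

def itpc(trials):
    # trials: (n_trials, n_samples) EEG segments, already band-pass
    # filtered around the modulation rate of interest (e.g. 3 or 5 Hz).
    # ITPC per sample = length of the mean unit phase vector across
    # trials: 1 = perfect phase locking, ~0 = random phases.
    phases = np.angle(hilbert(trials, axis=1))
    return np.abs(np.mean(np.exp(1j * phases), axis=0))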


2021 ◽ Vol 39 (2) ◽ pp. 145-159 ◽ Author(s): Laure-Hélène Canette ◽ Philippe Lalitte ◽ Barbara Tillmann ◽ Emmanuel Bigand

Conceptual priming studies have shown that listening to musical primes triggers semantic activation. Using a free semantic evocation task, the present study further investigated (1) how rhythmic vs. textural structures affect the number of words evoked after a musical sequence, and (2) whether both features also affect the content of the semantic activation. Rhythmic sequences were composed of various percussion sounds with a strong underlying beat and metrical structure. Textural sound sequences consisted of blended timbres and sound sources evolving over time without an identifiable pulse. Participants were asked to verbalize the concepts evoked by the musical sequences. We measured the number of words and lemmas produced after listening to musical sequences of each condition, and we analyzed whether specific concepts were associated with each sequence type. Results showed that more words and lemmas were produced for textural sound sequences than for rhythmic sequences, and that some concepts were specifically associated with each musical condition. Our findings suggest that listening to musical excerpts emphasizing different features influences semantic activation in different ways and to different extents, possibly via cognitive mechanisms triggered by the acoustic characteristics of the excerpts as well as the perceived emotions.


Author(s): Isao Tokuda

In the source-filter theory, the mechanism of speech production is described as a two-stage process: (a) the airflow coming from the lungs induces tissue vibrations of the vocal folds (i.e., two small muscular folds located in the larynx) and generates the "source" sound, while turbulent airflow created at the glottis or in the vocal tract generates additional noisy sound sources; (b) the spectral structures of these source sounds are shaped by the vocal tract "filter." Through the filtering process, frequency components corresponding to the vocal tract resonances are amplified, while the other frequency components are diminished. The source sound mainly determines the vocal pitch (i.e., the fundamental frequency), while the filter forms the timbre. The source-filter theory provides a very accurate description of normal speech production and has been applied successfully to speech analysis, synthesis, and processing. Separate control of the source (phonation) and the filter (articulation) is advantageous for acoustic communication, especially for human language, which requires the expression of various phonemes realized by flexible maneuvering of the vocal tract configuration. Based on this idea, articulatory phonetics focuses on the positions of the vocal organs to describe the produced speech sounds. The source-filter theory also elucidates the mechanism of "resonance tuning," a specialized way of singing. To increase the efficiency of vocalization, soprano singers adjust the vocal tract filter to tune one of its resonances to the vocal pitch. Consequently, the main source sound is strongly amplified to produce a loud voice that is well perceived in a large concert hall over the orchestra. It should be noted that the source-filter theory is based on the assumption that the source and the filter are independent of each other. Under certain conditions, however, the source and the filter interact: the source sound is influenced by the vocal tract geometry and by the acoustic feedback from the vocal tract. Such source-filter interaction induces various voice instabilities, for example, sudden pitch jumps, subharmonics, resonance, quenching, and chaos.
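The two-stage process lends itself to a compact numerical illustration. Below is a minimal Python sketch of source-filter synthesis: an impulse train serves as a crude glottal source, and a cascade of second-order resonators serves as the vocal tract filter. The formant frequencies and bandwidths are illustrative values roughly in the range of the vowel /a/, not taken from the text.

import numpy as np
from scipy.signal import lfilter

fs = 16000   # sample rate (Hz)
f0 = 120     # fundamental frequency (Hz)
dur = 0.5    # duration (s)

# Source: impulse train -> harmonic-rich signal at pitch f0.
n = int(fs * dur)
source = np.zeros(n)
source[::fs // f0] = 1.0

# Filter: one two-pole resonator per formant (freq, bandwidth in Hz).
signal = source
for freq, bw in [(800, 80), (1200, 90), (2500, 120)]:
    r = np.exp(-np.pi * bw / fs)          # pole radius from bandwidth
    theta = 2 * np.pi * freq / fs         # pole angle from frequency
    a = [1.0, -2 * r * np.cos(theta), r * r]  # resonator denominator
    b = [sum(a)]                          # normalize gain to 1 at DC
    signal = lfilter(b, a, signal)
# `signal` now approximates a sustained /a/-like vowel at 120 Hz.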


Author(s): Rongjiang Tang ◽ Yingxiang Zuo ◽ Weiya Liu ◽ Liguo Tang ◽ Weiguang Zheng ◽ ...

Abstract: In this paper, we propose a compressed sensing (CS) sound source localization algorithm based on signal energy, to address the choice of iteration-stopping condition in the orthogonal matching pursuit (OMP) reconstruction algorithm. OMP conventionally stops iterating based either on the number of sound sources or on the change in the residual. Generally, the number of sound sources is not known in advance, and the residual criterion often leads to unnecessary computation. Because sound sources are sparsely distributed in space, with energy that is concentrated and higher than that of the environmental noise, the signal energies at different positions in the signal reconstructed at each iteration are compared to determine whether a new sound source was added in that iteration. In addition, block sparsity across multiple frequency points is introduced to avoid different numbers of iterations at different frequency points within the same frame, which would otherwise result from the uneven energy distribution across the signal's frequency domain. Simulation and experimental results show that the proposed algorithm retains the advantages of OMP-based sound source localization and terminates the iteration reliably. Without prior knowledge of the number of sound sources, the maximum error between the number of iterations and the set number of sound sources is 0.31.
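To make the stopping-rule idea concrete, here is a minimal, hypothetical OMP sketch in Python with an energy-based stopping test. The dictionary layout, threshold, and exact criterion are illustrative assumptions, simpler than the paper's block-sparse, multi-frequency formulation.

import numpy as np

def omp_energy_stop(A, y, energy_ratio=0.1, max_iter=20):
    # A: (m, n) dictionary whose columns are candidate source
    # steering vectors; y: (m,) measurement vector.
    residual = y.copy()
    support = []
    for _ in range(max_iter):
        # Pick the atom most correlated with the current residual.
        idx = int(np.argmax(np.abs(A.conj().T @ residual)))
        support.append(idx)
        # Least-squares coefficients on the current support.
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        energies = np.abs(x_s) ** 2
        # Energy test: if the newly added atom carries much less
        # energy than the strongest one, treat it as noise and stop.
        if len(support) > 1 and energies[-1] < energy_ratio * energies.max():
            support.pop()
            break
        residual = y - A[:, support] @ x_s
    # Embed the support coefficients in a full-length sparse vector.
    x = np.zeros(A.shape[1], dtype=complex)
    if support:
        x_s, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        x[support] = x_s
    return x, support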

