sound pulses
Recently Published Documents


TOTAL DOCUMENTS: 152 (five years: 16)

H-INDEX: 16 (five years: 1)

2022 ◽  
Vol 2159 (1) ◽  
pp. 012009
Author(s):  
J E Camargo-Chávez ◽  
S Arceo-Díaz ◽  
E E Bricio-Barrios ◽  
R E Chávez-Valdez

Abstract Emerging technologies are efficient alternatives for satisfying the growing demand for sustainable and cheap energy sources. Piezoelectrics are one of the most promising energy sources derived from emerging technologies. These materials are capable of converting mechanical energy into electricity and vice versa. Piezoelectrics have been used for almost a hundred years to generate electrical and sound pulses. However, the use of piezoelectrics for power generation is constrained by the cost associated with equipment and infrastructure. This problem has been addressed through mathematical models that relate the physical and electrical properties of the piezoelectric material to the voltage generated. Although these models perform well, they do not incorporate the voltage-rectification and electrical-charge-storage stages. This work presents a mathematical model that describes the relationship between the physical and electromechanical properties of a system that employs a piezoelectric for energy generation. The voltage of the system and the charge stored in a capacitor are calculated through this model. Contour diagrams are also presented as a tool to facilitate the assessment of energy-generation efficiency.
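As a rough illustration of what such a combined model has to capture, the sketch below simulates a sinusoidally driven piezoelectric source feeding an ideal full-wave rectifier that charges a storage capacitor. All parameter values, the linear force-to-voltage coefficient, and the simple RC charging approximation are assumptions made for this example; they are not the model from the paper.

```python
# Illustrative sketch (not the paper's model): a sinusoidally driven piezoelectric
# source feeding a full-wave rectifier that charges a storage capacitor.
import numpy as np

# Assumed parameters (hypothetical values, for illustration only)
ALPHA = 0.05      # piezo voltage per unit force [V/N]
F0 = 10.0         # peak mechanical force [N]
FREQ = 50.0       # excitation frequency [Hz]
V_DIODE = 0.4     # forward drop of each rectifier diode [V]
R_SRC = 1e3       # source/series resistance [ohm]
C_STORE = 100e-6  # storage capacitor [F]

def simulate(t_end=2.0, dt=1e-5):
    """Integrate the capacitor voltage while the rectified piezo voltage exceeds it."""
    t = np.arange(0.0, t_end, dt)
    v_piezo = ALPHA * F0 * np.sin(2 * np.pi * FREQ * t)      # open-circuit piezo voltage
    v_rect = np.maximum(np.abs(v_piezo) - 2 * V_DIODE, 0.0)  # full-wave rectifier output
    v_cap = np.zeros_like(t)
    for i in range(1, len(t)):
        if v_rect[i] > v_cap[i - 1]:        # diodes conduct: RC charging toward v_rect
            dv = (v_rect[i] - v_cap[i - 1]) / (R_SRC * C_STORE) * dt
            v_cap[i] = v_cap[i - 1] + dv
        else:                               # diodes block: capacitor holds its charge
            v_cap[i] = v_cap[i - 1]
    charge = C_STORE * v_cap[-1]            # stored charge Q = C * V
    return v_cap[-1], charge

if __name__ == "__main__":
    v_final, q_final = simulate()
    print(f"final capacitor voltage: {v_final:.3f} V, stored charge: {q_final*1e6:.1f} uC")
```

Sweeping the assumed excitation amplitude and capacitor size in such a simulation is one way to produce contour diagrams of stored charge versus design parameters.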


2021 ◽  
Vol 2119 (1) ◽  
pp. 012066
Author(s):  
I A Ogorodnikov

Abstract The influence of a thin homogeneous bubble layer on sound emission from a solid surface is analysed. Sound pulses and monochromatic wave packets with a carrier frequency equal to the resonant frequency of the bubbles forming the layer are considered. It is shown that the bubble layer transforms short sound pulses into sound wave packets and significantly reduces the amplitude of the emitted sound. The structure of a sinusoidal wave packet is transformed similarly. A long sound pulse is preserved as a pulse, but its shape changes significantly. A homogeneous bubble layer near a solid radiating surface acts as an open resonator. The layer generates far-field radiation whose spectral lines depend on the method of layer excitation and the internal properties of the bubble layer. The resonant frequency of the bubbles is the limiting frequency in the spectrum, but it is not distinguished by a separate line.


2021 ◽  
Vol 11 (1) ◽  
Author(s):  
Ji-Ho Chang ◽  
Il Doh

Abstract This paper proposes a method that automatically measures non-invasive blood pressure (BP) based on an auscultatory approach using Korotkoff sounds (K-sounds). Methods utilizing K-sounds have generally been more accurate than those using only cuff pressure signals under well-controlled environments, but most were vulnerable to the measurement conditions and to external noise because blood pressure was determined simply from threshold values in the sound signal. The proposed method enables robust and precise BP measurements by evaluating the probability that each sound pulse is an audible K-sound using deep learning with a convolutional neural network (CNN). Instead of classifying sound pulses into two categories, audible K-sounds and others, the proposed CNN model outputs probability values. These values within a Korotkoff cycle are arranged in time order, and the blood pressure is determined from them. The proposed method was tested with a dataset acquired in practice that occasionally contains considerable noise, which can degrade the performance of threshold-based methods. The results demonstrate that the proposed method outperforms a previously reported CNN-based classification method using K-sounds. With larger amounts of various types of data, the proposed method can potentially achieve more precise and robust results.
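To make the "probability per pulse" idea concrete, here is a minimal sketch of a 1-D CNN that maps a fixed-length sound-pulse segment to the probability that it is an audible K-sound. The segment length, layer sizes, and class definition are assumptions chosen for illustration and do not reflect the architecture reported in the paper.

```python
# Illustrative sketch only: a small 1-D CNN that scores each extracted pulse
# segment with the probability of being an audible Korotkoff sound.
import torch
import torch.nn as nn

SEGMENT_LEN = 1024  # samples per extracted pulse segment (assumed)

class KSoundNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(1, 16, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
            nn.Conv1d(16, 32, kernel_size=9, padding=4), nn.ReLU(), nn.MaxPool1d(4),
        )
        self.head = nn.Sequential(
            nn.Flatten(),
            nn.Linear(32 * (SEGMENT_LEN // 16), 64), nn.ReLU(),
            nn.Linear(64, 1),  # single logit: "audible K-sound" vs. everything else
        )

    def forward(self, x):                                   # x: (batch, 1, SEGMENT_LEN)
        return torch.sigmoid(self.head(self.features(x)))   # probability in [0, 1]

# Usage: score each detected pulse, arrange the probabilities in time order over
# the cuff deflation, and read systolic/diastolic pressure off that sequence.
model = KSoundNet()
pulses = torch.randn(8, 1, SEGMENT_LEN)   # placeholder batch of pulse segments
probs = model(pulses).squeeze(1)           # one probability per pulse
```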


2021 ◽  
Author(s):  
Rose L Tatarsky ◽  
Zilin Guo ◽  
Sarah C Campbell ◽  
Helena Kim ◽  
Wenxuan Fang ◽  
...  

Individuals can reveal their relative competitive ability or mate quality through acoustic communication, varying signals in form and frequency to mediate adaptive interactions including competitive aggression. We report robust acoustic displays during aggressive interactions for a laboratory colony of Danionella dracula, a recently discovered miniature and transparent species of teleost fish closely related to zebrafish (Danio rerio). Males produce bursts of pulsatile, click-like sounds and a distinct postural display, extension of a hypertrophied lower jaw, during resident-intruder dyad interactions. Females lack a hypertrophied lower jaw and show no evidence of sound production or jaw extension under such conditions. Novel pairs of size-matched or mismatched males were combined in resident-intruder assays where sound production and jaw extension could be linked to individuals. Resident males produce significantly more sound pulses than intruders in both dyad contexts; larger males are consistently more sonic in size-mismatched pairs. In both conditions, males show a similar pattern of increased jaw extension that frequently coincides with acoustic displays during periods of heightened sonic activity. These studies firmly establish D. dracula as a sound-producing species that modulates both acoustic and postural displays during social interactions based on either residency or body size, thus providing a foundation for investigating the role of these displays in a new model clade for neurogenomic studies of aggression, courtship and other social interactions.


eLife ◽  
2021 ◽  
Vol 10 ◽  
Author(s):  
Elsa Steinfath ◽  
Adrian Palacios-Muñoz ◽  
Julian R Rottschäfer ◽  
Deniz Yuezak ◽  
Jan Clemens

Acoustic signals serve communication within and across species throughout the animal kingdom. Studying the genetics, evolution, and neurobiology of acoustic communication requires annotating acoustic signals: segmenting and identifying individual acoustic elements like syllables or sound pulses. To be useful, annotations need to be accurate, robust to noise, and fast. We here introduce DeepAudioSegmenter (DAS), a method that annotates acoustic signals across species based on a deep-learning derived hierarchical presentation of sound. We demonstrate the accuracy, robustness, and speed of DAS using acoustic signals with diverse characteristics from insects, birds, and mammals. DAS comes with a graphical user interface for annotating song, training the network, and generating and proofreading annotations. The method can be trained to annotate signals from new species with little manual annotation and can be combined with unsupervised methods to discover novel signal types. DAS annotates song with high throughput and low latency, enabling real-time experimental interventions. Overall, DAS is a universal, versatile, and accessible tool for annotating acoustic communication signals.
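The core output of this kind of annotation tool is a per-sample confidence trace that is then converted into labelled segments. The sketch below shows that second step in its simplest form; it is a generic illustration, not the DAS implementation or its API, and the threshold and example trace are made up.

```python
# Generic sketch of the annotation idea (not the DAS code): a network emits
# per-sample class probabilities over an audio trace, and contiguous runs
# above a threshold become annotated segments (e.g. syllables or pulses).
import numpy as np

def probs_to_segments(probs, fs, threshold=0.5):
    """Turn a per-sample probability trace into (onset_s, offset_s) segments."""
    active = probs >= threshold
    edges = np.diff(active.astype(int))
    onsets = np.where(edges == 1)[0] + 1
    offsets = np.where(edges == -1)[0] + 1
    if active[0]:
        onsets = np.insert(onsets, 0, 0)
    if active[-1]:
        offsets = np.append(offsets, len(active))
    return [(on / fs, off / fs) for on, off in zip(onsets, offsets)]

# Example: a fake probability trace with two "pulses" at 10 kHz sampling
fs = 10_000
probs = np.zeros(fs)
probs[1000:1200] = 0.9
probs[5000:5400] = 0.8
print(probs_to_segments(probs, fs))   # [(0.1, 0.12), (0.5, 0.54)]
```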


2021 ◽  
Vol 2057 (1) ◽  
pp. 012032
Author(s):  
I A Ogorodnikov

Abstract The effect of changes in the volume concentration of bubbles in the boundary zone of the bubble medium on the nature of reflection and radiation of the excited bubble medium is studied. The spectral characteristics of the radiation of a bubble medium are obtained at the initial stage of transition radiation and at large times when the radiation is stationary. It is shown that in the initial phase the emission spectrum is broadband and is located in the absorption band of the bubble medium, and at large times the emission spectrum is located outside this band.


2021 ◽  
Author(s):  
Vidushi Pathak ◽  
Elsa Juan ◽  
Reina van der Goot ◽  
Lucia Talamini

Study Objective: Sleep is critical for physical and mental health. However, sleep disruption due to noise is a growing problem, causing long-lasting distress and weakening entire populations mentally and physically. Here, for the first time, we tested an innovative and non-invasive potential countermeasure for sleep disruptions due to noise. Methods: We developed a new, modeling-based, closed-loop acoustic neurostimulation procedure (CLNS) to precisely phase-lock stimuli to slow oscillations (SO). We used CLNS to align soft sound pulses with the start of the SO positive deflection, boosting SO and sleep spindles during non-rapid eye movement (NREM) sleep. Participants underwent three overnight EEG recordings. The first night served to determine each participant's individual noise-arousal threshold. The remaining two nights occurred in counterbalanced order: in the Disturbing night, loud, real-life noises were repeatedly presented; in the Intervention night, similar loud noises were played while the CLNS was used to boost SO. All experimental manipulations were performed in the first three hours of sleep; participants slept undisturbed for the rest of the night. Results: Compared with the Disturbing night, the probability of noise-evoked arousals was significantly decreased in the Intervention night. Moreover, the CLNS intervention increased NREM duration and sleep spindle power across the night. Conclusions: These results show that our CLNS procedure can effectively protect sleep from disruptions caused by noise. Remarkably, even in the presence of loud environmental noise, the soft and precisely timed sound pulses of CLNS helped protect sleep continuity. This represents the first successful attempt at using CLNS in a noisy environment.
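The key technical ingredient is delivering each sound pulse at a specific phase of the ongoing slow oscillation. The actual CLNS procedure does this prospectively with a real-time model; the sketch below only illustrates the underlying idea offline, by finding the start of each positive SO deflection (an upward zero-crossing of the SO-band-filtered EEG) in a recorded trace. The filter band and synthetic signal are illustrative assumptions.

```python
# Simplified offline illustration of the phase-targeting idea (the real CLNS
# is a real-time, model-based predictor; this only marks, after the fact, the
# start of each slow-oscillation positive deflection in a recorded trace).
import numpy as np
from scipy.signal import butter, filtfilt

def so_trigger_times(eeg, fs, band=(0.5, 1.5)):
    """Return times (s) of upward zero-crossings of the SO-band-filtered EEG."""
    b, a = butter(2, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    so = filtfilt(b, a, eeg)                       # isolate the slow-oscillation band
    rising = (so[:-1] < 0) & (so[1:] >= 0)         # negative-to-positive crossings
    return np.where(rising)[0] / fs                # candidate instants for a soft pulse

# Example on a synthetic 0.8 Hz "slow oscillation" plus noise
fs = 200
t = np.arange(0, 30, 1 / fs)
eeg = np.sin(2 * np.pi * 0.8 * t) + 0.3 * np.random.randn(len(t))
print(so_trigger_times(eeg, fs)[:5])
```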


2021 ◽  
Author(s):  
Elsa Steinfath ◽  
Adrian Palacios ◽  
Julian Rottschaefer ◽  
Deniz Yuezak ◽  
Jan Clemens

Acoustic signals serve communication within and across species throughout the animal kingdom. Studying the genetics, evolution, and neurobiology of acoustic communication requires annotating acoustic signals: segmenting and identifying individual acoustic elements like syllables or sound pulses. To be useful, annotations need to be accurate, robust to noise, and fast. We introduce DeepSS, a method that annotates acoustic signals across species based on a deep-learning derived hierarchical presentation of sound. We demonstrate the accuracy, robustness, and speed of DeepSS using acoustic signals with diverse characteristics: courtship song from flies, ultrasonic vocalizations of mice, and syllables with complex spectrotemporal structure from birds. DeepSS comes with a graphical user interface for annotating song, training the network, and generating and proofreading annotations (available at https://janclemenslab.org/deepss). The method can be trained to annotate signals from new species with little manual annotation and can be combined with unsupervised methods to discover novel signal types. DeepSS annotates song with high throughput and low latency, allowing real-time annotations for closed-loop experimental interventions.


2021 ◽  
Vol 14 (1) ◽  
pp. 1-25
Author(s):  
Ronny Andrade ◽  
Jenny Waycott ◽  
Steven Baker ◽  
Frank Vetere

In virtual environments, spatial information is communicated visually. This prevents people with visual impairment (PVI) from accessing such spaces. In this article, we investigate whether echolocation could be used as a tool to convey spatial information by answering the following research questions: What features of virtual space can PVI perceive through echolocation? How does active echolocation support PVI in acquiring spatial knowledge of a virtual space? And what are PVI's opinions regarding the use of echolocation to acquire landmark and survey knowledge of virtual space? To answer these questions, we conducted a two-part within-subjects experiment with 12 people who were blind or had a visual impairment. We found that the size and materials of rooms, as well as 90-degree turns, were detectable through echolocation; that participants preferred echoes derived from footsteps over artificial sound pulses; and that echolocation supported the acquisition of mental maps of a virtual space. Ultimately, we propose that appropriately designed echolocation in virtual environments improves understanding of spatial information and access to digital games for PVI.
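For readers unfamiliar with how echoes encode distance, the sketch below renders a single virtual wall reflection as a delayed, attenuated copy of a footstep sound, with the delay given by the round trip at the speed of sound. The attenuation factor and the synthetic footstep are illustrative choices, not parameters from the study.

```python
# Back-of-the-envelope sketch of how a virtual echo can encode wall distance:
# the echo is a delayed, attenuated copy of the source sound, with the delay
# set by the round-trip travel time at the speed of sound.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s in air

def add_echo(sound, fs, wall_distance_m, attenuation=0.4):
    """Mix a single wall echo into a mono footstep (or pulse) sound."""
    delay_s = 2.0 * wall_distance_m / SPEED_OF_SOUND           # round-trip travel time
    delay_n = int(round(delay_s * fs))
    out = np.zeros(len(sound) + delay_n)
    out[:len(sound)] += sound                                   # direct sound
    out[delay_n:delay_n + len(sound)] += attenuation * sound    # reflected copy
    return out

fs = 44_100
footstep = np.random.randn(int(0.05 * fs)) * np.hanning(int(0.05 * fs))  # 50 ms burst
near = add_echo(footstep, fs, wall_distance_m=2.0)    # ~11.7 ms echo delay
far = add_echo(footstep, fs, wall_distance_m=10.0)    # ~58.3 ms echo delay
```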


Sensors ◽  
2021 ◽  
Vol 21 (1) ◽  
pp. 228
Author(s):  
Idan Fishel ◽  
Yoni Amit ◽  
Neta Shvil ◽  
Anton Sheinin ◽  
Amir Ayali ◽  
...  

During hundreds of millions of years of evolution, insects have evolved some of the most efficient and robust sensing organs, often far more sensitive than their man-made equivalents. In this study, we demonstrate a hybrid bio-technological approach, integrating a locust tympanic ear with a robotic platform. Using an Ear-on-a-Chip method, we created a long-lasting miniature sensory device that operates as part of a bio-hybrid robot. The neural signals recorded from the ear in response to sound pulses are processed and used to control the robot's motion. This work is a proof of concept, demonstrating the use of biological ears for robotic sensing and control.
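As a toy illustration of how a recorded ear signal might be turned into a motion command, the sketch below counts threshold crossings in a short window and maps the resulting spike rate to a simple go/stop decision. The threshold, window length, and policy are invented for the example and are not the processing pipeline described in the paper.

```python
# Illustrative sketch (not the authors' pipeline): threshold-crossing spike
# detection on the recorded ear signal, with the windowed spike rate mapped
# to a simple forward/stop command for the robot.
import numpy as np

def spike_rate(neural_trace, fs, threshold, window_s=0.1):
    """Count threshold crossings in the most recent window and return spikes/s."""
    window = neural_trace[-int(window_s * fs):]
    crossings = (window[:-1] < threshold) & (window[1:] >= threshold)
    return crossings.sum() / window_s

def motion_command(rate_hz, go_threshold=20.0):
    """Toy policy: drive forward when the ear responds strongly to a sound pulse."""
    return "forward" if rate_hz >= go_threshold else "stop"

# Example with a synthetic trace containing a burst of "spikes" near the end
fs = 10_000
trace = 0.05 * np.random.randn(fs)            # 1 s of baseline noise
trace[9000:9900:100] = 1.0                     # injected spikes in the last 100 ms
rate = spike_rate(trace, fs, threshold=0.5)
print(rate, motion_command(rate))
```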

