Lizards speed up visual displays in noisy motion habitats

2007 ◽  
Vol 274 (1613) ◽  
pp. 1057-1062 ◽  
Author(s):  
Terry J Ord ◽  
Richard A Peters ◽  
Barbara Clucas ◽  
Judy A Stamps

Extensive research over the last few decades has revealed that many acoustically communicating animals compensate for the masking effect of background noise by changing the structure of their signals. Familiar examples include birds using acoustic properties that enhance the transmission of vocalizations in noisy habitats. Here, we show that the effects of background noise on communication signals are not limited to the acoustic modality, and that visual noise from windblown vegetation has an equally important influence on the production of dynamic visual displays. We found that two species of Puerto Rican lizard, Anolis cristatellus and A. gundlachi, increase the speed of body movements used in territorial signalling to apparently improve communication in visually ‘noisy’ environments of rapidly moving vegetation. This is the first evidence that animals change how they produce dynamic visual signals when communicating in noisy motion habitats. Taken together with previous work on acoustic communication, our results show that animals with very different sensory ecologies can face similar environmental constraints and adopt remarkably similar strategies to overcome these constraints.

Author(s):  
Wonhee Lee ◽  
Chanil Chun ◽  
Dongwook Kim ◽  
Soogab Lee

Complex transportation systems often produce combined exposure to aircraft and road noise. The annoyance response differs with the noise source, and a masking effect occurs between the sources within the combined noise. Considering these characteristics, partial loudness was adopted to evaluate noise annoyance. First, a partial loudness model incorporating binaural inhibition was proposed and validated. Second, short- and long-term annoyance models were developed using partial loudness. Finally, the annoyance of combined noise was visualized as a map. These models can evaluate annoyance by considering both the intensity and frequency characteristics of the noise, and they make it possible to quantify the masking effect that occurs between noise sources. Combined-noise annoyance maps depict the degree of annoyance of residents and show the background-noise effect, which is not seen on general noise maps.
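The abstract does not give the model equations, but the baseline step in any combined-noise assessment, the energetic summation of incoherent sources, can be sketched as follows. This is a minimal illustration of standard level addition, not the paper's partial-loudness model, and the example levels are hypothetical:

```python
import math

def combined_level_db(levels_db):
    """Energetic (power) sum of incoherent noise sources, in dB."""
    total_power = sum(10 ** (l / 10) for l in levels_db)
    return 10 * math.log10(total_power)

# Aircraft at 65 dB and road traffic at 60 dB combine to ~66.2 dB:
# the louder source dominates, which is why source-specific annoyance
# and masking between sources need separate modelling.
print(round(combined_level_db([65, 60]), 1))  # 66.2
```

Two equal 60 dB sources sum to 63 dB (a 3 dB increase), which illustrates why a general noise map based on total level alone cannot separate the contributions the annoyance models distinguish.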


1973 ◽  
Vol 59 (2) ◽  
pp. 415-424
Author(s):  
PER S. ENGER

1. The nervous activity of single auditory neurones in the goldfish brain has been measured. 2. Four types of acoustic stimuli were used: (1) pure tones, (2) noise of one-third-octave bandwidth, (3) noise of one-octave bandwidth with centre frequency equal to the pure tone, and (4) white noise. 3. Except for white noise, these stimuli produced the same response at equal sound pressures. The white-noise response was weaker, presumably because the frequency range covered by a single neurone is far narrower than the range of white noise. 4. The conclusion is that, for low-frequency acoustic signals, the acoustic power over a frequency band of one to two octaves is integrated by the nervous system. 5. The masking effect of background noise on the acoustic threshold of single units to pure tones is strongest when the noise band has the same centre frequency as the test tone. In this case the tone threshold increases linearly with the background noise level. 6. When the noise band was centred at a different frequency from the tone, the masking effect decreased at a rate of 20-22 dB/octave over the first one-third octave for a tone frequency of 250 Hz. For a tone of 500 Hz the masking effect of lower frequencies was stronger, decreasing by only some 9 dB/octave over the first one-third octave.
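The linear on-frequency masking and the roughly 20-22 dB/octave roll-off reported in points 5 and 6 can be captured in a toy threshold model. The functional form below is an illustrative assumption built only from the abstract's figures, not the paper's analysis:

```python
def masked_threshold_db(noise_level_db, octave_offset, slope_db_per_octave=21.0):
    """Illustrative masked-threshold model: on-frequency masking tracks
    the noise level linearly; off-frequency masking falls off at roughly
    20-22 dB/octave (the abstract's 250 Hz figure). The linear roll-off
    is an assumed simplification."""
    return noise_level_db - slope_db_per_octave * abs(octave_offset)

# On-frequency noise sets the threshold at the noise level itself;
# noise one-third octave away masks ~7 dB less at 21 dB/octave.
print(masked_threshold_db(50, 0))      # 50.0
print(masked_threshold_db(50, 1 / 3))  # 43.0
```

For a 500 Hz tone masked from below, the same sketch would use a slope near 9 dB/octave, reflecting the stronger low-frequency masking the abstract reports.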


Author(s):  
Eric D. Young ◽  
Donata Oertel

Neuronal circuits in the brainstem convert the output of the ear, which carries the acoustic properties of ongoing sound, to a representation of the acoustic environment that can be used by the thalamocortical system. Most important, brainstem circuits reflect the way the brain uses acoustic cues to determine where sounds arise and what they mean. The circuits merge the separate representations of sound in the two ears and stabilize them in the face of disturbances such as loudness fluctuation or background noise. Embedded in these systems are some specialized analyses that are driven by the need to resolve tiny differences in the time and intensity of sounds at the two ears and to resolve rapid temporal fluctuations in sounds like the sequence of notes in music or the sequence of syllables in speech.


Author(s):  
Çağlar Akçay ◽  
Michelle L Beck ◽  
Kendra B Sewall

Abstract How anthropogenic change affects animal social behavior, including communication, is an important question. Urban noise often drives shifts in the acoustic properties of signals, but the consequences of noise for the honesty of signals (that is, how well they predict signaler behavior) are unclear. Here we examine whether the honesty of aggressive signaling is compromised in male urban song sparrows (Melospiza melodia). Song sparrows have two honest close-range signals: low-amplitude soft songs (an acoustic signal) and wing waves (a visual signal), but whether the honesty of these signals is affected by urbanization has not been examined. If soft songs are less effective in urban noise, they should predict attacks less reliably in urban habitats than in rural habitats. We confirmed earlier findings that urban birds were more aggressive than rural birds and found that acoustic noise was higher in urban habitats. Urban birds still sang more soft songs than rural birds. High rates of soft songs and low rates of loud songs predicted attacks in both habitats. Thus, while urbanization has a significant effect on aggressive behaviors, it might have a limited effect on the overall honesty of aggressive signals in song sparrows. We also found evidence for a multimodal shift: urban birds tended to give proportionally more wing waves relative to soft songs than rural birds did, although whether that shift is due to noise-dependent plasticity is unclear. These findings encourage further experimental study of the specific variables responsible for behavioral change due to urbanization. Soft song, a low-amplitude song given in close-range interactions, is an honest threat signal in urban song sparrows. Given its low amplitude, soft song may be a less effective signal in noisy urban habitats. However, we found that soft song remained an honest signal predicting attack in urban habitats. We also found that birds may use more visual signals (rapid fluttering of wings) in urban habitats to avoid masking from acoustic noise.


2020 ◽  
Vol 17 (5) ◽  
pp. 172988142093233
Author(s):  
Ying Zhang ◽  
Wendong Li ◽  
Yonghe Yu ◽  
Ya Xiao ◽  
Dongyu Xu ◽  
...  

The underwater environment is extremely complex and variable, which makes it difficult for underwater robots to detect or recognize their surroundings using images acquired with cameras. Ghost imaging, a new imaging technique, has attracted much attention due to its special physical properties and its potential for imaging objects in optically harsh or noisy environments. In this work, we experimentally study three categories of image reconstruction methods of ghost imaging for objects of different transmittance. For high-transmittance objects, differential ghost imaging is more efficient than traditional ghost imaging. However, for low-transmittance objects, the images reconstructed with traditional and differential ghost imaging algorithms are both exceedingly blurred and cannot be improved by increasing the number of measurements. A compressive sensing method, the augmented Lagrangian and alternating direction algorithm (TVAL3), is proposed to reduce the background noise imposed by the low transmittance. Experimental results show that compressive ghost imaging can dramatically suppress the background noise and enhance the contrast of the image. The relationship between the quality of the reconstructed image and the complexity of the object itself is also discussed.
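For readers unfamiliar with the reconstruction step, a minimal NumPy sketch of the standard traditional and differential ghost-imaging estimators is shown below. The object, pattern statistics, and dimensions are hypothetical, and this is a textbook correlation estimator, not the paper's experimental pipeline or the TVAL3 compressive method:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 16x16 transmissive object (1 = transparent, 0 = opaque).
obj = np.zeros((16, 16))
obj[4:12, 6:10] = 1.0

n = 5000
patterns = rng.random((n, 16, 16))            # random speckle patterns
bucket = (patterns * obj).sum(axis=(1, 2))    # single-pixel "bucket" signal
ref_sum = patterns.sum(axis=(1, 2))           # total reference intensity

# Traditional GI: correlate bucket values with the reference patterns.
gi = (bucket[:, None, None] * patterns).mean(0) - bucket.mean() * patterns.mean(0)

# Differential GI: subtract the scaled reference-sum fluctuation first,
# which removes much of the constant background term.
d_bucket = bucket - (bucket.mean() / ref_sum.mean()) * ref_sum
dgi = (d_bucket[:, None, None] * patterns).mean(0)

# Both reconstructions are brighter inside the object than outside.
print(gi[obj == 1].mean() > gi[obj == 0].mean())    # True
print(dgi[obj == 1].mean() > dgi[obj == 0].mean())  # True
```

For a low-transmittance object the bucket fluctuations shrink relative to the background term, which is the regime where the abstract reports that both correlation estimators blur and a compressive-sensing reconstruction becomes necessary.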


2007 ◽  
Vol 97 (2) ◽  
pp. 1470-1484 ◽  
Author(s):  
Yale E. Cohen ◽  
Frédéric Theunissen ◽  
Brian E. Russ ◽  
Patrick Gill

Communication is one of the fundamental components of both human and nonhuman animal behavior. Auditory communication signals (i.e., vocalizations) are especially important in the socioecology of several species of nonhuman primates such as rhesus monkeys. In rhesus, the ventrolateral prefrontal cortex (vPFC) is thought to be part of a circuit involved in representing vocalizations and other auditory objects. To further our understanding of the role of the vPFC in processing vocalizations, we characterized the spectrotemporal features of rhesus vocalizations, compared these features with other classes of natural stimuli, and then related the rhesus-vocalization acoustic features to neural activity. We found that the range of these spectrotemporal features was similar to that found in other ensembles of natural stimuli, including human speech, and identified the subspace of these features that would be particularly informative to discriminate between different vocalizations. In a first neural study, however, we found that the tuning properties of vPFC neurons did not emphasize these particularly informative spectrotemporal features. In a second neural study, we found that a first-order linear model (the spectrotemporal receptive field) is not a good predictor of vPFC activity. The results of these two neural studies are consistent with the hypothesis that the vPFC is not involved in coding the first-order acoustic properties of a stimulus but is involved in processing the higher-order information needed to form representations of auditory objects.


2020 ◽  
Vol ahead-of-print (ahead-of-print) ◽  
Author(s):  
Kirsten van den Heuij ◽  
Theo Goverts ◽  
Karin Neijenhuis ◽  
Martine Coene

Purpose: As oral communication in higher education is vital, good classroom acoustics are needed to pass the verbal message to university students. Non-auditory factors such as academic language, a non-native educational context and a diversity of acoustic settings in different types of classrooms affect students' speech understanding and performance. The purpose of this study is to find out whether the acoustic properties of higher educational teaching contexts meet the recommended reference levels.
Design/methodology/approach: Background noise levels and the Speech Transmission Index (STI) were assessed in 45 unoccupied university classrooms (15 lecture halls, 16 regular classrooms and 14 skills laboratories).
Findings: 41 classrooms surpassed the maximum reference level for background noise of 35 dB(A), and 17 exceeded the reference level of 40 dB(A). At a five-meter distance facing the speaker, six classrooms showed excellent speech intelligibility, while at more representative listening positions none did. As the acoustic characteristics in a majority of the classrooms exceeded the available reference levels, speech intelligibility was likely to be insufficient.
Originality/value: This study assesses the acoustics in academic classrooms against the available acoustic reference levels. Non-acoustic factors, such as academic language complexity and the (non-)nativeness of students and teaching staff, put higher cognitive demands on listeners in higher education and need to be taken into account when applying these reference levels in daily practice, for regular students and particularly for students with language or hearing disabilities.
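The pass/fail comparison the study performs can be operationalized trivially. The 35 dB(A) reference is from the study itself; the STI rating bands below are an assumed IEC 60268-16 style qualification scale, not taken from the abstract:

```python
def noise_ok(background_db_a, reference_db_a=35.0):
    """Check an unoccupied-classroom background level against the
    35 dB(A) reference used in the study."""
    return background_db_a <= reference_db_a

def sti_rating(sti):
    """Qualification bands commonly used for the Speech Transmission
    Index (an assumed IEC 60268-16 style scale)."""
    bands = [(0.30, "bad"), (0.45, "poor"), (0.60, "fair"),
             (0.75, "good"), (1.00, "excellent")]
    for upper, label in bands:
        if sti <= upper:
            return label

print(noise_ok(38.0))    # False: exceeds the 35 dB(A) reference
print(sti_rating(0.80))  # excellent
```

On this kind of scale, "excellent" intelligibility at the speaker-facing position but only "fair" at representative seats is exactly the pattern the findings describe.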


Author(s):  
Agustín J Elias-Costa ◽  
Julián Faivovich

Abstract Cascades and fast-flowing streams impose severe restrictions on acoustic communication, with loud broadband background noise hampering signal detection and recognition. In this context, diverse behavioural features, such as ultrasound production and visual displays, have arisen in the evolutionary history of torrent-dwelling amphibians. The importance of the vocal sac in multimodal communication is being increasingly recognized, and recently a new vocal sac visual display has been discovered: unilateral inflation of paired vocal sacs. In the diurnal stream-breeding Hylodidae from the Atlantic forest, where it was first described, this behaviour is likely to be enabled by a unique anatomical configuration of the vocal sacs. To assess whether other taxa share this exceptional structure, we surveyed torrent-dwelling species with paired vocal sacs across the anuran tree of life and examined the vocal sac anatomy of exemplar species across 18 families. We found striking anatomical convergence among hylodids and species of the distantly related basal ranid genera Staurois, Huia, Meristogenys and Amolops. Ancestral character state reconstruction identified three new synapomorphies for Ranidae. Furthermore, we surveyed the vocal sac configuration of other anuran species that perform visual displays and report observations on what appears to be unilateral inflation of paired vocal sacs in Staurois guttatus, an extremely rare behaviour in anurans.


2017 ◽  
Vol 42 (2) ◽  
pp. 333-345 ◽  
Author(s):  
Rostam Golmohammadi ◽  
Mohsen Aliabadi ◽  
Trifah Nezami

Abstract Tasks requiring intensive concentration are more vulnerable to noise than routine tasks. Due to the high mental workload of bank employees, this study aimed to evaluate acoustic comfort in open-space banks based on speech intelligibility and noise annoyance metrics. Acoustic metrics including the preferred noise criterion (PNC), speech transmission index (STI) and signal-to-noise ratio (SNR) were measured in seventeen banks located in Hamadan, a western province of Iran. For subjective noise annoyance assessments, 100-point noise annoyance scales were completed by bank employees during work. Based on the STI (0.56±0.09) and SNR (20.5±8.2 dB) values, speech intelligibility at bank workstations was higher than the satisfactory level. However, PNC values in bank spaces averaged 48.2±5.5 dB, higher than the recommended limit for public spaces, and 95% of the employees were annoyed by background noise levels. The results show that irrelevant speech is the main source of subjective noise annoyance among employees, and loss of concentration is its main consequence. The acoustic properties of bank spaces thus provide sufficient speech intelligibility, while staff noise annoyance remains unacceptable. Because workstations in open-space banks are close together, a very short distraction distance is necessary; speech privacy can therefore be prioritised over speech intelligibility. It is recommended that current desk screens be redesigned to reduce irrelevant speech between nearby workstations, and that staff be trained about acoustic comfort to help manage irrelevant speech during work time.


Sensors ◽  
2022 ◽  
Vol 22 (1) ◽  
pp. 374
Author(s):  
Mohamed Nabih Ali ◽  
Daniele Falavigna ◽  
Alessio Brutti

Robustness against background noise and reverberation is essential for many real-world speech-based applications. One way to achieve this robustness is to employ a speech enhancement front-end that, independently of the back-end, removes the environmental perturbations from the target speech signal. However, although the enhancement front-end typically increases speech quality from an intelligibility perspective, it tends to introduce distortions that deteriorate the performance of subsequent processing modules. In this paper, we investigate strategies for jointly training neural models for both speech enhancement and the back-end, optimizing a combined loss function. In this way, the enhancement front-end is guided by the back-end to provide more effective enhancement. Unlike typical state-of-the-art approaches that operate on spectral features or neural embeddings, we operate in the time domain, processing raw waveforms in both components. As an application scenario, we consider intent classification in noisy environments. In particular, the front-end speech enhancement module is based on Wave-U-Net, while the intent classifier is implemented as a temporal convolutional network. Exhaustive experiments are reported on versions of the Fluent Speech Commands corpus contaminated with noises from the Microsoft Scalable Noisy Speech Dataset, providing insight into the most promising training approaches.
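The combined loss is not specified in the abstract; a generic sketch of such a joint objective, with an assumed L1 waveform term for enhancement, a cross-entropy term for intent classification, and an assumed weighting alpha, might look like this (NumPy stand-in, not the paper's Wave-U-Net/TCN implementation):

```python
import numpy as np

def combined_loss(enhanced, clean, intent_logits, intent_label, alpha=0.5):
    """Joint objective sketch: weighted sum of an enhancement loss
    (L1 on raw waveforms) and a back-end classification loss
    (softmax cross-entropy on intent logits). The loss choices and
    alpha are assumptions, not the paper's exact formulation."""
    l_enh = np.abs(enhanced - clean).mean()
    # numerically stable softmax cross-entropy
    z = intent_logits - intent_logits.max()
    log_probs = z - np.log(np.exp(z).sum())
    l_clf = -log_probs[intent_label]
    return alpha * l_enh + (1 - alpha) * l_clf

clean = np.zeros(16000)       # 1 s of silence as a toy target waveform
enhanced = clean + 0.1        # constant residual distortion
loss = combined_loss(enhanced, clean, np.array([2.0, 0.5, -1.0]), 0)
print(round(float(loss), 3))  # 0.171
```

Backpropagating this single scalar through both networks is what lets the back-end steer the front-end toward enhancements that help classification rather than merely maximizing signal quality.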

