auditory noise
Recently Published Documents


TOTAL DOCUMENTS: 42 (five years: 11)
H-INDEX: 12 (five years: 1)

2021 · Vol 11 (11) · pp. 1416
Author(s): Anna-Lisa Schuler, Giovanni Pellegrino

Background: Functional magnetic resonance imaging (fMRI) is one of the most important neuroimaging techniques, yet the acoustic noise of the MR scanner is an unavoidable by-product of data acquisition. We hypothesized that this auditory noise affects autonomic activity. Methods: We measured heart rate variability (HRV) while exposing 30 healthy subjects to fMRI noise. Results: fMRI noise increased parasympathetic nervous system (PNS) activity compared with both silence and white noise, and decreased sympathetic nervous system (SNS) activity compared with white noise. Conclusions: The influence of MR scanner noise on the autonomic nervous system should be taken into account when designing and interpreting fMRI experiments.
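The abstract does not detail the HRV analysis. A common frequency-domain approach, sketched below under assumed parameters (not necessarily the authors' pipeline), treats high-frequency (HF, 0.15–0.40 Hz) power of the RR-interval series as a parasympathetic (PNS) index and the LF/HF ratio as a rough sympathetic (SNS) indicator:

```python
# Illustrative HRV frequency-domain analysis under assumed parameters
# (not necessarily the authors' pipeline). HF power (0.15-0.40 Hz) is a
# common PNS index; the LF/HF ratio is a rough SNS indicator.
import numpy as np
from scipy.interpolate import interp1d
from scipy.signal import welch

def hrv_band_power(rr_s, fs_resample=4.0):
    """LF/HF band power from RR intervals (in seconds)."""
    t = np.cumsum(rr_s)                                  # beat times
    t_even = np.arange(t[0], t[-1], 1.0 / fs_resample)   # even sampling grid
    tach = interp1d(t, rr_s, kind='cubic')(t_even)       # resampled tachogram
    tach -= tach.mean()                                  # remove DC offset
    f, psd = welch(tach, fs=fs_resample, nperseg=256)
    lf_mask = (f >= 0.04) & (f < 0.15)
    hf_mask = (f >= 0.15) & (f < 0.40)
    lf = np.trapz(psd[lf_mask], f[lf_mask])
    hf = np.trapz(psd[hf_mask], f[hf_mask])
    return lf, hf, lf / hf

# Example on simulated RR intervals (~60 bpm with a mild HF-band oscillation):
rng = np.random.default_rng(0)
rr = 1.0 + 0.05 * np.sin(0.5 * np.pi * np.arange(300)) + 0.02 * rng.standard_normal(300)
lf, hf, ratio = hrv_band_power(rr)
print(f"LF={lf:.2e}  HF={hf:.2e}  LF/HF={ratio:.2f}")
```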


2021 · Vol 15
Author(s): Isma Zulfiqar, Michelle Moerel, Agustin Lage-Castellanos, Elia Formisano, Peter De Weerd

Recent studies have highlighted possible contributions of direct connectivity between early sensory cortices to audiovisual integration. Anatomical connections between the early auditory and visual cortices are concentrated in visual sites representing the peripheral field of view. Here, we aimed to engage early sensory interactive pathways with simple, far-peripheral audiovisual stimuli (auditory noise and visual gratings). Using a modulation detection task in one modality performed at an 84%-correct threshold level, we investigated multisensory interactions by simultaneously presenting weak stimuli from the other modality in which the temporal modulation was barely detectable (at 55% and 65% correct detection performance). Furthermore, we manipulated the temporal congruence between the cross-sensory streams. We found evidence for an influence of barely detectable visual stimuli on response times for auditory stimuli, but not for the reverse effect. These visual-to-auditory influences occurred only for specific phase differences (at onset) between the modulated audiovisual stimuli. We discuss our findings in light of a possible role of direct interactions between early visual and auditory areas, along with contributions from higher-order association cortex. In sum, our results extend the behavioral evidence of audiovisual processing to the far periphery and suggest, within this specific experimental setting, an asymmetry between the auditory influence on visual processing and the visual influence on auditory processing.
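As a rough illustration of the stimulus manipulation, the sketch below generates amplitude-modulated auditory noise with a controllable modulation phase, which is the kind of signal needed to set an onset phase difference between the auditory and visual streams. All parameter values (modulation rate, depth, sample rate) are assumptions, not those of the study:

```python
# Sketch of an amplitude-modulated auditory-noise stimulus with a
# controllable modulation phase. All parameter values are assumptions.
import numpy as np

def am_noise(duration_s=1.0, fs=44100, mod_rate_hz=4.0,
             mod_depth=0.5, phase_deg=0.0, seed=0):
    """White-noise carrier with a sinusoidal amplitude envelope."""
    rng = np.random.default_rng(seed)
    t = np.arange(int(duration_s * fs)) / fs
    carrier = rng.standard_normal(t.size)
    envelope = 1.0 + mod_depth * np.sin(2 * np.pi * mod_rate_hz * t
                                        + np.deg2rad(phase_deg))
    stim = carrier * envelope
    return stim / np.max(np.abs(stim))       # normalise to +/- 1

# Two streams whose modulations start 90 degrees apart; the same envelope
# could equally drive the contrast of a visual grating.
stream_a = am_noise(phase_deg=0.0)
stream_b = am_noise(phase_deg=90.0, seed=1)
```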


2021
Author(s): Corrina Maguinness, Katharina von Kriegstein

Recognising the identity of voices is a key ingredient of communication. Visual mechanisms support this ability: recognition is better for voices previously learned with their corresponding face (compared to a control condition). This so-called ‘face-benefit’ is supported by the fusiform face area (FFA), a region sensitive to facial form and identity. Behavioural findings indicate that the face-benefit increases in noisy listening conditions; the neural mechanisms for this increase are unknown. Here, using functional magnetic resonance imaging, we examined responses in face-sensitive regions while participants recognised the identity of auditory-only speakers (previously learned by face) in high (SNR -4 dB) and low (SNR +4 dB) levels of auditory noise. We observed a face-benefit at both noise levels for most participants (16 of 21). In high noise, the recognition of face-learned speakers engaged the right posterior superior temporal sulcus motion-sensitive face area (pSTS-mFA), a region implicated in the processing of dynamic facial cues. In the 16 participants with a behavioural face-benefit, the face-benefit in high noise also correlated positively with increased functional connectivity between this region and voice-sensitive regions in the temporal lobe. In low noise, the face-benefit was robustly associated with increased responses in the FFA and, to a lesser extent, the right pSTS-mFA. The findings highlight the remarkably adaptive nature of the visual network that supports voice-identity recognition in auditory-only listening conditions.
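A hedged sketch of how a noise level such as SNR -4 dB or +4 dB can be produced: scale the noise track so its power sits at the target ratio relative to the speech power. This is a generic recipe, not the authors' exact stimulus pipeline:

```python
# Sketch: mix a speech waveform with noise at a target SNR in dB
# (e.g., -4 dB for the high-noise and +4 dB for the low-noise condition).
# Generic recipe, not the authors' exact stimulus pipeline.
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals snr_db."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    return speech + noise * np.sqrt(target_p_noise / p_noise)

rng = np.random.default_rng(0)
speech = np.sin(2 * np.pi * 220 * np.arange(16000) / 16000)  # stand-in signal
noise = rng.standard_normal(16000)
high_noise = mix_at_snr(speech, noise, snr_db=-4.0)
low_noise = mix_at_snr(speech, noise, snr_db=4.0)
```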


2020 · Vol 14
Author(s): Jun Xie, Guozhi Cao, Guanghua Xu, Peng Fang, Guiling Cui, ...

According to stochastic resonance (SR) theory, noise can play a beneficial role in non-linear systems, including the human brain. Several studies have investigated single-modal SR, and the cross-modal SR phenomenon has been confirmed across different human sensory systems. In our study, we propose a cross-modal-SR-enhanced brain–computer interface (BCI) in which auditory noise is applied alongside visual stimuli. Fast Fourier transform and canonical correlation analysis were used to evaluate the influence of the noise; the results indicated that a moderate amount of auditory noise can enhance periodic components in visual responses. The directed transfer function was applied to investigate functional connectivity patterns, with the flow gain value measuring the degree of activation of specific brain regions during information transmission. Flow gain maps showed that a moderate intensity of auditory noise activated the relevant brain areas to a greater extent. Further analysis with the weighted phase-lag index (wPLI) revealed that phase synchronization between visual and auditory regions was significantly enhanced under auditory noise. Our study confirms the existence of cross-modal SR between visual and auditory regions and achieves higher recognition accuracy with a shorter time window, findings that can be used to improve the performance of visual BCIs.
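The abstract names canonical correlation analysis (CCA) as the recognition method. The sketch below implements the textbook CCA frequency classifier for SSVEP-style visual BCIs, correlating multichannel EEG with sine/cosine references at each candidate frequency; it follows the standard recipe rather than the authors' exact code:

```python
# Textbook CCA-based frequency recognition for SSVEP-style visual BCIs,
# the analysis method named in the abstract; standard recipe, not the
# authors' exact code.
import numpy as np
from sklearn.cross_decomposition import CCA

def make_refs(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine reference set at `freq` and its harmonics."""
    t = np.arange(n_samples) / fs
    return np.column_stack([f(2 * np.pi * h * freq * t)
                            for h in range(1, n_harmonics + 1)
                            for f in (np.sin, np.cos)])

def cca_score(eeg, refs):
    """Largest canonical correlation between EEG (samples x channels)
    and the reference set."""
    u, v = CCA(n_components=1).fit_transform(eeg, refs)
    return np.corrcoef(u[:, 0], v[:, 0])[0, 1]

def classify(eeg, candidate_freqs, fs):
    scores = [cca_score(eeg, make_refs(f, fs, eeg.shape[0])) for f in candidate_freqs]
    return candidate_freqs[int(np.argmax(scores))]

# Toy check: 8-channel, 2 s window at 250 Hz containing a 10 Hz response.
fs, n = 250, 500
rng = np.random.default_rng(0)
t = np.arange(n) / fs
eeg = np.outer(np.sin(2 * np.pi * 10 * t), np.ones(8)) + rng.standard_normal((n, 8))
print(classify(eeg, [8.0, 10.0, 12.0, 15.0], fs))   # expected: 10.0
```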


Author(s): Rafael Marin-Campos, Josep Dalmau, Albert Compte, Daniel Linares

Psychophysical tests are commonly carried out using software applications running on desktop or laptop computers, but running the software on mobile handheld devices such as smartphones or tablets can have advantages in some situations. Here, we present StimuliApp, an open-source application in which the user can create psychophysical tests on the iPad and the iPhone by means of a system of menus. A wide range of templates for creating stimuli is available, including patches, gradients, gratings, checkerboards, random dots, texts, tones, and auditory noise. Images, videos, and audio stored in files can also be presented. The application was developed natively for iPadOS and iOS using the low-level interface Metal to access the graphics processing unit, which results in high timing performance.
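For illustration only, the sketch below shows in numpy what two of the listed stimulus templates compute (a sinusoidal grating and Gaussian auditory noise); StimuliApp itself implements these natively in Swift/Metal, and none of the names here come from its API:

```python
# Illustrative numpy equivalents of two stimulus templates; not StimuliApp code.
import numpy as np

def grating(size=256, cycles=8.0, orientation_deg=45.0, phase_rad=0.0):
    """Sinusoidal luminance grating with values in [0, 1]."""
    y, x = np.mgrid[0:size, 0:size] / size
    theta = np.deg2rad(orientation_deg)
    ramp = x * np.cos(theta) + y * np.sin(theta)   # spatial phase ramp
    return 0.5 + 0.5 * np.sin(2 * np.pi * cycles * ramp + phase_rad)

def auditory_noise(duration_s=1.0, fs=44100, seed=0):
    """Gaussian white noise normalised to +/- 1."""
    noise = np.random.default_rng(seed).standard_normal(int(duration_s * fs))
    return noise / np.max(np.abs(noise))
```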


2020 · Vol 34 (3) · pp. 171-178
Author(s): Samantha Major, Kimberly Carpenter, Logan Beyer, Hannah Kwak, Geraldine Dawson, ...

Auditory sensory gating is commonly assessed using the Paired-Click Paradigm (PCP), an electroencephalography (EEG) task in which two identical sounds are presented sequentially and the brain’s inhibitory response to the second sound is measured. Many clinical populations demonstrate reduced P50 and/or N100 suppression. Testing sensory gating in children may help identify individuals at risk for neurodevelopmental disorders such as autism spectrum disorder (ASD) and attention-deficit/hyperactivity disorder (ADHD) earlier, which could lead to better outcomes. Minimal research has been done with children because lengthy EEG experiments are difficult for young children, who must sit still for long periods of time. We designed a modified, potentially child-friendly version of the PCP and evaluated it in typically developing adults. The PCP was administered twice: once in a traditional silent room (silent-movie condition) and once with an audible movie playing (audible-movie condition) to minimize boredom and enhance behavioral compliance. We tested whether P50 and N100 suppression were influenced by the auditory background noise from the movie. N100 suppression was observed in both hemispheres in the silent-movie condition and in the left hemisphere only in the audible-movie condition, though suppression was attenuated in the audible-movie condition. P50 suppression was not observed in either condition. N100 sensory gating was successfully elicited with an audible movie playing during the PCP, supporting the use of the modified task for future research in both children and adults.
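Sensory gating in the PCP is commonly quantified as the ratio of the evoked response to the second click (S2) over the first (S1), with values below 1 indicating suppression. The sketch below computes such a ratio from epoched data under assumed timing parameters; it is a generic illustration, not the authors' pipeline (which, for N100, would typically use the signed negative peak rather than the absolute magnitude used here for brevity):

```python
# Generic paired-click gating metric: the S2/S1 amplitude ratio, where
# values < 1 indicate suppression of the response to the second click.
# Timing parameters are assumptions, not the authors' settings.
import numpy as np

def gating_ratio(epochs, fs, window_s, click2_onset_s=0.5):
    """epochs: trials x samples array; each trial starts at the first click
    and contains both clicks. window_s: (start, end) of the component
    window after each click, in seconds."""
    erp = epochs.mean(axis=0)                    # average over trials
    def peak_amp(click_onset_s):
        i0 = int((click_onset_s + window_s[0]) * fs)
        i1 = int((click_onset_s + window_s[1]) * fs)
        # Simplification: peak magnitude; N100 analyses use the negative peak.
        return np.max(np.abs(erp[i0:i1]))
    return peak_amp(click2_onset_s) / peak_amp(0.0)

# Example with an N100-like window (0.08-0.14 s after each click) and
# clicks 0.5 s apart; the data here are noise, purely to show the call.
rng = np.random.default_rng(0)
fs = 500
epochs = rng.standard_normal((40, fs))           # 40 trials x 1 s
print(gating_ratio(epochs, fs, window_s=(0.08, 0.14)))
```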


2020 · Vol 82 (7) · pp. 3544-3557
Author(s): Jemaine E. Stacey, Christina J. Howard, Suvobrata Mitra, Paula C. Stacey

Seeing a talker’s face can aid audiovisual (AV) integration when speech is presented in noise. However, few studies have simultaneously manipulated auditory and visual degradation. We aimed to establish how degrading the auditory and visual signals affects AV integration. Where people look on the face in this context is also of interest: Buchan, Paré, and Munhall (Brain Research, 1242, 162–171, 2008) found that fixations on the mouth increased in the presence of auditory noise, whilst Wilson, Alsius, Paré, and Munhall (Journal of Speech, Language, and Hearing Research, 59(4), 601–615, 2016) found that mouth fixations decreased with decreasing visual resolution. In Condition 1, participants listened to clear speech; in Condition 2, they listened to vocoded speech designed to simulate the information provided by a cochlear implant. Speech was presented in three levels of auditory noise and three levels of visual blurring. Adding noise to the auditory signal increased McGurk responses, while blurring the visual signal decreased them. Participants fixated the mouth more on trials in which the McGurk effect was perceived. Adding auditory noise led people to fixate the mouth more, while visual degradation led them to fixate the mouth less. Combined, the results suggest that modality preference and where people look during AV integration of incongruent syllables vary according to the quality of the information available.
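The vocoded-speech condition refers to the standard cochlear-implant simulation technique: a noise-excited channel vocoder that splits speech into frequency bands, extracts each band's envelope, and uses the envelopes to modulate bandlimited noise. A minimal sketch, with an assumed band count and frequency range (the abstract gives no parameters):

```python
# Minimal noise-excited channel vocoder: bandpass filterbank, per-band
# envelope extraction, envelopes re-imposed on bandlimited noise. Band
# count and frequency range are assumptions; real cochlear-implant
# simulations also low-pass the envelopes (omitted here for brevity).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(speech, fs, n_bands=8, f_lo=100.0, f_hi=7000.0, seed=0):
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)    # log-spaced band edges
    noise = np.random.default_rng(seed).standard_normal(speech.size)
    out = np.zeros_like(speech)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype='bandpass', fs=fs, output='sos')
        band = sosfiltfilt(sos, speech)              # analysis band
        envelope = np.abs(hilbert(band))             # per-band envelope
        out += sosfiltfilt(sos, noise) * envelope    # modulate bandlimited noise
    return out / np.max(np.abs(out))

# Usage with a stand-in waveform (any mono speech signal sampled at fs works):
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 220 * t) * (1 + 0.5 * np.sin(2 * np.pi * 3 * t))
vocoded = noise_vocode(speech, fs)
```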

