The Impact of Musicianship on the Cortical Mechanisms Related to Separating Speech from Background Noise

2015 ◽  
Vol 27 (5) ◽  
pp. 1044-1059 ◽  
Author(s):  
Benjamin Rich Zendel ◽  
Charles-David Tremblay ◽  
Sylvie Belleville ◽  
Isabelle Peretz

Musicians have enhanced auditory processing abilities. In some studies, these abilities are paralleled by an improved understanding of speech in noisy environments, partially due to more robust encoding of speech signals in noise at the level of the brainstem. Little is known about the impact of musicianship on attention-dependent cortical activity related to lexical access during a speech-in-noise task. To address this issue, we presented musicians and nonmusicians with single words mixed with three levels of background noise, across two conditions, while monitoring electrical brain activity. In the active condition, listeners repeated the words aloud, and in the passive condition, they ignored the words and watched a silent film. When background noise was most intense, musicians repeated more words correctly compared with nonmusicians. Auditory evoked responses were attenuated and delayed with the addition of background noise. In musicians, P1 amplitude was marginally enhanced during active listening and was related to task performance in the most difficult listening condition. By comparing ERPs from the active and passive conditions, we isolated an N400 related to lexical access. The amplitude of the N400 was not influenced by the level of background noise in musicians, whereas N400 amplitude increased with the level of background noise in nonmusicians. In nonmusicians, the increase in N400 amplitude was related to a reduction in task performance. In musicians only, there was a rightward shift of the sources contributing to the N400 as the level of background noise increased. This pattern of results supports the hypothesis that encoding of speech in noise is more robust in musicians and suggests that this facilitates lexical access. Moreover, the shift in sources suggests that musicians, to a greater extent than nonmusicians, may increasingly rely on acoustic cues to understand speech in noise.

2018 ◽  
Author(s):  
Johanna Wind ◽  
Wolfgang Schöllhorn

Abstract
Dance, one of the earliest cultural assets of mankind, is practised in different cultures, mostly for wellbeing or for treating psycho-physiological disorders such as Parkinson's disease, depression, and autism. However, the underlying neurophysiological mechanisms are still unclear, and only few studies address the effects of particular dance styles. For a first impression, we were interested in the effects of modern jazz dance (MJD) on brain activation, which would contribute to the understanding of these mechanisms. Eleven female subjects rehearsed an MJD choreography for three weeks (1 h per week) and thereafter underwent electroencephalographic (EEG) measurements in a crossover design. The objective was to establish the differences between dancing physically and participating only mentally, with or without music. Therefore, each subject completed the following four test conditions: dancing physically with and without music, and dancing mentally with and without music. Each condition was performed for 15 minutes. Before and after each condition, EEG activity was recorded under resting conditions (2 min eyes open, 2 min eyes closed), followed by a wash-out phase of 10 minutes. The results of the study revealed no time effects for the mental dancing conditions, either with or without music. The physical dancing conditions, with and without music, were followed by increased electrical brain activation in the theta, alpha-1, alpha-2, beta, and gamma frequency bands across the entire scalp. Especially the higher frequencies (alpha-2, beta, gamma) showed increased activation across all brain areas. Brain activity was higher for the physical dancing conditions than for the mental dancing conditions. No statistically significant differences were found between dancing with and without music.
Our findings demonstrate the immediate influence of modern jazz dance and its sweeping effects across all brain areas and all measured frequency bands when dancing physically. In comparison, merely mental dancing does not produce similar effects.
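As a rough illustration of the band-power measure such EEG analyses rest on, the sketch below computes mean spectral power in the theta, alpha-1, alpha-2, beta, and gamma bands from a simulated single-channel resting segment. The band edges, sampling rate, and signal are illustrative assumptions, not taken from the study.

```python
import numpy as np

def band_power(eeg, fs, bands):
    """Mean spectral power per frequency band from a single-channel EEG segment."""
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(eeg)) ** 2 / (fs * len(eeg))  # simple periodogram
    return {name: psd[(freqs >= lo) & (freqs < hi)].mean()
            for name, (lo, hi) in bands.items()}

# Illustrative band edges in Hz; the paper does not state its exact limits.
BANDS = {"theta": (4, 8), "alpha-1": (8, 10), "alpha-2": (10, 13),
         "beta": (13, 30), "gamma": (30, 45)}

fs = 250                              # assumed sampling rate (Hz)
t = np.arange(0, 2 * 60, 1.0 / fs)    # a 2-minute resting recording
rng = np.random.default_rng(0)
# Synthetic signal: a dominant 10 Hz (alpha-2) rhythm plus broadband noise.
eeg = np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(t.size)
powers = band_power(eeg, fs, BANDS)
```

A real analysis would average such estimates over epochs and electrodes before comparing pre- and post-condition recordings.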


2020 ◽  
pp. 030573562095361
Author(s):  
Ebtesam Sajjadi ◽  
Ali Mohammadzadeh ◽  
Nushin Sayadi ◽  
Ahmadreza Nazeri ◽  
Seyyed Mehdi Tabatabai

Everyday communication mostly occurs in the presence of various background noises and competing talkers. Studies have shown that musical training can have a positive effect on auditory processing, particularly in challenging listening situations. To our knowledge, no group has specifically studied the advantage of musical training for the perception of consonants in the presence of background noise. We hypothesized that the musician advantage in speech-in-noise processing may also result in enhanced perception of speech units such as consonants in noise. Therefore, this study aimed to compare the recognition of stops and fricatives, which constitute the highest number of Persian consonants, in the presence of 12-talker babble noise between musicians and non-musicians. For this purpose, stops and fricatives were presented in a consonant-vowel-consonant format and embedded in three signal-to-noise ratios of 0, −5, and −10 dB. The study was conducted on 40 young listeners (20 musicians and 20 non-musicians) with normal hearing. Our results indicated that musicians outperformed non-musicians in the recognition of stops and fricatives at all three signal-to-noise ratios. These findings provide important evidence about the impact of musical instruction on the processing of consonants and highlight the role of musical training in perceptual abilities.
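The SNR manipulation described above is typically implemented by scaling the noise relative to the clean token before mixing. A minimal sketch, with synthetic signals standing in for the CVC recordings and the 12-talker babble:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so the speech-to-noise power ratio equals `snr_db`, then mix."""
    p_speech = np.mean(speech ** 2)
    p_noise = np.mean(noise ** 2)
    target_p_noise = p_speech / (10 ** (snr_db / 10))
    scaled = noise * np.sqrt(target_p_noise / p_noise)
    return speech + scaled

rng = np.random.default_rng(1)
token = rng.standard_normal(8000)   # stand-in for a CVC recording
babble = rng.standard_normal(8000)  # stand-in for 12-talker babble

# Generate the three listening conditions used in the study.
for snr in (0, -5, -10):
    mixed = mix_at_snr(token, babble, snr)
```

Note that at −10 dB the noise carries ten times the power of the speech, which is why recognition differences between groups emerge most clearly there.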


Author(s):  
Behieh Kohansal ◽  
Mehdi Asghari ◽  
Sirvan Najafi ◽  
Fahimeh Hamedi

Background and Aim: Tinnitus is one of the most difficult challenges in audiology and otology. Previous studies have shown that tinnitus may interfere with the function of the central auditory system (CAS). Involvement of CAS abilities, including speech perception and auditory processing, leads to serious problems in people with tinnitus. Given the lack of information about the impact of tinnitus on the CAS and its function, and given that there is no standardized protocol for the assessment and management of tinnitus, this study aimed to review the studies on the effect of tinnitus on CAS function. Recent Findings: Sixteen eligible articles were reviewed. Deficits in temporal and spectral resolution, frequency discrimination, and speech perception were reported in patients with tinnitus, especially in background noise, even in tinnitus patients with normal hearing. Conclusion: Assessment of central auditory processing and speech perception in noise seems to be useful for the proper management of tinnitus in clinical practice. Keywords: Tinnitus; auditory system; central auditory processing; speech in noise performance


2021 ◽  
Author(s):  
Annalisa Pascarella ◽  
Ezequiel Mikulan ◽  
Federica Sciacchitano ◽  
Simone Sarasso ◽  
Annalisa Rubino ◽  
...  

Abstract
Electrical source imaging (ESI) aims at reconstructing electrical brain activity from measurements of the electric field on the scalp. Even though the localization of single focal sources should be relatively straightforward, different methods provide diverse solutions due to their different underlying assumptions. Moreover, the input parameters of each method further affect its solution, making localization even more challenging. In addition, validations and comparisons are typically performed either on synthetic data or through post-operative outcomes, in both cases with considerable limitations. We use an in vivo high-density EEG dataset recorded during intracranial single-pulse electrical stimulation, in which the true sources are substantially dipolar and their locations are known. We compare ten different ESI methods under multiple choices of input parameters to assess the accuracy of the best reconstruction, as well as the impact of the parameters on localization performance. The best reconstructions often fall within 1 cm of the true source, with more accurate methods outperforming less accurate ones by 1 cm on average. Expectedly, dipolar methods tend to outperform distributed methods. Sensitivity to input parameters varies widely between methods. Depth weighting played no role for three of the six methods implementing it. In terms of regularization parameters, for several distributed methods SNR = 1 unexpectedly turned out to be the best choice among those tested. Our data show similar levels of accuracy of ESI techniques when applied to "conventional" (32-channel) and dense (64-, 128-, 256-channel) EEG recordings. Overall, these findings reinforce the importance that ESI may have in the clinical context, especially when applied to identify the surgical target in potential candidates for epilepsy surgery.
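The accuracy figures above boil down to the Euclidean distance between each estimated dipole position and the known true position, evaluated per method and per parameter setting. A minimal sketch, with hypothetical coordinates and regularization values (none taken from the paper):

```python
import numpy as np

def localization_error(est_mm, true_mm):
    """Euclidean distance (mm) between estimated and true dipole positions."""
    return float(np.linalg.norm(np.asarray(est_mm) - np.asarray(true_mm)))

# True source location (mm, head coordinates) and hypothetical estimates
# from one ESI method under several regularization settings.
true_pos = [32.0, -18.0, 45.0]
estimates = {0.1: [40.0, -15.0, 50.0],
             1.0: [34.0, -17.0, 46.0],
             10.0: [45.0, -10.0, 55.0]}

errors = {lam: localization_error(p, true_pos) for lam, p in estimates.items()}
best_lam = min(errors, key=errors.get)  # setting giving the smallest error
```

Repeating this over all stimulation sites, methods, and parameter grids yields both the "best reconstruction" accuracy and the parameter-sensitivity profile the abstract reports.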


2017 ◽  
Vol 60 (8) ◽  
pp. 2297-2309 ◽  
Author(s):  
Elina Niemitalo-Haapola ◽  
Sini Haapala ◽  
Teija Kujala ◽  
Antti Raappana ◽  
Tiia Kujala ◽  
...  

Purpose The aim of this study was to investigate developmental and noise-induced changes in central auditory processing indexed by event-related potentials in typically developing children. Method P1, N2, and N4 responses as well as mismatch negativities (MMNs) were recorded for standard syllables and for consonant, frequency, intensity, vowel, and vowel duration changes in silent and noisy conditions in the same 14 children at the ages of 2 and 4 years. Results The P1 and N2 latencies decreased and the N2, N4, and MMN amplitudes increased as the children developed. The amplitude changes were strongest at frontal electrodes. At both ages, background noise decreased the P1 amplitude, increased the N2 amplitude, and shortened the N4 latency. The noise-induced amplitude changes of P1, N2, and N4 were strongest frontally. Furthermore, background noise degraded the MMN. At both ages, the MMN was significantly elicited only by the consonant change and, at the age of 4 years, also by the vowel duration change during noise. Conclusions Developmental changes indexing maturation of central auditory processing were found in every response studied. Noise degraded sound encoding and echoic memory and impaired auditory discrimination at both ages. The older children were as vulnerable to the impact of noise as the younger children. Supplemental materials: https://doi.org/10.23641/asha.5233939


2021 ◽  
Author(s):  
Maansi Desai ◽  
Jade Holder ◽  
Cassandra Villarreal ◽  
Nat Clark ◽  
Liberty S. Hamilton

Abstract
In natural conversations, listeners must attend to what others are saying while ignoring extraneous background sounds. Recent studies have used encoding models to predict electroencephalography (EEG) responses to speech in noise-free listening situations, sometimes referred to as "speech tracking" in EEG. Researchers have analyzed how speech tracking changes with different types of background noise. It is unclear, however, whether neural responses from noisy and naturalistic environments can be generalized to more controlled stimuli. If encoding models for noisy, naturalistic stimuli are generalizable to other tasks, this could aid in data collection from populations who may not tolerate listening to more controlled, less-engaging stimuli for long periods of time. We recorded non-invasive scalp EEG while participants listened to speech without noise and to audiovisual speech stimuli containing overlapping speakers and background sounds. We fit multivariate temporal receptive field (mTRF) encoding models to predict EEG responses to pitch, the acoustic envelope, phonological features, and visual cues in both noise-free and noisy stimulus conditions. Our results suggested that neural responses to naturalistic stimuli were generalizable to more controlled data sets. EEG responses to speech in isolation were predicted accurately using phonological features alone, while responses to noisy speech were predicted more accurately when including both phonological and acoustic features. These findings may inform basic science research on speech-in-noise processing. Ultimately, they may also provide insight into auditory processing in people who are hard of hearing, who use a combination of audio and visual cues to understand speech in the presence of noise.

Significance Statement: Understanding spoken language in natural environments requires listeners to parse acoustic and linguistic information in the presence of other distracting stimuli.
However, most studies of auditory processing rely on highly controlled stimuli with no background noise, or with background noise inserted at specific times. Here, we compare models where EEG data are predicted based on a combination of acoustic, phonetic, and visual features in highly disparate stimuli – sentences from a speech corpus, and speech embedded within movie trailers. We show that modeling neural responses to highly noisy, audiovisual movies can uncover tuning for acoustic and phonetic information that generalizes to simpler stimuli typically used in sensory neuroscience experiments.
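An mTRF encoding model of the kind described here is commonly fit as ridge regression on time-lagged stimulus features. The sketch below simulates one EEG channel driven by a known temporal kernel of an acoustic-envelope feature and recovers that kernel; the lag range, regularization strength, and simulated data are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np

def lagged_design(features, lags):
    """Stack time-shifted copies of each feature column (one column per feature x lag)."""
    n, f = features.shape
    X = np.zeros((n, f * len(lags)))
    for j, lag in enumerate(lags):
        shifted = np.roll(features, lag, axis=0)
        shifted[:lag] = 0  # zero-pad rather than wrap (lags assumed non-negative)
        X[:, j * f:(j + 1) * f] = shifted
    return X

def fit_mtrf(features, eeg, lags, alpha=1.0):
    """Ridge-regression mTRF mapping lagged stimulus features to one EEG channel."""
    X = lagged_design(features, lags)
    return np.linalg.solve(X.T @ X + alpha * np.eye(X.shape[1]), X.T @ eeg)

rng = np.random.default_rng(0)
envelope = rng.standard_normal((2000, 1))          # acoustic envelope (stand-in)
true_kernel = np.array([0.0, 1.0, 0.5, 0.2, 0.0])  # simulated neural response over lags
eeg = lagged_design(envelope, range(5)) @ true_kernel + 0.1 * rng.standard_normal(2000)
w = fit_mtrf(envelope, eeg, range(5), alpha=1.0)   # recovered kernel, close to true_kernel
```

In practice the feature matrix would concatenate envelope, pitch, phonological, and visual columns, and the regularization strength would be chosen by cross-validation on held-out EEG.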


2019 ◽  
Vol 33 (2) ◽  
pp. 109-118
Author(s):  
Andrés Antonio González-Garrido ◽  
Jacobo José Brofman-Epelbaum ◽  
Fabiola Reveca Gómez-Velázquez ◽  
Sebastián Agustín Balart-Sánchez ◽  
Julieta Ramos-Loyo

Abstract. It has been generally accepted that skipping breakfast adversely affects cognition, mainly disturbing attentional processes. However, the effects of short-term fasting on brain functioning are still unclear. We aimed to evaluate the effect of skipping breakfast on cognitive processing by studying the electrical brain activity of young healthy individuals while they performed several working memory tasks. Accordingly, the behavioral results and event-related brain potentials (ERPs) of 20 healthy university students (10 males) were obtained and compared through analyses of variance (ANOVAs) during the performance of three n-back working memory (WM) tasks in two morning sessions, under both normal (after breakfast) and 12-hour fasting conditions. Significantly fewer correct responses were achieved during fasting, mainly affecting the higher WM load task. In addition, reaction times were prolonged with increased task difficulty, regardless of breakfast intake. ERPs showed a significant voltage decrement for N200 and P300 during fasting, while the amplitude of P200 notably increased. The results suggest that skipping breakfast disturbs earlier cognitive processing steps, particularly attention allocation, early decoding in working memory, and stimulus evaluation, and that this effect increases with task difficulty.


2015 ◽  
Vol 29 (4) ◽  
pp. 135-146 ◽  
Author(s):  
Miroslaw Wyczesany ◽  
Szczepan J. Grzybowski ◽  
Jan Kaiser

Abstract. In this study, the neural basis of emotional reactivity was investigated. Reactivity was operationalized as the impact of emotional pictures on the self-reported ongoing affective state and was used to divide the subjects into high- and low-responder groups. Independent sources of brain activity were identified, localized with the DIPFIT method, and clustered across subjects to analyse the visual evoked potentials to affective pictures. Four of the identified clusters revealed effects of reactivity. The earliest two started about 120 ms after stimulus onset and were located in the occipital lobe and the right temporoparietal junction. Another two, with a latency of 200 ms, were found in the orbitofrontal and right dorsolateral cortices. Additionally, differences in pre-stimulus alpha level over the visual cortex were observed between the groups. Attentional modulation of perceptual processes is proposed as an early source of emotional reactivity, forming an automatic mechanism of affective control. The role of top-down processes in affective appraisal and, finally, in the experience of ongoing emotional states is also discussed.


1981 ◽  
Vol 20 (03) ◽  
pp. 169-173
Author(s):  
J. Wagner ◽  
G. Pfurtscheller

The shape, latency, and amplitude of changes in electrical brain activity related to a stimulus (evoked potential) depend both on the stimulus parameters and on the background EEG at the time of stimulation. An adaptive, learning stimulation system is introduced, whereby the subject is stimulated (e.g., with light) whenever the EEG power is subthreshold and minimal. Additionally, the system is designed so that a certain number of stimuli can be given within a particular time interval. Based on this time criterion, a subject-specific threshold is calculated at the beginning of the experiment (preprocessing) and adapted to the EEG power during the processing mode to account for long-term fluctuations and trends in the EEG. The adaptation process is directed by a table containing the necessary correction values for the threshold. The system's experience is reflected in automatic corrections to this table. Because the corrected and improved table is stored after each experiment and used as the starting table for the next experiment, the system "learns". The system introduced here can be used both for evoked response studies and for alpha-feedback experiments.
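The stimulate-when-power-is-subthreshold logic with an adapting threshold can be caricatured as follows; the gamma-distributed band power, step size, and target stimulation rate are invented for illustration and stand in for the paper's table-driven correction scheme.

```python
import numpy as np

def run_session(power, threshold, target_rate, step=0.05):
    """Deliver a 'stimulus' whenever EEG power falls below the threshold,
    and nudge the threshold after each window so the cumulative stimulation
    rate drifts toward the target (a crude stand-in for the correction table)."""
    stims = 0
    for i, p in enumerate(power, start=1):
        if p < threshold:
            stims += 1
        rate = stims / i
        # Raise the threshold if stimulating too rarely, lower it if too often.
        threshold += step if rate < target_rate else -step
    return stims, threshold

rng = np.random.default_rng(2)
alpha_power = rng.gamma(shape=2.0, scale=1.0, size=500)  # simulated band power per window
n_stims, final_thr = run_session(alpha_power, threshold=0.5, target_rate=0.2)
```

In the original system the end-of-session threshold corrections are stored and reused as the starting table for the next experiment, which is what makes the setup "learn" across sessions.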

