Improved Speech in Noise Perception in the Elderly After 6 Months of Musical Instruction

2021 ◽  
Vol 15 ◽  
Author(s):  
Florian Worschech ◽  
Damien Marie ◽  
Kristin Jünemann ◽  
Christopher Sinke ◽  
Tillmann H. C. Krüger ◽  
...  

Understanding speech in background noise poses a challenge in daily communication, particularly among older adults. Although musical expertise has often been suggested as a contributor to speech intelligibility, the evidence is mostly correlational. In the present multisite study conducted in Germany and Switzerland, 156 healthy, normal-hearing older adults were randomly assigned to either a piano-playing group or a music listening/musical culture group. The speech reception threshold (SRT) was assessed using the International Matrix Test before and after a 6-month intervention. Bayesian multilevel modeling revealed an improvement in both groups over time under binaural conditions. Additionally, the SRT of the piano group decreased for stimuli presented to the left ear. A right-ear improvement occurred only in the German piano group. Furthermore, improvements were found predominantly in women. These findings are discussed in light of current neuroscientific theories on hemispheric lateralization and biological sex differences. The study indicates a positive transfer from musical training to speech processing, probably supported by enhanced auditory processing and improved general cognitive functions.
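Matrix tests such as the one used here estimate the SRT adaptively, raising or lowering the SNR depending on whether the listener repeats the sentence correctly. The sketch below illustrates that idea only; the simulated listener, its -7 dB SRT, the logistic slope, and the step size are all invented for illustration and are not the study's procedure.

```python
import math
import random

def simulate_matrix_trial(snr_db, srt_db=-7.0, slope=1.5):
    """Simulated listener with a logistic psychometric function centred
    on srt_db. All parameter values are hypothetical illustration values."""
    p_correct = 1.0 / (1.0 + math.exp(-slope * (snr_db - srt_db)))
    return random.random() < p_correct

def adaptive_srt(n_trials=40, start_snr=0.0, step_db=2.0):
    """1-up/1-down staircase converging on the 50%-correct SNR.

    Real matrix tests score individual words and shrink the step size;
    this is only a minimal sketch of the adaptive idea."""
    snr = start_snr
    track = []
    for _ in range(n_trials):
        track.append(snr)
        if simulate_matrix_trial(snr):
            snr -= step_db   # correct -> harder (lower SNR)
        else:
            snr += step_db   # incorrect -> easier (higher SNR)
    tail = track[len(track) // 2:]   # discard the initial approach phase
    return sum(tail) / len(tail)

random.seed(1)
print(round(adaptive_srt(), 1))  # converges near the simulated -7 dB SRT
```

The estimate is simply the mean SNR over the second half of the track; real implementations use maximum-likelihood fits or reversal averaging instead.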

2018 ◽  
Author(s):  
D Lesenfants ◽  
J Vanthornhout ◽  
E Verschueren ◽  
L Decruy ◽  
T Francart

Objective: To objectively measure the speech intelligibility of individual subjects from the EEG, based on cortical tracking of different representations of speech: low-level acoustic, higher-level discrete, or a combination; and to compare each model's prediction of the speech reception threshold (SRT) for each individual with the behaviorally measured SRT.
Methods: Nineteen participants listened to Flemish Matrix sentences presented at different signal-to-noise ratios (SNRs), corresponding to different levels of speech understanding. For each EEG frequency band (delta, theta, alpha, beta, or low gamma), a model was built to predict the EEG signal from various speech representations: envelope, spectrogram, phonemes, phonetic features, or a combination of phonetic features and spectrogram (FS). The same model was used for all subjects. The model predictions were then compared to the actual EEG of each subject at the different SNRs, and prediction accuracy as a function of SNR was used to predict the SRT.
Results: The model based on the FS speech representation and the theta EEG band yielded the best SRT predictions, with a difference between the behavioral and objective SRT below 1 dB for 53% and below 2 dB for 89% of the subjects.
Conclusion: A model including low- and higher-level speech features makes it possible to predict the speech reception threshold from the EEG of people listening to natural speech. It has potential applications in diagnostics of the auditory system.
Search Terms: cortical speech tracking, objective measure, speech intelligibility, auditory processing, speech representations.
Highlights: Objective EEG-based measure of speech intelligibility. Improved prediction of speech intelligibility by combining speech representations. Cortical tracking of speech in the delta EEG band monotonically increased with SNR. Cortical responses in the theta EEG band best predicted the speech reception threshold.
Disclosure: The authors report no disclosures relevant to the manuscript.
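Forward models of this kind are commonly fitted as time-lagged ridge regressions (temporal response functions) from a speech representation to the EEG, with the stimulus-EEG correlation serving as the tracking measure. The sketch below runs on synthetic data and is not the authors' pipeline; the lag count, ridge parameter, and toy signals are assumptions.

```python
import numpy as np

def lagged_design(stimulus, n_lags):
    """Time-lagged design matrix (samples x lags) for a forward model."""
    X = np.zeros((len(stimulus), n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = stimulus[:len(stimulus) - lag]
    return X

def fit_forward_model(stimulus, eeg, n_lags=16, lam=1.0):
    """Ridge-regularised mapping from stimulus to EEG; returns the weights."""
    X = lagged_design(stimulus, n_lags)
    return np.linalg.solve(X.T @ X + lam * np.eye(n_lags), X.T @ eeg)

def prediction_accuracy(stimulus, eeg, w, n_lags=16):
    """Pearson correlation between predicted and recorded EEG."""
    pred = lagged_design(stimulus, n_lags) @ w
    return np.corrcoef(pred, eeg)[0, 1]

# Toy demonstration: synthetic "EEG" = lagged envelope response + noise
rng = np.random.default_rng(0)
env = rng.standard_normal(2000)              # stand-in speech envelope
true_trf = np.exp(-np.arange(16) / 4.0)      # hypothetical impulse response
eeg = lagged_design(env, 16) @ true_trf + 0.5 * rng.standard_normal(2000)

w = fit_forward_model(env, eeg)
r = prediction_accuracy(env, eeg, w)
print(round(r, 2))  # high correlation, since the toy EEG is envelope-driven
```

In the study, this accuracy was computed per SNR and the resulting accuracy-vs-SNR curve was used to read off an objective SRT.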


Author(s):  
Gregg Recanzone

Age-related hearing loss affects over half of the elderly population, yet it remains poorly understood. Natural aging can cause the input to the brain from the cochlea to be progressively compromised in most individuals, but in many cases the cochlea has relatively normal sensitivity and yet people have an increasingly difficult time processing complex auditory stimuli. The two main deficits are in sound localization and temporal processing, which lead to poor speech perception. Animal models have shown that there are multiple changes in the brainstem, midbrain, and thalamic auditory areas as a function of age, giving rise to an alteration in the excitatory/inhibitory balance of these neurons. This alteration is manifest in the cerebral cortex as higher spontaneous and driven firing rates, as well as broader spatial and temporal tuning. These alterations in cortical responses could underlie the hearing and speech processing deficits that are common in the aged population.


2021 ◽  
Vol 11 (19) ◽  
pp. 8833
Author(s):  
Alfredo Raglio ◽  
Paola Baiardi ◽  
Giuseppe Vizzari ◽  
Marcello Imbriani ◽  
Mauro Castelli ◽  
...  

This study assessed the short-term effects of conventional (i.e., human-composed) and algorithmic music on relaxation level. It also investigated whether algorithmic compositions are perceived as music and are distinguishable from human-composed music. Three hundred twenty healthy volunteers were recruited and randomly allocated to two groups that listened to either their preferred music or algorithmic music. Another 179 healthy subjects were allocated to four listening groups that listened, respectively, to: music composed and performed by a human; music composed by a human and performed by a machine; music composed by a machine and performed by a human; and music composed and performed by a machine. In the first experiment, participants underwent one of the two music listening conditions (preferred or algorithmic music) in a comfortable state. In the second experiment, participants were asked to evaluate, through an online questionnaire, the musical excerpts they had listened to. A Visual Analogue Scale was used to evaluate relaxation levels before and after the music listening experience. Other outcomes were evaluated through the questionnaire responses. The relaxation level obtained with the music created by the algorithms is comparable to the one achieved with preferred music. Statistical analysis shows that the relaxation level is not affected by the composer, the performer, or the existence of musical training. On the other hand, the perceived effect is related to the performer. Finally, music composed by an algorithm and performed by a human is not distinguishable from that composed by a human.


2013 ◽  
Vol 24 (04) ◽  
pp. 307-328 ◽  
Author(s):  
Joshua G.W. Bernstein ◽  
Van Summers ◽  
Elena Grassi ◽  
Ken W. Grant

Background: Hearing-impaired (HI) individuals with similar ages and audiograms often demonstrate substantial differences in speech-reception performance in noise. Traditional models of speech intelligibility focus primarily on average performance for a given audiogram, failing to account for differences between listeners with similar audiograms. Improved prediction accuracy might be achieved by simulating differences in the distortion that speech may undergo when processed through an impaired ear. Although some attempts to model particular suprathreshold distortions can explain general speech-reception deficits not accounted for by audibility limitations, little has been done to model suprathreshold distortion and predict speech-reception performance for individual HI listeners. Auditory-processing models incorporating individualized measures of auditory distortion, along with audiometric thresholds, could provide a more complete understanding of speech-reception deficits by HI individuals. A computational model capable of predicting individual differences in speech-recognition performance would be a valuable tool in the development and evaluation of hearing-aid signal-processing algorithms for enhancing speech intelligibility. Purpose: This study investigated whether biologically inspired models simulating peripheral auditory processing for individual HI listeners produce more accurate predictions of speech-recognition performance than audiogram-based models. Research Design: Psychophysical data on spectral and temporal acuity were incorporated into individualized auditory-processing models consisting of three stages: a peripheral stage, customized to reflect individual audiograms and spectral and temporal acuity; a cortical stage, which extracts spectral and temporal modulations relevant to speech; and an evaluation stage, which predicts speech-recognition performance by comparing the modulation content of clean and noisy speech. 
To investigate the impact of different aspects of peripheral processing on speech predictions, individualized details (absolute thresholds, frequency selectivity, spectrotemporal modulation [STM] sensitivity, compression) were incorporated progressively, culminating in a model simulating level-dependent spectral resolution and dynamic-range compression. Study Sample: Psychophysical and speech-reception data from 11 HI and six normal-hearing listeners were used to develop the models. Data Collection and Analysis: Eleven individualized HI models were constructed and validated against psychophysical measures of threshold, frequency resolution, compression, and STM sensitivity. Speech-intelligibility predictions were compared with measured performance in stationary speech-shaped noise at signal-to-noise ratios (SNRs) of −6, −3, 0, and 3 dB. Prediction accuracy for the individualized HI models was compared to the traditional audibility-based Speech Intelligibility Index (SII). Results: Models incorporating individualized measures of STM sensitivity yielded significantly more accurate within-SNR predictions than the SII. Additional individualized characteristics (frequency selectivity, compression) improved the predictions only marginally. A nonlinear model including individualized level-dependent cochlear-filter bandwidths, dynamic-range compression, and STM sensitivity predicted performance more accurately than the SII but was no more accurate than a simpler linear model. Predictions of speech-recognition performance simultaneously across SNRs and individuals were also significantly better for some of the auditory-processing models than for the SII. Conclusions: A computational model simulating individualized suprathreshold auditory-processing abilities produced more accurate speech-intelligibility predictions than the audibility-based SII. Most of this advantage was realized by a linear model incorporating audiometric and STM-sensitivity information. 
Although more consistent with known physiological aspects of auditory processing, modeling level-dependent changes in frequency selectivity and gain did not result in more accurate predictions of speech-reception performance.
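The evaluation stage described above predicts performance by comparing the modulation content of clean and noisy speech. A toy version of that idea can be sketched as a correlation between modulation spectra; this is only a crude stand-in, not the authors' cortical model, and the envelope, bin count, and similarity metric are assumptions.

```python
import numpy as np

def modulation_spectrum(envelope, n_bins=32):
    """Low-frequency magnitude spectrum of an amplitude envelope."""
    spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
    return spec[:n_bins]

def modulation_similarity(clean_env, noisy_env):
    """Correlation between clean and noisy modulation spectra; a crude
    intelligibility proxy where higher similarity = better predicted
    speech recognition."""
    return float(np.corrcoef(modulation_spectrum(clean_env),
                             modulation_spectrum(noisy_env))[0, 1])

# Toy envelope with a 4 Hz "syllabic" modulation, sampled at 1 kHz
rng = np.random.default_rng(2)
t = np.arange(4000) / 1000.0
clean = 1.0 + 0.8 * np.sin(2 * np.pi * 4 * t)

scores = []
for snr_db in (10, 0, -10):
    # noise level set by a nominal SNR relative to unit signal power
    noise = rng.standard_normal(t.size) * 10 ** (-snr_db / 20)
    scores.append(modulation_similarity(clean, clean + noise))
print([round(s, 2) for s in scores])  # similarity falls as SNR falls
```

Real indices of this family (e.g., STMI-style metrics) compare spectrotemporal modulations across many acoustic-frequency channels rather than a single broadband envelope.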


2021 ◽  
Author(s):  
Fabian Schmidt ◽  
Ya-Ping Chen ◽  
Anne Keitel ◽  
Sebastian Rösch ◽  
Ronny Hannemann ◽  
...  

The most prominent acoustic features in speech are intensity modulations, represented by the amplitude envelope of speech. Synchronization of neural activity with these modulations is vital for speech comprehension. Because the acoustic modulation of speech is related to the production of syllables, investigations of neural speech tracking rarely distinguish between lower-level acoustic (envelope modulation) and higher-level linguistic (syllable rate) information. Here we manipulated speech intelligibility using noise-vocoded speech and investigated the spectral dynamics of neural speech processing, across two magnetoencephalography studies at cortical and subcortical levels of the auditory hierarchy. Overall, cortical regions mostly track the syllable rate, whereas subcortical regions track the acoustic envelope. Furthermore, with less intelligible speech, tracking of the modulation rate becomes more dominant. Our study highlights the importance of distinguishing between envelope modulation and syllable rate and provides novel possibilities to better understand differences between auditory processing and speech/language processing disorders.
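Tracking studies typically extract the amplitude envelope as the magnitude of the analytic (Hilbert) signal. A minimal sketch on a synthetic amplitude-modulated tone follows; the carrier and the 5 Hz "syllable-like" modulation rate are illustrative assumptions, not the study's stimuli.

```python
import numpy as np
from scipy.signal import hilbert

def amplitude_envelope(signal):
    """Amplitude envelope as the magnitude of the analytic signal."""
    return np.abs(hilbert(signal))

def dominant_modulation_hz(envelope, fs):
    """Frequency (Hz) of the strongest modulation component in the envelope."""
    spec = np.abs(np.fft.rfft(envelope - envelope.mean()))
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / fs)
    return float(freqs[np.argmax(spec)])

# Toy "speech": a 200 Hz carrier modulated at a 5 Hz syllable-like rate
fs = 1000
t = np.arange(2 * fs) / fs
signal = (1 + 0.9 * np.sin(2 * np.pi * 5 * t)) * np.sin(2 * np.pi * 200 * t)

env = amplitude_envelope(signal)
print(dominant_modulation_hz(env, fs))  # -> 5.0
```

On real speech the envelope is usually band-limited (e.g., below ~10 Hz) before computing tracking, which is where the envelope-rate versus syllable-rate distinction made above becomes critical.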


Author(s):  
Niken Setyaningrum ◽  
Andri Setyorini ◽  
Fachruddin Tri Fitrianta

Background: Hypertension is one of the most common diseases; it affects both men and women, adults and young people alike. Treatment of hypertension does not rely only on medication from a doctor or on dietary regulation; it is also important to keep the body relaxed. Laughter can help control blood pressure by reducing stress hormones and inducing a relaxed state.
Objective: The general objective of the study was to determine the effect of laughter therapy on decreasing blood pressure in the elderly at UPT Panti Wredha Budhi Dharma Yogyakarta.
Methods: The design used in this study was a pre-experimental, one-group pre-post test design with no control (comparison) group. The population was the elderly aged over 60 years at UPT Panti Wredha Budhi Dharma Yogyakarta. Total sampling was used, so the sample comprised all 55 elderly residents. Blood pressure before and after laughter therapy (ratio-scale data) was compared using a paired t-test.
Result: Laughter therapy had an effect on blood pressure in the elderly at UPT Panti Wredha Budhi Dharma Yogyakarta, with a significance value of 0.000 (p < 0.05).
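The paired pre/post comparison used here can be sketched with a standard paired t-test; the blood-pressure values below are invented for illustration (the study itself measured 55 residents).

```python
from scipy import stats

# Hypothetical pre/post systolic blood pressure (mmHg) for 10 participants;
# invented data, not the study's measurements.
pre  = [160, 155, 170, 148, 165, 158, 172, 150, 168, 162]
post = [150, 148, 160, 145, 155, 150, 165, 147, 158, 152]

# Paired t-test: tests whether the mean pre-post difference is zero
t_stat, p_value = stats.ttest_rel(pre, post)
print(t_stat > 0, p_value < 0.05)
```

A positive t statistic with p < 0.05 corresponds to the reported decrease in blood pressure after the therapy.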


2012 ◽  
Vol 2012 ◽  
pp. 1-7 ◽  
Author(s):  
Joseph P. Pillion

Deficits in central auditory processing may occur in a variety of clinical conditions including traumatic brain injury, neurodegenerative disease, auditory neuropathy/dyssynchrony syndrome, neurological disorders associated with aging, and aphasia. Deficits in central auditory processing of a more subtle nature have also been studied extensively in neurodevelopmental disorders in children with learning disabilities, ADD, and developmental language disorders. Illustrative cases are reviewed demonstrating the use of an audiological test battery in patients with auditory neuropathy/dyssynchrony syndrome, bilateral lesions to the inferior colliculi, and bilateral lesions to the temporal lobes. Electrophysiological tests of auditory function were utilized to define the locus of dysfunction at neural levels ranging from the auditory nerve, midbrain, and cortical levels.


2021 ◽  
pp. 025576142110272
Author(s):  
Oriana Incognito ◽  
Laura Scaccioni ◽  
Giuliana Pinto

A number of studies suggest a link between musical training and both specific and general cognitive abilities, but despite some positive results, there is disagreement about which abilities are improved. This study aims to investigate the effects of a music education program both on a domain-specific competence (meta-musical awareness), and on general domain competences, that is, cognitive abilities (logical-mathematical) and symbolic-linguistic abilities (notational). Twenty 4- to 6-year-old children participated in the research, divided into two groups (experimental and control), and the measures were administered at two different times, before and after a 6-month music program (for the experimental group) or a sports training program (for the control group). Children performed meta-musical awareness tasks, logical-mathematical tasks, and emergent-alphabetization tasks. Non-parametric statistics show that a music program significantly improves the development of notational skills and meta-musical awareness, but not the development of logical-mathematical skills. These results show that a musical program increases children’s meta-musical awareness, and their ability to acquire the notational ability involved in the invented writing of words and numbers. By contrast, it does not affect the development of logical skills. The results are discussed in terms of transfer of knowledge processes and of specific versus general domain effects of a musical program.
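With a sample of 20 split into two groups, nonparametric tests are the natural choice, as the abstract notes. A hedged sketch using a Wilcoxon signed-rank test on invented pre/post scores follows; the data, the scoring scale, and the specific test are assumptions, not the study's actual analysis.

```python
from scipy import stats

# Hypothetical notational-skill scores before/after the music program
# for 10 children; invented for illustration only.
pre_music  = [3, 4, 2, 5, 3, 4, 2, 3, 4, 3]
post_music = [5, 6, 4, 7, 5, 5, 4, 5, 6, 4]

# Wilcoxon signed-rank test: paired, no normality assumption
stat, p = stats.wilcoxon(pre_music, post_music)
print(p < 0.05)
```

A between-group comparison (experimental vs. control) would instead use a Mann-Whitney U test on the gain scores.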


2021 ◽  
Vol 11 (8) ◽  
pp. 982
Author(s):  
Ashley G. Flagge ◽  
Mary Ellen Neeley ◽  
Tara M. Davis ◽  
Victoria S. Henbest

Musical training has been shown to have a positive influence on a variety of skills, including auditory-based tasks and nonmusical cognitive and executive functioning tasks. However, because previous investigations have yielded mixed results regarding the relationship between musical training and these skills, the purpose of this study was to examine and compare the auditory processing skills of children who receive focused, daily musical training with those of children with more limited, generalized musical training. Sixteen typically developing children (second–fourth grade) from two different schools receiving different music curricula were assessed on measures of pitch discrimination, temporal sequencing, and prosodic awareness. The results indicated significantly better scores in pitch discrimination abilities for the children receiving daily, focused musical training (School 1) compared to students attending music class only once per week, utilizing a more generalized elementary school music curriculum (School 2). The findings suggest that more in-depth and frequent musical training may be associated with better pitch discrimination abilities in children. This finding is important given that the ability to discriminate pitch has been linked to improved phonological processing skills, an important skill for developing spoken language and literacy. Future investigations are needed to determine whether the null findings for temporal sequencing and prosodic awareness can be replicated or may be different for various grades and tasks for measuring these abilities.


2021 ◽  
Vol 180 ◽  
pp. 108129
Author(s):  
Jiazhong Zeng ◽  
Jianxin Peng ◽  
Xiaoming Zhou
