speech motor
Recently Published Documents

TOTAL DOCUMENTS: 515 (five years: 148)
H-INDEX: 46 (five years: 6)

2022 ◽  
pp. 174702182210756
Author(s):  
Matthias K. Franken ◽  
Robert J Hartsuiker ◽  
Petter Johansson ◽  
Lars Hall ◽  
Andreas Lind

Sensory feedback plays an important role in speech motor control. One of the main sources of evidence for this comes from studies in which online auditory feedback is perturbed during ongoing speech. For feedback to be useful in motor control, it is crucial to distinguish between sensory feedback and externally generated sensory events; this distinction is called source monitoring. Previous altered-feedback studies have taken non-conscious source monitoring for granted, as automatic responses to altered sensory feedback imply that the feedback changes are processed as self-caused. However, the role of conscious source monitoring is unclear. The current study investigated whether conscious source monitoring modulates responses to unexpected pitch changes in auditory feedback. During the first block, some participants spontaneously attributed the pitch shifts to themselves (self-blamers), while others attributed them to an external source (other-blamers). Before block 2, all participants were informed that the pitch shifts were experimentally induced. The self-blamers then showed a reduction in response magnitude in block 2 compared with block 1, while the other-blamers did not. This suggests that conscious source monitoring modulates responses to altered auditory feedback, such that consciously ascribing feedback to oneself leads to larger compensation responses. These results can be accounted for within the dominant comparator framework, where conscious source monitoring could modulate the gain on sensory feedback. Alternatively, they follow naturally from an inferential framework, where conscious knowledge may bias the priors in a Bayesian process that determines the most likely source of a sensory event.
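The inferential account sketched in this abstract can be illustrated as a two-hypothesis Bayesian update, where being told that the shift is experimentally induced lowers the prior on a self-generated source. All priors and likelihoods below are illustrative toy values, not fitted quantities from the study.

```python
# Bayesian source attribution for a perturbed feedback signal.
# Two hypotheses: the pitch shift is self-caused or externally caused.
# All numbers are illustrative.

def posterior_self(prior_self, like_self, like_external):
    """P(self | shift) via Bayes' rule over the two candidate sources."""
    prior_external = 1.0 - prior_self
    evidence = prior_self * like_self + prior_external * like_external
    return prior_self * like_self / evidence

# Naive participant: strong prior that auditory feedback is self-generated.
naive = posterior_self(prior_self=0.9, like_self=0.2, like_external=0.6)

# After being told the shift is experimentally induced, the prior on
# "self" is biased downward, lowering the posterior attribution to self.
informed = posterior_self(prior_self=0.4, like_self=0.2, like_external=0.6)
```

On these toy numbers the posterior probability of a self-generated source drops from 0.75 to about 0.18 once conscious knowledge biases the prior, mirroring how informed self-blamers might come to treat the shift as external.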


2021 ◽  
Author(s):  
D.M. Ogorodnov ◽  
S.A. Evdokimov ◽  
Yu.D. Kropotov

The Methodology of Comprehensive Music and Vocal Education (CMVE) is a powerful pedagogical instrument that helps a person improve their voice and musicality. Several cortical zones, auditory, somatosensory, motor (mainly through the involvement of the hands and the speech motor apparatus), and visual, are actively engaged; this contributes to a change in the pattern of dominant centres in the cortex and stimulates and develops cognitive functions such as attention, speech, memory, and praxis. Neuroplasticity is closely related to music education, as indicated, for example, by the work of G. Schlaug, which explains some of the sensorimotor and cognitive improvements associated with music education. This allows us to assume, and to test, effects of neuroplasticity when working according to the CMVE method, which likewise engages different modalities. To investigate event-related potentials, the authors use a two-stimulus selective attention test (the VCPT Go/NoGo test). Key words: EEG, ERP, VCPT task, musical-vocal education by D.E. Ogorodnov.


2021 ◽  
pp. 108135
Author(s):  
Helen E. Nuttall ◽  
Gwijde Maegherman ◽  
Joseph T. Devlin ◽  
Patti Adank

2021 ◽  
Vol 11 (11) ◽  
pp. 1540
Author(s):  
Jennifer U. Soriano ◽  
Abby Olivieri ◽  
Katherine C. Hustad

The Intelligibility in Context Scale (ICS) is a widely used, efficient tool for describing a child’s speech intelligibility. Few studies have explored the relationship between ICS scores and transcription intelligibility scores, which are the gold standard for clinical measurement. This study examined how well ICS composite scores predicted transcription intelligibility scores among children with cerebral palsy (CP), how well individual questions from the ICS differentially predicted transcription intelligibility scores, and how well the ICS composite scores differentiated between children with and without speech motor impairment. Parents of 48 children with CP, who were approximately 13 years of age, completed the ICS. Ninety-six adult naïve listeners provided orthographic transcriptions of children’s speech. Transcription intelligibility scores were regressed on ICS composite scores and individual item scores. Dysarthria status was regressed on ICS composite scores. Results indicated that ICS composite scores were moderately strong predictors of transcription intelligibility scores. One individual ICS item differentially predicted transcription intelligibility scores, and dysarthria severity influenced how well ICS composite scores differentiated between children with and without speech motor impairment. Findings suggest that the ICS has potential clinical utility for children with CP, especially when used with other objective measures of speech intelligibility.


2021 ◽  
Author(s):  
Ladislas Nalborczyk ◽  
Marieke Longcamp ◽  
Mireille Bonnard ◽  
Laure Spieser ◽  
F.-Xavier Alario

Humans have the ability to mentally examine speech. This covert form of speech production is often accompanied by sensory (e.g., auditory) percepts. However, the cognitive and neural mechanisms that generate these percepts are still debated. According to a prominent proposal, inner speech has at least two distinct phenomenological components: inner speaking and inner hearing. Here we use transcranial magnetic stimulation to test whether these two phenomenologically distinct processes are supported by distinct cerebral mechanisms. We hypothesise that inner speaking relies more strongly on an online motor-to-sensory simulation that constructs a multisensory experience, whereas inner hearing relies more strongly on a memory-retrieval process, where the multisensory experience is reconstructed from stored motor-to-sensory associations. We predict that the speech motor system will be involved more strongly during inner speaking than inner hearing. This will be revealed by modulations of TMS evoked responses at muscle level following cortical stimulation of the lip primary motor cortex.


Author(s):  
Sousan Salehi ◽  
Saman Maroufizadeh ◽  
Zahra Soleymani ◽  
Seyedeh Zeinab Beheshti ◽  
Sheida Bavandi

Introduction: Language processing (especially phonology) and speech motor control are disordered in stuttering. However, it is unclear how they are related based on models of speech processing. The present study aimed to examine non-word repetition, rhyme and alliteration judgment, and speech motor control, and to investigate their relationship in children who stutter (CWS) compared to typically developing children (TDC). Materials and Methods: Twenty-eight CWS (mean age = 5.46 years) and 28 TDC peers (mean age = 5.52 years) participated in this study. Phonological processing, according to the speech processing model, is divided into phonological input and output. Phonological input, phonological output, and speech motor control were assessed by rhyme and alliteration tasks, accurate phonological production during a non-word repetition task, and the Robbins-Klee oral speech motor protocol, respectively. The Pearson correlation coefficient, independent t-test, and Cohen's d were used for data analysis. Results: Both non-word repetition and speech motor skills differed significantly between CWS and TDC (P<0.001), but rhyme and alliteration judgment were similar across groups (P>0.001). Phonological processing and speech motor control were not significantly correlated (P>0.001). Conclusion: Phonological processing (output), a level before articulation, and speech motor control are not correlated, but both are disordered in preschool CWS. Additionally, phonological processing (input) is similar in CWS and TDC; that is, phonological input is not affected by stuttering in CWS.
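The effect-size statistic named in the methods, Cohen's d with a pooled standard deviation, can be sketched as follows. The group scores are hypothetical stand-ins for a measure such as non-word repetition accuracy.

```python
# Sketch of the effect-size step: Cohen's d for a between-group
# difference (e.g., non-word repetition in CWS vs. TDC).
# All scores are synthetic.
from math import sqrt
from statistics import mean, stdev

def cohens_d(group_a, group_b):
    """Cohen's d using the pooled standard deviation of both groups."""
    na, nb = len(group_a), len(group_b)
    pooled_var = ((na - 1) * stdev(group_a) ** 2 +
                  (nb - 1) * stdev(group_b) ** 2) / (na + nb - 2)
    return (mean(group_a) - mean(group_b)) / sqrt(pooled_var)

cws = [12, 14, 11, 13, 10, 12]   # hypothetical task scores
tdc = [16, 18, 17, 15, 19, 17]

d = cohens_d(tdc, cws)
```

By convention, |d| around 0.8 or above is read as a large effect; a group difference reported as significant at P < 0.001 in a sample of this size would typically be accompanied by a large d.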


Author(s):  
Hannah P. Rowe ◽  
Kaila L. Stipancic ◽  
Adam C. Lammert ◽  
Jordan R. Green

Purpose: This study investigated the criterion (analytical and clinical) and construct (divergent) validity of a novel, acoustic-based framework composed of five key components of motor control: Coordination, Consistency, Speed, Precision, and Rate.
Method: Acoustic and kinematic analyses were performed on audio recordings from 22 subjects with amyotrophic lateral sclerosis during a sequential motion rate task. Perceptual analyses were completed by two licensed speech-language pathologists, who rated each subject's speech on the five framework components and their overall severity. Analytical and clinical validity were assessed by comparing performance on the acoustic features to their kinematic correlates and to clinician ratings of the five components, respectively. Divergent validity of the acoustic-based framework was then assessed by comparing performance on each pair of acoustic features to determine whether the features represent distinct articulatory constructs. Bivariate correlations and partial correlations with severity as a covariate were conducted for each comparison.
Results: Results revealed moderate-to-strong analytical validity for every acoustic feature, both with and without controlling for severity, and moderate-to-strong clinical validity for all acoustic features except Coordination, without controlling for severity. When severity was included as a covariate, the strong associations for Speed and Precision became weak. Divergent validity was supported by weak-to-moderate pairwise associations between all acoustic features except Speed (second-formant [F2] slope of consonant transition) and Precision (between-consonant variability in F2 slope).
Conclusions: This study demonstrated that the acoustic-based framework has potential as an objective, valid, and clinically useful tool for profiling articulatory deficits in individuals with speech motor disorders. The findings also suggest that compared to clinician ratings, instrumental measures are more sensitive to subtle differences in articulatory function. With further research, this framework could provide more accurate and reliable characterizations of articulatory impairment, which may eventually increase clinical confidence in the diagnosis and treatment of patients with different articulatory phenotypes.


2021 ◽  
Author(s):  
Simone Gastaldon ◽  
Pierpaolo Busan ◽  
Giorgio Arcara ◽  
Francesca Peressotti

It is well attested that people predict forthcoming information during language comprehension, and the literature offers different proposals on how this ability could be implemented. Here, we tested the hypothesis that language production mechanisms play a role in such predictive processing. To this aim, we studied two electroencephalographic correlates of predictability during speech comprehension, the pre-target alpha-beta (8-30 Hz) power decrease and the post-target N400 event-related potential (ERP) effect, in a population with impaired speech-motor control, i.e., adults who stutter (AWS), compared to typically fluent adults (TFA). Participants listened to sentences that either did or did not constrain towards a target word, thus allowing or preventing prediction. We analyzed time-frequency modulations in a silent interval preceding the target and ERPs at the presentation of the target. Results showed that, compared to TFA, AWS displayed: (i) a widespread, bilateral reduction of the power decrease in posterior temporal and parietal regions, and a power increase in anterior regions, especially in the left hemisphere (high vs. low constraining); and (ii) a reduced N400 effect (non-predictable vs. predictable). The results suggest a reduced efficiency in generating predictions in AWS with respect to TFA. Additionally, the magnitude of the N400 effect in AWS correlated with the alpha power change in the right pre-motor and supplementary motor cortex, a key node in the dysfunctional network in stuttering. Overall, the results support the idea that processes and neural structures prominently devoted to speech planning and execution support prediction during language comprehension.
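The band-power measure behind an alpha-beta (8-30 Hz) "power decrease" can be sketched from first principles: compute power in the band via a plain DFT and compare two intervals. The signal here is a synthetic 10 Hz oscillation whose amplitude halves, standing in for a pre-target power decrease; real EEG pipelines use multitaper or wavelet estimates instead.

```python
# Sketch: power in a frequency band from a plain discrete Fourier
# transform, applied to a synthetic alpha-band signal.
import cmath
import math

def band_power(signal, fs, lo, hi):
    """Sum of DFT power over bins whose frequency lies in [lo, hi] Hz."""
    n = len(signal)
    total = 0.0
    for k in range(n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            coef = sum(signal[t] * cmath.exp(-2j * math.pi * k * t / n)
                       for t in range(n))
            total += abs(coef) ** 2 / n
    return total

fs = 250                              # Hz, a common EEG sampling rate
t = [i / fs for i in range(fs)]       # 1 s of samples
strong = [math.sin(2 * math.pi * 10 * ti) for ti in t]        # full alpha
weak = [0.5 * math.sin(2 * math.pi * 10 * ti) for ti in t]    # attenuated

decrease = band_power(weak, fs, 8, 30) / band_power(strong, fs, 8, 30)
```

Halving the amplitude quarters the band power (ratio 0.25), which is why power, not amplitude, is the conventional unit for such decreases.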


PLoS ONE ◽  
2021 ◽  
Vol 16 (10) ◽  
pp. e0258747
Author(s):  
Abigail R. Bradshaw ◽  
Carolyn McGettigan

Joint speech behaviours where speakers produce speech in unison are found in a variety of everyday settings, and have clinical relevance as a temporary fluency-enhancing technique for people who stutter. It is currently unknown whether such synchronisation of speech timing among two speakers is also accompanied by alignment in their vocal characteristics, for example in acoustic measures such as pitch. The current study investigated this by testing whether convergence in voice fundamental frequency (F0) between speakers could be demonstrated during synchronous speech. Sixty participants across two online experiments were audio recorded whilst reading a series of sentences, first on their own, and then in synchrony with another speaker (the accompanist) in a number of between-subject conditions. Experiment 1 demonstrated significant convergence in participants’ F0 to a pre-recorded accompanist voice, in the form of both upward (high F0 accompanist condition) and downward (low and extra-low F0 accompanist conditions) changes in F0. Experiment 2 demonstrated that such convergence was not seen during a visual synchronous speech condition, in which participants spoke in synchrony with silent video recordings of the accompanist. An audiovisual condition in which participants were able to both see and hear the accompanist in pre-recorded videos did not result in greater convergence in F0 compared to synchronisation with the pre-recorded voice alone. These findings suggest the need for models of speech motor control to incorporate interactions between self- and other-speech feedback during speech production, and suggest a novel hypothesis for the mechanisms underlying the fluency-enhancing effects of synchronous speech in people who stutter.
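One simple way to quantify the F0 convergence described here is to compare the distance between a speaker's mean F0 and the accompanist's F0 before versus during synchronous speech. The function and the Hz values below are illustrative, not the study's actual measure or data.

```python
# Sketch of an F0 convergence metric: did the speaker's mean F0 move
# toward the accompanist's between the solo and synchronous readings?
# All values (in Hz) are illustrative.
from statistics import mean

def convergence(solo_f0, sync_f0, accompanist_f0):
    """Positive result = speaker moved toward the accompanist's F0."""
    before = abs(mean(solo_f0) - accompanist_f0)
    during = abs(mean(sync_f0) - accompanist_f0)
    return before - during

# Upward convergence toward a high-F0 accompanist, as in Experiment 1's
# high-F0 condition.
shift = convergence(solo_f0=[195.0, 200.0, 205.0],
                    sync_f0=[210.0, 214.0, 212.0],
                    accompanist_f0=240.0)
```

Because the metric is a difference of absolute distances, it captures both upward and downward convergence with the same sign convention, and a value near zero would correspond to the null result in the visual-only condition.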

