overt speech
Recently Published Documents


TOTAL DOCUMENTS: 89 (FIVE YEARS: 36)
H-INDEX: 22 (FIVE YEARS: 1)

2022 ◽  
Vol 13 (1) ◽  
Author(s):  
Timothée Proix ◽  
Jaime Delgado Saa ◽  
Andy Christen ◽  
Stephanie Martin ◽  
Brian N. Pasley ◽  
...  

Abstract Reconstructing intended speech from neural activity using brain-computer interfaces holds great promise for people with severe speech production deficits. While decoding overt speech has progressed, decoding imagined speech has met limited success, mainly because the associated neural signals are weak and variable compared to those of overt speech, and hence difficult for learning algorithms to decode. We obtained three electrocorticography datasets from 13 patients, with electrodes implanted for epilepsy evaluation, who performed overt and imagined speech production tasks. Based on recent theories of speech neural processing, we extracted consistent and specific neural features usable for future brain-computer interfaces, and assessed their performance in discriminating speech items in articulatory, phonetic, and vocalic representation spaces. While high-frequency activity provided the best signal for overt speech, both low- and higher-frequency power and local cross-frequency dynamics contributed to imagined speech decoding, in particular in phonetic and vocalic, i.e. perceptual, spaces. These findings show that low-frequency power and cross-frequency dynamics contain key information for imagined speech decoding.
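As a schematic illustration of the kind of pipeline this abstract describes (band-limited power features fed to a classifier), the sketch below extracts low-frequency and high-gamma power from simulated trials and discriminates two "speech items" with a nearest-class-mean rule. All signals, band edges, and names are illustrative assumptions; this is not the paper's actual data or decoding method.

```python
import numpy as np

def band_power(trials, fs, low, high):
    """Mean spectral power of each trial in the [low, high] Hz band."""
    freqs = np.fft.rfftfreq(trials.shape[-1], 1 / fs)
    spec = np.abs(np.fft.rfft(trials)) ** 2
    return spec[..., (freqs >= low) & (freqs <= high)].mean(axis=-1)

def features(trials, fs):
    """Two features per trial: low-frequency (1-8 Hz) and high-gamma
    (70-150 Hz) power, the frequency ranges highlighted in the abstract."""
    return np.stack([band_power(trials, fs, 1, 8),
                     band_power(trials, fs, 70, 150)], axis=1)

def classify(x, mu_a, mu_b):
    """Nearest-class-mean rule: 0 for item A, 1 for item B."""
    return 0 if np.linalg.norm(x - mu_a) < np.linalg.norm(x - mu_b) else 1

# Toy trials: two 'speech items' differing in low-frequency power
fs, n_trials, n_samples = 500, 60, 500
rng = np.random.default_rng(0)
t = np.arange(n_samples) / fs
item_a = 3 * np.sin(2 * np.pi * 4 * t) + rng.normal(size=(n_trials, n_samples))
item_b = 1 * np.sin(2 * np.pi * 4 * t) + rng.normal(size=(n_trials, n_samples))
Xa, Xb = features(item_a, fs), features(item_b, fs)

# Train on the first 30 trials of each item, test on the remaining 30
mu_a, mu_b = Xa[:30].mean(axis=0), Xb[:30].mean(axis=0)
preds = np.array([classify(x, mu_a, mu_b)
                  for x in np.vstack([Xa[30:], Xb[30:]])])
truth = np.r_[np.zeros(30), np.ones(30)]
accuracy = (preds == truth).mean()
```

In this toy setup the two items differ only in low-frequency power, so the low-band feature alone separates them; in real imagined-speech data the features are far weaker, which is exactly the difficulty the abstract describes.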


2021 ◽  
Vol 22 (1) ◽  
Author(s):  
David Hassanein Berro ◽  
Jean-Michel Lemée ◽  
Louis-Marie Leiber ◽  
Evelyne Emery ◽  
Philippe Menei ◽  
...  

Abstract
Background Pre-surgical mapping of language using functional MRI aims principally to determine the dominant hemisphere. This mapping is currently performed using a covert linguistic task, so as to avoid motion artefacts that could bias the results. However, an overt task is closer to natural speech, allows the performance of the task to be monitored, and may be easier to perform for stressed patients and children. On the other hand, an overt task, by activating phonological areas in both hemispheres and areas involved in pitch and prosody control in the non-dominant hemisphere, is expected to alter the determination of the dominant hemisphere by the calculation of the lateralization index (LI).
Objective Here, we analyzed the changes in the LI and the interactions between cognitive networks during covert and overt speech tasks.
Methods Thirty-three volunteers participated in this study; all but four were right-handed. They performed three functional sessions consisting of (1) covert and (2) overt generation of a short sentence semantically linked to an audibly presented word, from which we estimated the “Covert” and “Overt” contrasts, and (3) a resting-state session. The resting-state session was submitted to spatial independent component analysis to identify the language network at rest (LANG), the cingulo-opercular network (CO), and the ventral attention network (VAN). The LI was calculated using the bootstrapping method.
Results The LI of the LANG was the most left-lateralized (0.66 ± 0.38). The LI shifted from a moderate leftward lateralization for the Covert contrast (0.32 ± 0.38) to a rightward lateralization for the Overt contrast (−0.13 ± 0.30); the two LIs differed significantly from each other. This rightward shift was due to the recruitment of right-hemisphere temporal areas together with the nodes of the CO.
Conclusion Analyzing overt speech with fMRI improved physiological knowledge of the coordinated activity of intrinsic connectivity networks. However, the rightward shift of the LI in this condition did not provide reliable information on hemispheric language dominance. An overt linguistic task therefore cannot be recommended for the clinical determination of hemispheric dominance for language.
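The bootstrapped lateralization index mentioned in the Methods can be sketched as follows. LI = (L − R)/(L + R) over left- and right-hemisphere activation is the standard formulation; the resampling scheme and all numbers below are illustrative assumptions, not the study's exact procedure.

```python
import numpy as np

def lateralization_index(left, right):
    """LI = (L - R) / (L + R); +1 is fully left-, -1 fully right-lateralized."""
    return (left - right) / (left + right)

def bootstrap_li(left_act, right_act, n_boot=2000, seed=0):
    """Resample paired activation measures with replacement and return the
    mean LI with a 95% percentile confidence interval."""
    rng = np.random.default_rng(seed)
    n = len(left_act)
    lis = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.integers(0, n, n)  # bootstrap resample of measure indices
        lis[b] = lateralization_index(left_act[idx].mean(),
                                      right_act[idx].mean())
    return lis.mean(), np.percentile(lis, [2.5, 97.5])

# Toy example: clearly left-lateralized activation measures
rng = np.random.default_rng(1)
left = rng.normal(1.0, 0.2, 200)
right = rng.normal(0.3, 0.2, 200)
mean_li, ci = bootstrap_li(left, right)
```

A confidence interval that excludes zero, as in this toy case, is what licenses calling a contrast "left-lateralized"; the study's negative Overt LI corresponds to the interval falling on the other side.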


2021 ◽  
Vol 40 ◽  
pp. 93-111
Author(s):  
Izabela Sekścińska

The article summarizes the current state of understanding of the concept of inner speech and evaluates the role of internal language in the speech generation process. First, the available definitions of inner speech are presented and its features are briefly characterised. Subsequently, the inner voice is compared to overt speech and the main differences between these two planes of speech, the internal and the external, are outlined. Since the aim of the paper is to show the role of inner speech in overt speech production, a speech generation model that coalesces Levelt's (1993) assumptions with the stratificational approach to language is presented. Different stages of linguistic processing are described and the impact of internal languaging on linguistic output is discussed. It is claimed that inner speech plays a threefold role in overt speech production: it (1) provides an internal draft for external speech, (2) is vital to the self-monitoring system, and (3) supports working memory. Any impairment in the functioning of inner speech may thus lead to speech errors and slip-of-the-tongue phenomena.


2021 ◽  
Author(s):  
Ladislas Nalborczyk ◽  
Ursula Debarnot ◽  
Marieke Longcamp ◽  
Aymeric Guillot ◽  
F.-Xavier Alario

Covert speech is accompanied by a subjective multisensory experience with auditory and kinaesthetic components. An influential hypothesis states that these sensory percepts result from a simulation of the corresponding motor action that relies on the same internal models recruited for the control of overt speech. This simulationist view raises the question of how it is possible to imagine speech without executing it. In this Perspective, we discuss the possible role(s) played by motor inhibition during covert speech production. We suggest that considering covert speech as an inhibited form of overt speech maps naturally onto the purported progressive internalisation of overt speech during childhood. However, we argue that the role of motor inhibition may differ widely across different forms of covert speech (e.g., condensed vs. expanded covert speech) and that considering this variety helps reconcile seemingly contradictory findings from the neuroimaging literature.


Author(s):  
Diego L Lorca-Puls ◽  
Andrea Gajardo-Vidal ◽  
Ploras Team ◽  
Marion Oberhuber ◽  
Susan Prejawa ◽  
...  

Abstract By combining functional neuroimaging and a wide range of tasks that place varying demands on speech production, Lorca-Puls et al. reveal that right cerebellar Crus I and right pars opercularis are likely to play a particularly important role in supporting successful speech production following damage to Broca’s area. Broca’s area, in the posterior half of the left inferior frontal gyrus, has traditionally been considered an important node in the speech production network. Nevertheless, recovery of speech production has been reported, to varying degrees, within a few months of damage to Broca’s area. Importantly, contemporary evidence suggests that, within Broca’s area, the posterior part (i.e. pars opercularis) plays a more prominent role in speech production than the anterior part (i.e. pars triangularis). In the current study, we therefore investigated the brain activation patterns that underlie accurate speech production following stroke damage to the opercular part of Broca’s area. By combining functional MRI and 13 tasks that place varying demands on speech production, brain activation was compared in (i) seven patients of interest with damage to the opercular part of Broca’s area, (ii) 55 neurologically intact controls and (iii) 28 patient controls with left-hemisphere damage that spared Broca’s area. When producing accurate overt speech responses, the patients with damage to the left pars opercularis activated a substantial portion of the normal bilaterally distributed system. Within this system, there was a lesion-site-dependent effect in a specific part of right cerebellar Crus I, where activation was significantly higher in the patients with damage to the left pars opercularis than in both neurologically intact and patient controls.
In addition, activation in the right pars opercularis was significantly higher in the patients with damage to the left pars opercularis relative to neurologically intact controls, but not patient controls (after adjusting for differences in lesion size). By further examining how right Crus I and right pars opercularis responded across a range of conditions in the neurologically intact controls, we suggest that these regions play distinct roles in domain-general cognitive control. Finally, we show that enhanced activation in the right pars opercularis cannot be explained by release from an inhibitory relationship with the left pars opercularis (i.e. disinhibition), because right pars opercularis activation was positively related to left pars opercularis activation in neurologically intact controls. Our findings motivate and guide future studies to investigate (a) how exactly right Crus I and right pars opercularis support accurate speech production after damage to the opercular part of Broca’s area and (b) whether non-invasive neurostimulation of one or both of these regions boosts speech production recovery after such damage.


2021 ◽  
Vol 1 ◽  
pp. 15
Author(s):  
Dorothy V. M. Bishop ◽  
Clara R. Grabitz ◽  
Sophie C. Harte ◽  
Kate E. Watkins ◽  
Miho Sasaki ◽  
...  

Background: Lateralised language processing is a well-established finding in monolinguals. In bilinguals, studies using fMRI have typically found substantial regional overlap between the two languages, though results may be influenced by factors such as proficiency, age of acquisition and exposure to the second language. Few studies have focused specifically on individual differences in brain lateralisation, and those that have suggest that reduced lateralisation may characterise representation of the second language (L2) in some bilingual individuals.
Methods: In Study 1, we used functional transcranial Doppler sonography (FTCD) to measure cerebral lateralisation in both languages in high-proficiency bilinguals who varied in age of acquisition (AoA) of L2. They had German (N = 14) or French (N = 10) as their first language (L1) and English as their second language. FTCD was used to measure task-dependent blood flow velocity changes in the left and right middle cerebral arteries during phonological word generation cued by single letters. Language history measures and handedness were assessed through self-report. Study 2 followed a similar format with 25 Japanese (L1)/English (L2) bilinguals, with proficiency in the second language ranging from basic to advanced, using phonological and semantic word generation tasks with overt speech production.
Results: In Study 1, participants were significantly left-lateralised for both L1 and L2, with a high correlation (r = .70) in the size of laterality indices for L1 and L2. In Study 2, there was again good agreement between LIs for the two languages (r = .77 for both word generation tasks). There was no evidence in either study of an effect of age of acquisition, though the sample sizes were too small to detect any but large effects.
Conclusion: In proficient bilinguals, there is strong concordance for cerebral lateralisation of the first and second language as assessed by a verbal fluency task.


2021 ◽  
Vol 15 ◽  
Author(s):  
Omid Abbasi ◽  
Nadine Steingräber ◽  
Joachim Gross

Recording brain activity during speech production using magnetoencephalography (MEG) can help us to understand the dynamics of speech production. However, these measurements are challenging because of artifacts induced by several sources, such as facial muscle activity, lower-jaw movements and head movements. Here, we aimed to characterize speech-related artifacts, focusing on head movements, and subsequently present an approach to remove these artifacts from MEG data. We recorded MEG from 11 healthy participants while they pronounced various syllables at different loudness levels. Head positions and orientations were extracted during speech production to investigate their role in MEG distortions. Finally, we present an artifact rejection approach that combines regression analysis and signal space projection (SSP) to remove the induced artifacts from the MEG data. Our results show that louder speech leads to stronger head movements and stronger MEG distortions. Our proposed artifact rejection approach successfully removed the speech-related artifacts and retrieved the underlying neurophysiological signals. Since the presented approach removes artifacts arising from head movements induced by overt speech, it will facilitate research addressing the neural basis of speech production with MEG.
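A minimal sketch of the combined regression-plus-projection idea, on synthetic data: a head-movement time course is regressed out of each channel, and an SSP-like step projects out the dominant artifact topography estimated by SVD. Function names and data are hypothetical; real MEG pipelines (e.g. in MNE-Python) involve considerably more machinery than this.

```python
import numpy as np

def regress_out(data, regressors):
    """Remove the component of each channel explained by movement regressors.
    data: (n_channels, n_times); regressors: (n_reg, n_times)."""
    X = regressors.T                               # (n_times, n_reg)
    beta, *_ = np.linalg.lstsq(X, data.T, rcond=None)
    return data - (X @ beta).T

def ssp_project(data, artifact_data, n_proj=1):
    """SSP-like step: take the dominant spatial pattern(s) of artifact-rich
    data and project them out of the recording."""
    U, _, _ = np.linalg.svd(artifact_data, full_matrices=False)
    P = np.eye(data.shape[0]) - U[:, :n_proj] @ U[:, :n_proj].T
    return P @ data

# Toy data: one head-movement time course mixed into 10 channels
rng = np.random.default_rng(0)
movement = rng.normal(size=(1, 1000))              # head-position regressor
topo = rng.normal(size=(10, 1))                    # artifact topography
clean = 0.1 * rng.normal(size=(10, 1000))          # 'neural' signal
meg = clean + topo @ movement

denoised = regress_out(meg, movement)
projected = ssp_project(meg, topo @ movement)
```

Regression exploits the recorded head-position time courses (temporal information), while SSP removes a fixed spatial pattern; combining the two, as the abstract describes, attacks the artifact from both directions.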


2021 ◽  
Author(s):  
Amie Fairs ◽  
Kristof Strijkers

The closure of cognitive psychology labs around the world due to the COVID-19 pandemic has prevented in-person testing. This has posed a particular challenge for speech production researchers, as before the pandemic there were no studies demonstrating that reliable overt speech production data could be collected via the internet. Here, we present evidence that both accurate and reliable overt articulation data can be collected in internet-based speech production experiments. We tested 100 participants in a picture naming paradigm in which we manipulated the word frequency and phonotactic frequency of the picture names, and compared our results to a lab-based study that used the same materials and design. We found a significant word frequency effect but no phonotactic frequency effect, fully replicating the lab-based results. Effect sizes were similar between experiments, but latencies were significantly longer in the internet-collected data. We found no evidence that internet upload or download speed affected either naming latencies or errors. In addition, we carried out a permutation-style analysis, on the basis of which we recommend a minimum sample size of 40 participants for online production paradigms. In sum, our study demonstrates that internet-based testing of speech production is a feasible and promising endeavour, with fewer challenges than many researchers (anecdotally) assumed.
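A permutation-style sample-size analysis could, in spirit, look like the following sketch: repeatedly subsample participants, run a sign-flip test on the per-participant frequency effect, and record how often it reaches significance at each sample size. The data and the specific test are illustrative assumptions, not the authors' exact analysis.

```python
import numpy as np

def power_at_n(effect_per_participant, n_sub, n_subsamples=200,
               n_flips=1000, alpha=0.05, seed=0):
    """Fraction of random participant subsamples of size n_sub in which a
    sign-flip permutation test on the per-participant effect is significant."""
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_subsamples):
        sample = rng.choice(effect_per_participant, size=n_sub, replace=False)
        obs = sample.mean()
        flips = rng.choice([-1.0, 1.0], size=(n_flips, n_sub))
        null = (flips * sample).mean(axis=1)   # sign-flip null distribution
        p = (np.abs(null) >= abs(obs)).mean()
        hits += p < alpha
    return hits / n_subsamples

# Toy data: 100 participants; low-frequency picture names ~30 ms slower
rng = np.random.default_rng(1)
effect = rng.normal(30, 20, 100)   # per-participant frequency effect (ms)
power_40 = power_at_n(effect, 40, n_subsamples=100)
```

Sweeping `n_sub` and picking the smallest value at which the detection rate stays high is one way such an analysis can arrive at a minimum-sample-size recommendation.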


2021 ◽  
Author(s):  
Brielle C Stark ◽  
Julianne M Alexander

Purpose: While behavioral aphasia therapy is beneficial (Brady et al., 2016), we do not fully understand the factors that predict therapy response or that contribute to extra-linguistic aspects of living with aphasia (e.g., psychosocial well-being). The purpose of this Viewpoint is to postulate that inner speech – the ability to talk to oneself in one’s head – may be an important such factor. However, prior work evaluating inner speech in aphasia has been limited in scope. Here, we draw on interdisciplinary evidence to present a more comprehensive view of inner speech and propose how evaluating inner speech along multiple dimensions may be meaningful for understanding living with aphasia and aphasia recovery.
Methods: We give an interdisciplinary overview of inner speech as it relates to aphasia.
Results: Research with persons with aphasia shows that inner speech can be relatively spared in comparison to overt speech. However, this research has taken a narrow view of inner speech, defining it as a covert ‘voice’ drawn upon during experimental tasks such as object naming, rhyme decisions, or tongue twisters. Cross-disciplinary research evaluating inner speech has identified its multidimensionality (specifically, dimensions of intentionality, condensation, and dialogality). Inner speech evaluated across these dimensions in neurotypical populations has been shown to relate to personal factors like self-awareness; to retain phonetic features but also resemble ‘thinking in pure pictures’; and to be both monologic and dialogic.
Conclusions: Quantifying multidimensional inner speech in aphasia will enable future work elaborating on factors related to the extra-linguistic and linguistic processes of recovery, as well as to living well with aphasia.


2021 ◽  
Author(s):  
Timothée Proix ◽  
Jaime Delgado Saa ◽  
Andy Christen ◽  
Stephanie Martin ◽  
Brian N. Pasley ◽  
...  

Summary Reconstructing intended speech from neural activity using brain-computer interfaces (BCIs) holds great promise for people with severe speech production deficits. While decoding overt speech has progressed, decoding imagined speech has met limited success, mainly because the associated neural signals are weak and variable, and hence difficult for learning algorithms to decode. Using three electrocorticography datasets totalling 1444 electrodes from 13 patients who performed overt and imagined speech production tasks, and based on recent theories of speech neural processing, we extracted consistent and specific neural features usable for future BCIs, and assessed their performance in discriminating speech items in articulatory, phonetic, vocalic, and semantic representation spaces. While high-frequency activity provided the best signal for overt speech, both low- and higher-frequency power and local cross-frequency dynamics contributed to successful imagined speech decoding, in particular in phonetic and vocalic, i.e. perceptual, spaces. These findings demonstrate that low-frequency power and cross-frequency dynamics contain key information for imagined speech decoding, and that exploring perceptual spaces offers a promising avenue for future imagined speech BCIs.

