Correcting MEG Artifacts Caused by Overt Speech

2021 ◽  
Vol 15 ◽  
Author(s):  
Omid Abbasi ◽  
Nadine Steingräber ◽  
Joachim Gross

Recording brain activity during speech production with magnetoencephalography (MEG) can help us understand the dynamics of speech production. However, these measurements are challenging because of artifacts induced by several sources, such as facial muscle activity and movements of the lower jaw and head. Here, we aimed to characterize speech-related artifacts, focusing on head movements, and to present an approach for removing these artifacts from MEG data. We recorded MEG from 11 healthy participants while they pronounced various syllables at different loudness levels. Head positions and orientations were extracted during speech production to investigate their role in MEG distortions. Finally, we present an artifact rejection approach that combines regression analysis and signal-space projection (SSP) to correct the induced artifacts in MEG data. Our results show that louder speech leads to stronger head movements and stronger MEG distortions. The proposed approach successfully removed the speech-related artifacts and retrieved the underlying neurophysiological signals. Because the approach removes the artifacts that overt speech induces in MEG through head movements, it will facilitate research addressing the neural basis of speech production with MEG.
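The abstract above describes a cleanup pipeline that combines regression on head-movement parameters with signal-space projection (SSP). A minimal sketch of that general idea, using MNE-Python, is shown below; the file names, the regressor file, the trigger channel, and the number of projectors are hypothetical assumptions, not the authors' exact pipeline.

```python
# Minimal sketch (MNE-Python) of a regression + SSP cleanup for speech-related
# MEG artifacts. File names, the regressor file, the trigger channel, and the
# number of projectors are hypothetical; this is not the authors' exact pipeline.
import numpy as np
import mne

raw = mne.io.read_raw_fif("speech_task_raw.fif", preload=True)  # hypothetical file

# --- Step 1: regression -------------------------------------------------------
# Six continuous head-movement regressors (3 translations, 3 rotations),
# resampled to the MEG sampling rate; shape (6, n_times). Hypothetical file.
movement = np.load("head_movement_regressors.npy")
design = np.vstack([movement, np.ones((1, movement.shape[1]))]).T  # add intercept

def regress_out(data):
    """Remove the variance in each MEG channel explained by head movement."""
    beta, *_ = np.linalg.lstsq(design, data.T, rcond=None)
    return data - (design @ beta).T

meg_picks = mne.pick_types(raw.info, meg=True)
raw.apply_function(regress_out, picks=meg_picks, channel_wise=False)

# --- Step 2: SSP --------------------------------------------------------------
# Estimate spatial projectors from epochs around speech onset, where the
# residual artifact dominates, and project them out of the recording.
events = mne.find_events(raw, stim_channel="STI 014")  # assumed trigger channel
speech_epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=1.0,
                           baseline=None, preload=True)
projs = mne.compute_proj_epochs(speech_epochs, n_grad=2, n_mag=2, n_eeg=0)
raw.add_proj(projs)
raw.apply_proj()
```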

2020 ◽  
Author(s):  
Omid Abbasi ◽  
Nadine Steingräber ◽  
Joachim Gross

Abstract. Recording brain activity during speech production with magnetoencephalography (MEG) can help us understand the dynamics of speech production. However, these measurements are challenging because of artifacts induced by several sources, such as facial muscle activity and movements of the lower jaw and head. Here, we aimed to characterise speech-related artifacts and to present an approach for removing these artifacts from MEG data. We recorded MEG from 11 healthy participants while they pronounced various syllables at different loudness levels. Head positions and orientations were extracted during speech production to investigate their role in MEG distortions. Finally, we present an artifact rejection approach that combines regression analysis and signal-space projection (SSP) to correct the induced artifacts in MEG data. Our results show that louder speech leads to stronger head movements and stronger MEG distortions. The proposed approach successfully removed the speech-related artifacts and retrieved the underlying neurophysiological signals. Because the approach removes the artifacts induced by overt speech in MEG, it will facilitate research addressing the neural basis of speech production with MEG.


2003 ◽  
Vol 89 (5) ◽  
pp. 2516-2527 ◽  
Author(s):  
Laurent Petit ◽  
Michael S. Beauchamp

We used event-related fMRI to measure brain activity while subjects performed saccadic eye, head, and gaze movements to visually presented targets. Two distinct patterns of response were observed. One set of areas was equally active during eye, head, and gaze movements and consisted of the superior and inferior subdivisions of the frontal eye fields, the supplementary eye field, the intraparietal sulcus, the precuneus, area MT in the lateral occipital sulcus, and, subcortically, the basal ganglia, thalamus, and superior colliculus. These areas have been observed previously in functional imaging studies of human eye movements, suggesting that a common set of brain areas subserves both oculomotor and head-movement control in humans. This is consistent with single-unit recording and microstimulation studies in nonhuman primates that have described overlapping eye- and head-movement representations in oculomotor control areas. A second set of areas was active during head and gaze movements but not during eye movements. This set included the posterior part of the planum temporale and the cortex at the temporoparietal junction, known as the parieto-insular vestibular cortex (PIVC). Activity in PIVC has been observed during imaging studies of invasive vestibular stimulation, and we confirm its role in processing the vestibular cues accompanying natural head movements. Our findings demonstrate that fMRI can be used to study the neural basis of head movements and show that areas that control eye movements also control head movements. In addition, we provide the first evidence for brain activity associated with vestibular input produced by natural head movements, as opposed to invasive caloric or galvanic vestibular stimulation.


Author(s):  
Greig I. de Zubicaray ◽  
Vitória Piai

The cognitive processes and neural mechanisms supporting language production have received considerably less research attention than those involved in language comprehension. This situation is partly attributable to the methodological challenges involved in acquiring electrophysiological and neuroimaging data during overt speech. However, current knowledge is increasing with the addition of lesion-symptom mapping studies in patients, and the application of novel brain-stimulation technologies. This chapter reviews findings from a range of studies investigating the spatial and temporal components of speech production in patients and healthy participants, with a particular emphasis on those employing psycholinguistic paradigms to identify and characterize core processes and components of the network involved in retrieving words from the mental lexicon.


2015 ◽  
Vol 29 (4) ◽  
pp. 135-146 ◽  
Author(s):  
Miroslaw Wyczesany ◽  
Szczepan J. Grzybowski ◽  
Jan Kaiser

Abstract. In this study, the neural basis of emotional reactivity was investigated. Reactivity was operationalized as the impact of emotional pictures on the self-reported ongoing affective state and was used to divide the subjects into high- and low-responder groups. Independent sources of brain activity were identified, localized with the DIPFIT method, and clustered across subjects to analyse the visual evoked potentials to affective pictures. Four of the identified clusters revealed effects of reactivity. The earliest two started about 120 ms after stimulus onset and were located in the occipital lobe and the right temporoparietal junction. Another two, with latencies of about 200 ms, were found in the orbitofrontal and right dorsolateral cortices. Additionally, differences in pre-stimulus alpha level over the visual cortex were observed between the groups. Attentional modulation of perceptual processes is proposed as an early source of emotional reactivity, forming an automatic mechanism of affective control. The role of top-down processes in affective appraisal and, finally, in the experience of ongoing emotional states is also discussed.
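The source-level workflow outlined above (independent component identification, localization, and per-cluster evoked analysis) was carried out with EEGLAB/DIPFIT tooling. The snippet below is a rough, much-reduced analogue in MNE-Python, not the study's pipeline: file names, filter settings, the component count, and event handling are assumptions, and the dipole-localization and cross-subject clustering steps are omitted.

```python
# Rough MNE-Python analogue (not the EEGLAB/DIPFIT pipeline used in the study)
# of the source-level workflow: decompose the EEG into independent components
# and inspect their evoked responses to the affective pictures.
import mne
from mne.preprocessing import ICA

raw = mne.io.read_raw_fif("affective_pictures_raw.fif", preload=True)  # hypothetical
raw.filter(1., 40.)                      # band-pass before ICA decomposition

ica = ICA(n_components=20, random_state=97)
ica.fit(raw)

# Epoch around picture onset and derive per-trial component time courses.
events = mne.find_events(raw)            # assumes a standard stim channel
epochs = mne.Epochs(raw, events, tmin=-0.2, tmax=0.8,
                    baseline=(None, 0), preload=True)
sources = ica.get_sources(epochs)                # component activations per trial
component_erps = sources.average(picks="misc")   # analogous to per-cluster VEPs
print(component_erps.data.shape)                 # (n_components, n_times)
```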


2012 ◽  
Vol 24 (9) ◽  
pp. 1867-1883 ◽  
Author(s):  
Bradley R. Buchsbaum ◽  
Sabrina Lemire-Rodger ◽  
Candice Fang ◽  
Hervé Abdi

When we have a rich and vivid memory of a past experience, it often feels as though we are transported back in time to witness the event once again. Indeed, a perfect memory would exactly mimic the experiential quality of direct sensory perception. We used fMRI and multivoxel pattern analysis to map and quantify the similarity between patterns of activation evoked by direct perception of a diverse set of short video clips and by the vivid remembering, with closed eyes, of those same clips. We found that the patterns of distributed brain activation during vivid memory mimicked the patterns evoked during sensory perception. Using whole-brain patterns of activation evoked by perception of the videos, we were able to accurately classify brain patterns that were elicited when participants tried to vividly recall those same videos. A discriminant analysis of the activation patterns associated with each video revealed a high degree of shared representational similarity between perception and memory, explaining over 80% of the variance. These results show that complex, multifeatured memory involves a partial reinstatement of the whole pattern of brain activity evoked during initial perception of the stimulus.
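The core decoding logic (training on activation patterns evoked by perception and testing on patterns evoked by vivid recall) can be sketched with scikit-learn. The snippet below is an illustrative stand-in rather than the authors' analysis: the feature matrices are random placeholders for per-trial voxel patterns, and a logistic-regression classifier is used in place of the discriminant analysis reported in the paper.

```python
# Illustrative stand-in (scikit-learn) for the perception-to-memory decoding
# logic: fit a classifier on voxel patterns evoked by watching each video and
# test it on patterns evoked by vividly recalling the same videos.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_videos, n_trials_per_video, n_voxels = 9, 10, 2000

# In practice these would be per-trial GLM beta patterns (trials x voxels);
# here they are random placeholders so the sketch runs on its own.
X_perception = rng.normal(size=(n_videos * n_trials_per_video, n_voxels))
X_recall = rng.normal(size=(n_videos * n_trials_per_video, n_voxels))
y = np.repeat(np.arange(n_videos), n_trials_per_video)   # video identity labels

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
clf.fit(X_perception, y)                  # learn video identity from perception
accuracy = clf.score(X_recall, y)         # generalization to vivid recall
print(f"perception-to-memory decoding accuracy: {accuracy:.2f}")
```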


2021 ◽  
Vol 11 (2) ◽  
pp. 196 ◽ 
Author(s):  
Sébastien Laurent ◽  
Laurence Paire-Ficout ◽  
Jean-Michel Boucheix ◽  
Stéphane Argon ◽  
Antonio Hidalgo-Muñoz

The question of whether deafness affects temporal processing remains unanswered: behavioral findings to date are contradictory. The goal of the present study is to analyze the brain activity underlying time estimation using functional near-infrared spectroscopy (fNIRS), which allows examination of the frontal, central, and occipital cortical areas. A total of 37 participants (19 deaf) were recruited. The experimental task involved processing a road scene to determine whether the driver had time to safely execute a driving maneuver, such as overtaking. The road scenes were presented either in animated format or as sequences of three static images showing the beginning, mid-point, and end of the situation. The latter presentation requires a clocking mechanism to estimate the time elapsed between the images in order to evaluate vehicle speed. The results show greater frontal-region activity in deaf people, which suggests that more cognitive effort is needed to process these scenes. The central region, which several studies implicate in clocking, was particularly activated in deaf people by the static presentation during the estimation of time lapses. Exploration of the occipital region yielded no conclusive results. Our results on the frontal and central regions encourage further study of the neural basis of time processing and its links with auditory capacity.
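For readers unfamiliar with fNIRS processing, the snippet below sketches a conventional preprocessing path (raw intensity to optical density to haemoglobin concentration, followed by epoching and averaging) using MNE-Python / MNE-NIRS conventions. The recording path, epoch window, and the use of annotations as event markers are assumptions; this is not the authors' pipeline.

```python
# Conventional fNIRS preprocessing sketch in MNE-Python (not the authors'
# pipeline): raw intensity -> optical density -> haemoglobin concentration,
# then epoching and averaging.
import mne

raw = mne.io.read_raw_nirx("driving_scenes_nirs/", preload=True)  # hypothetical path
raw_od = mne.preprocessing.nirs.optical_density(raw)
raw_hb = mne.preprocessing.nirs.beer_lambert_law(raw_od, ppf=6.0)

events, event_id = mne.events_from_annotations(raw_hb)
epochs = mne.Epochs(raw_hb, events, event_id, tmin=-2.0, tmax=15.0,
                    baseline=(None, 0), preload=True)
evoked_hbo = epochs.average(picks="hbo")    # oxygenated-haemoglobin response
evoked_hbo.plot()                           # inspect the haemodynamic response
```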


2019 ◽  
Author(s):  
Johannes Vosskuhl ◽  
Tuomas P. Mutanen ◽  
Toralf Neuling ◽  
Risto J. Ilmoniemi ◽  
Christoph S. Herrmann

Abstract. Background: To probe the functional role of brain oscillations, transcranial alternating current stimulation (tACS) has proven to be a useful neuroscientific tool. Because of the huge tACS-caused artifact in electroencephalography (EEG) signals, tACS–EEG studies have mostly been limited to comparing brain activity between recordings made before and after concurrent tACS. Critically, attempts to suppress the artifact in the data cannot guarantee that the entire artifact is removed while brain activity is preserved. The current study aims to evaluate the feasibility of specific artifact-correction techniques for cleaning tACS-contaminated EEG data. New method: In the first experiment, we used a phantom head to have full control over the signal to be analyzed. Driving pre-recorded human brain-oscillation signals through a dipolar current source within the phantom, we simultaneously applied tACS and compared the performance of different artifact-correction techniques: sine subtraction, template subtraction, and signal-space projection (SSP). In the second experiment, we combined tACS and EEG in a human subject to validate the best-performing data-correction approach. Results: The tACS artifact was strongly attenuated by SSP in both the phantom and the human EEG; thus, we were able to recover the amplitude and phase of the oscillatory activity. In the human experiment, event-related desynchronization could be restored after correcting the artifact. Comparison with existing methods: The best results were achieved with SSP, which outperformed sine subtraction and template subtraction. Conclusions: Our results demonstrate the feasibility of SSP by applying it to human tACS–EEG data.
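The SSP approach evaluated in this study can be sketched in a few lines of MNE-Python: spatial projectors are estimated from a data segment in which the tACS artifact dominates and are then projected out of the recording. The file name, crop window, and number of EEG projectors below are illustrative assumptions rather than the study's actual parameters.

```python
# Few-line MNE-Python sketch of SSP applied to tACS-contaminated EEG
# (illustrative only; file name, crop window, and projector count are assumed).
import mne

raw = mne.io.read_raw_fif("tacs_eeg_raw.fif", preload=True)   # hypothetical file

# A window recorded during stimulation, where the artifact is orders of
# magnitude larger than brain activity, defines the artifact subspace.
artifact_segment = raw.copy().crop(tmin=60.0, tmax=120.0)
projs = mne.compute_proj_raw(artifact_segment, n_eeg=3, n_grad=0, n_mag=0)

raw.add_proj(projs)
raw.apply_proj()   # EEG with the dominant tACS components projected out
```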


2020 ◽  
Author(s):  
Natalia Barrios ◽  
Matheus Farias ◽  
Marta A Moita

Abstract. Adjusting to a dynamic environment involves fast changes in the body’s internal state, characterized by coordinated alterations in brain activity and physiological and motor responses. Threat-induced defensive states are a classic example of such coordinated adjustment of bodily responses, with cardiac regulation being one of the best characterized in vertebrates. A great deal is known about the neural basis of invertebrate defensive behaviours, mainly in Drosophila melanogaster, but whether physiological changes accompany them has remained unknown. Here, we set out to describe the internal bodily state of fruit flies facing an inescapable threat and found cardiac acceleration during running and deceleration during freezing. In addition, we found that freezing leads to increased cardiac pumping from the abdomen towards the head-thorax, suggesting mobilization of energy resources. Concordantly, threat-triggered freezing reduces sugar levels in the hemolymph and renders flies less resistant to starvation. The cardiac responses observed during freezing were absent during spontaneous immobility, underscoring the active nature of the freezing response. Finally, we show that baseline cardiac activity predicts the amount of freezing upon threat. This work reveals a remarkable similarity with the cardiac responses of vertebrates, suggesting an evolutionarily convergent defensive state in flies. Our findings are at odds with the widespread view that cardiac deceleration during freezing first evolved in vertebrates and that it is energy-sparing. Investigating the physiological changes coupled to defensive behaviours in the fruit fly has revealed that freezing is costly yet accompanied by cardiac deceleration, and points to heart activity as a key modulator of defensive behaviours.


2019 ◽  
Vol 6 (3) ◽  
pp. 181908 ◽  
Author(s):  
Steven Brown ◽  
Peter Cockett ◽  
Ye Yuan

The current study represents a first attempt at examining the neural basis of dramatic acting. While all people play multiple roles in daily life—for example, ‘spouse’ or ‘employee’—these roles are all facets of the ‘self’ and thus of the first-person (1P) perspective. Compared to such everyday role playing, actors are required to portray other people and to adopt their gestures, emotions and behaviours. Consequently, actors must think and behave not as themselves but as the characters they are pretending to be. In other words, they have to assume a ‘fictional first-person’ (Fic1P) perspective. In this functional MRI study, we sought to identify brain regions preferentially activated when actors adopt a Fic1P perspective during dramatic role playing. In the scanner, university-trained actors responded to a series of hypothetical questions from either their own 1P perspective or from that of Romeo (male participants) or Juliet (female participants) from Shakespeare's drama. Compared to responding as oneself, responding in character produced global reductions in brain activity and, particularly, deactivations in the cortical midline network of the frontal lobe, including the dorsomedial and ventromedial prefrontal cortices. Thus, portraying a character through acting seems to be a deactivation-driven process, perhaps representing a ‘loss of self’.


2020 ◽  
Vol 32 (7) ◽  
pp. 1369-1380 ◽  
Author(s):  
Nicola Binetti ◽  
Alessandro Tomassini ◽  
Karl Friston ◽  
Sven Bestmann

Timing emerges from a hierarchy of computations ranging from early encoding of physical duration (time sensation) to abstract time representations (time perception) suitable for storage and decisional processes. However, the neural basis of the perceptual experience of time remains elusive. To address this, we dissociated brain activity uniquely related to lower-level sensory and higher-order perceptual timing operations, using event-related fMRI. Participants compared subsecond (500 msec) sinusoidal gratings drifting at constant velocity (standard) against two probe stimuli: (1) control gratings drifting at constant velocity or (2) accelerating gratings, which induced an illusory shortening of time. We tested two probe intervals: a 500-msec duration (Short) and a longer duration required for an accelerating probe to be perceived as lasting as long as the standard (Long—individually determined). On each trial, participants classified the probe as shorter or longer than the standard. This allowed comparison of trials with an “Objective” (physical) or “Subjective” (perceived) difference in duration, based on participant classifications. Objective duration revealed responses in bilateral early extrastriate areas, extending to higher visual areas in the fusiform gyrus (at more lenient thresholds). By contrast, Subjective duration was reflected in distributed responses across cortical and subcortical areas. These comprised the left superior frontal gyrus and the left cerebellum, as well as a wider set of common timing areas including the basal ganglia, parietal cortex, and posterior cingulate cortex. These results suggest two functionally independent timing stages: early extraction of duration information in sensory cortices and the Subjective experience of duration in higher-order cortical–subcortical timing areas.

