Modulation of the Auditory Cortex during Speech: An MEG Study

2002 ◽  
Vol 14 (8) ◽  
pp. 1125-1138 ◽  
Author(s):  
John F. Houde ◽  
Srikantan S. Nagarajan ◽  
Kensuke Sekihara ◽  
Michael M. Merzenich

Several behavioral and brain imaging studies have demonstrated a significant interaction between speech perception and speech production. In this study, auditory cortical responses to speech were examined during self-production and feedback alteration. Magnetic field recordings were obtained from both hemispheres in subjects who spoke while hearing controlled acoustic versions of their speech feedback via earphones. These responses were compared to recordings made while subjects listened to a tape playback of their production. The amplitude of the tape playback was adjusted to match the amplitude of self-produced speech. Evoked responses to both self-produced and tape-recorded speech were obtained free of movement-related artifacts. Responses to self-produced speech were weaker than responses to tape-recorded speech. Responses to tones were also weaker during speech production than during tape playback of speech. However, responses evoked by gated noise stimuli did not differ between the self-production and tape-playback conditions. These data suggest that during speech production, the auditory cortex (1) attenuates its sensitivity and (2) modulates its activity as a function of the expected acoustic feedback.
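The amplitude matching described above (playback level set to the level of self-produced speech) is, in essence, a simple gain calculation. A minimal sketch follows, assuming RMS level as the matching criterion; the array names and signal parameters are illustrative, not taken from the paper.

```python
import numpy as np

def rms(x):
    """Root-mean-square amplitude of a signal."""
    return np.sqrt(np.mean(np.square(x)))

def match_rms(playback, reference):
    """Scale `playback` so that its RMS level equals that of `reference`."""
    return playback * (rms(reference) / rms(playback))

# Toy signals standing in for recorded speech (1 s at 16 kHz, arbitrary levels)
rng = np.random.default_rng(0)
self_produced = 0.10 * rng.standard_normal(16000)
tape_playback = 0.03 * rng.standard_normal(16000)
matched = match_rms(tape_playback, self_produced)
print(round(rms(matched), 4), round(rms(self_produced), 4))  # identical RMS levels
```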

2020 ◽  
Author(s):  
Vincent van de Ven ◽  
Lourens Waldorp ◽  
Ingrid Christoffels

There is increasing evidence that the hippocampus is involved in language production and verbal communication, although little is known about its possible role. According to one view, the hippocampus contributes semantic memory to spoken language. Alternatively, the hippocampus is involved in processing the (mis)match between the expected sensory consequences of speaking and the perceived speech feedback. In the current study, we re-analysed functional magnetic resonance imaging (fMRI) data from two overt picture-naming studies to test whether the hippocampus is involved in speech production and, if so, whether the results can distinguish between a “pure memory” and an “expectation” account of hippocampal involvement. In both studies, participants overtly named pictures during scanning while hearing their own speech feedback either unimpeded or impaired by a superimposed noise mask. Results showed decreased hippocampal activity when speech feedback was impaired, compared to when feedback was unimpeded. Further, we found increased functional coupling between auditory cortex and hippocampus during unimpeded speech feedback, compared to impaired feedback. Finally, we found significant functional coupling between a hippocampal/supplementary motor area (SMA) interaction term and auditory cortex, anterior cingulate cortex and cerebellum during overt picture naming, but not during listening to one’s own pre-recorded voice. These findings indicate that the hippocampus plays a role in speech production that is in accordance with an “expectation” view of hippocampal functioning.
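The interaction-term coupling mentioned above is, in general form, a regression of a target region's time course on two seed time courses and their product. A minimal sketch of that kind of analysis follows; it is an assumption about the general approach, not the authors' actual model, and all signals here are synthetic.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200                                   # number of fMRI volumes (toy)
hippocampus = rng.standard_normal(n)      # seed 1 time course
sma = rng.standard_normal(n)              # seed 2 time course
interaction = hippocampus * sma           # hippocampal/SMA interaction term
auditory = 0.5 * interaction + rng.standard_normal(n)  # toy target region

# GLM: target ~ seed1 + seed2 + interaction + intercept
X = np.column_stack([hippocampus, sma, interaction, np.ones(n)])
beta, *_ = np.linalg.lstsq(X, auditory, rcond=None)
print(beta)  # the third coefficient estimates the interaction (coupling) effect
```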


1994 ◽  
Vol 24 (3) ◽  
pp. 749-761 ◽  
Author(s):  
I. Leudar ◽  
P. Thomas ◽  
M. Johnston

This paper reports the results of a study on self-monitoring in speech production. Thirty schizophrenics, varying in verbal hallucination and negative symptom status, and 17 controls were tested on the reporter test. The position at which the speech flow was interrupted to repair errors was used to indicate whether errors were detected through monitoring of internal phonetic plans or through external acoustic feedback. We found that internal error detection was twice as frequent in controls as in schizophrenics. The relevance of this finding to Frith's (1992) model of schizophrenia is discussed. Our conclusion is that the problem with internal monitoring of phonetic plans is common to all schizophrenics, and not just to those with verbal hallucinations.


2017 ◽  
Vol 25 (1) ◽  
pp. 423-430 ◽  
Author(s):  
Kayoko Okada ◽  
William Matchin ◽  
Gregory Hickok

2020 ◽  
Vol 11 (1) ◽  
Author(s):  
K. J. Forseth ◽  
G. Hickok ◽  
P. S. Rollo ◽  
N. Tandon

Spoken language, both perception and production, is thought to be facilitated by an ensemble of predictive mechanisms. We obtain intracranial recordings in 37 patients using depth probes implanted along the anteroposterior extent of the supratemporal plane during rhythm listening, speech perception, and speech production. These reveal two predictive mechanisms in early auditory cortex with distinct anatomical and functional characteristics. The first, localized to bilateral Heschl’s gyri and indexed by low-frequency phase, predicts the timing of acoustic events. The second, localized to planum temporale only in language-dominant cortex and indexed by high-gamma power, shows a transient response to acoustic stimuli that is uniquely suppressed during speech production. Chronometric stimulation of Heschl’s gyrus selectively disrupts speech perception, while stimulation of planum temporale selectively disrupts speech production. This work illuminates the fundamental acoustic infrastructure—both architecture and function—for spoken language, grounding cognitive models of speech perception and production in human neurobiology.
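The two neural features named above, low-frequency phase and high-gamma power, are conventionally obtained by band-pass filtering followed by the Hilbert transform. A minimal sketch of that standard extraction follows; the band edges, sampling rate, and toy signal are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 1000.0                      # sampling rate in Hz (assumed)
t = np.arange(0, 2.0, 1 / fs)
lfp = np.sin(2 * np.pi * 3 * t) + 0.2 * np.random.randn(t.size)  # toy trace

def bandpass(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter."""
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# Low-frequency phase, the feature said to predict the timing of acoustic events
low = bandpass(lfp, 1.0, 8.0, fs)
low_phase = np.angle(hilbert(low))

# High-gamma power, the feature said to be suppressed during speech production
hg = bandpass(lfp, 70.0, 150.0, fs)
hg_power = np.abs(hilbert(hg)) ** 2

print(low_phase[:3], hg_power[:3])
```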


2004 ◽  
Vol 92 (6) ◽  
pp. 3522-3531 ◽  
Author(s):  
Kai-Ming G. Fu ◽  
Ankoor S. Shah ◽  
Monica N. O'Connell ◽  
Tammy McGinnis ◽  
Haftan Eckholdt ◽  
...  

We examined the effects of eye position on auditory cortical responses in macaques. Laminar current-source density (CSD) and multiunit activity (MUA) profiles were sampled with linear-array multielectrodes. Eye position significantly modulated auditory-evoked CSD amplitude in 24/29 penetrations (83%) across A1 and belt regions; 4/24 of these cases also showed significant modulation of MUA amplitude. Eye-position effects occurred mainly in the supragranular laminae and lagged the co-located auditory response by, on average, 38 ms. Effects in A1 and belt regions were indistinguishable in amplitude, laminar profile, and latency. The timing and laminar profile of the eye-position effects suggest that they are not combined with auditory signals at a subcortical stage of the lemniscal auditory pathways and simply “fed-forward” into cortex. Rather, these effects may be conveyed to auditory cortex by feedback projections from parietal or frontal cortices, or alternatively, they may be conveyed by nonclassical feedforward projections through auditory koniocellular (calbindin-positive) neurons.
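The laminar CSD profile referred to above is conventionally estimated as the negative second spatial derivative of the field potential across adjacent contacts of the linear array. A minimal sketch of that standard computation, with illustrative contact spacing and conductivity values that are not taken from the paper:

```python
import numpy as np

def csd_second_derivative(lfp, spacing_mm=0.15, sigma=0.3):
    """
    lfp        : array of shape (n_contacts, n_samples), field potential per contact
    spacing_mm : inter-contact spacing of the linear array (illustrative value)
    sigma      : tissue conductivity in S/m (illustrative value)
    Returns CSD for the interior contacts, shape (n_contacts - 2, n_samples).
    """
    d = spacing_mm * 1e-3  # convert to meters
    # Three-point approximation of -sigma * d^2(phi)/dz^2 along the electrode axis
    return -sigma * (lfp[:-2] + lfp[2:] - 2 * lfp[1:-1]) / d**2

# Toy laminar profile: 16 contacts, 500 time samples
lfp = np.random.randn(16, 500) * 1e-5
csd = csd_second_derivative(lfp)
print(csd.shape)  # (14, 500)
```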


2010 ◽  
Vol 30 (4) ◽  
pp. 1314-1321 ◽  
Author(s):  
J. Kauramaki ◽  
I. P. Jaaskelainen ◽  
R. Hari ◽  
R. Mottonen ◽  
J. P. Rauschecker ◽  
...  

2014 ◽  
Vol 369 (1651) ◽  
pp. 20130297 ◽  
Author(s):  
Jeremy I. Skipper

What do we hear when someone speaks and what does auditory cortex (AC) do with that sound? Given how meaningful speech is, it might be hypothesized that AC is most active when other people talk so that their productions get decoded. Here, neuroimaging meta-analyses show the opposite: AC is least active and sometimes deactivated when participants listen to meaningful speech compared with less meaningful sounds. Results are explained by an active hypothesis-and-test mechanism in which speech production (SP) regions are neurally re-used to predict auditory objects associated with available context. On this model, less meaningful sounds elicit more AC activity because predictions from context are less successful, requiring further hypotheses to be tested. This also explains the large overlap of AC co-activity for less meaningful sounds with meta-analyses of SP. An experiment showed a similar pattern of results for non-verbal context. Specifically, words produced less activity in AC and SP regions when preceded by co-speech gestures that visually described those words compared to those words without gestures. Results collectively suggest that what we ‘hear’ during real-world speech perception may come more from the brain than our ears and that the function of AC is to confirm or deny internal predictions about the identity of sounds.


1995 ◽  
Vol 18 (2) ◽  
pp. 344-345 ◽  
Author(s):  
Peter T. Fox

Encoding articulate speech is widely accepted as the principal (or sole) role of the frontal operculum. Clinical observations of speech apraxia have been confirmed by brain-imaging studies of speech production. We present evidence that the frontal operculum also programs limb movements. We argue that this area is a ventral counterpart of the dorsal premotor area. The two are functionally distinguished by specialization for somatic and visual space, respectively.


2019 ◽  
Author(s):  
Kelly K Chong ◽  
Alex G Dunlap ◽  
Dorottya B Kacsoh ◽  
Robert C Liu

Frequency modulations are an inherent feature of many behaviorally relevant sounds, including vocalizations and music. Changing trajectories in a sound’s frequency often convey meaningful information, which can be used to differentiate sound categories, as in the case of intonations in tonal languages. However, it is not clear which response features, in which parts of the auditory cortical pathway, are most important for conveying information about behaviorally relevant frequency modulations, or how these responses change with experience. Here we uncover tuning to subtle variations in frequency trajectories in mouse auditory cortex. Surprisingly, we found that auditory cortical responses could be modulated by variations in a pure-tone trajectory as small as 1/24th of an octave. Offset spiking accounted for a significant portion of tuned responses to subtle frequency modulation. Offset responses in the adult A2, but not those in Core auditory cortex, were plastic in a way that enhanced the representation of an acquired, behaviorally relevant sound category, which we illustrate with the maternal mouse paradigm for natural communication-sound learning. By using this ethologically inspired sound-feature tuning paradigm to drive auditory responses in higher-order neurons, our results demonstrate that auditory cortex can track much finer frequency modulations than previously appreciated, which allows A2 offset responses in particular to attune to the pitch trajectories that distinguish behaviorally relevant, natural sound categories.
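For scale, a 1/24-octave step corresponds to multiplying frequency by 2^(1/24), roughly a 2.9% change. A small worked example (the starting frequency is an illustrative choice, not a stimulus from the study):

```python
# A step of 1/n octave multiplies frequency by 2**(1/n).
f0 = 8000.0                        # illustrative starting frequency in Hz
step = 2 ** (1 / 24)               # one twenty-fourth of an octave
f1 = f0 * step
print(round(f1, 1))                # ~8234.4 Hz
print(round(100 * (step - 1), 2))  # ~2.93 (% change per 1/24-octave step)
```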

