Memory impairment in occipital periventricular hyperintensity patients is associated with reduced functional responses in the insula and Heschl's gyrus

2016, Vol. 127 (6), pp. 493-500
Author(s): Dazhi Duan, Lin Shen, Congyang Li, Chun Cui, Tongsheng Shu, ...
1999, Vol. 82 (5), pp. 2346-2357
Author(s): Mitchell Steinschneider, Igor O. Volkov, M. Daniel Noh, P. Charles Garell, Matthew A. Howard

Voice onset time (VOT) is an important parameter of speech that denotes the time interval between consonant onset and the onset of low-frequency periodicity generated by rhythmic vocal cord vibration. Voiced stop consonants (/b/, /g/, and /d/) in syllable-initial position are characterized by short VOTs, whereas unvoiced stop consonants (/p/, /k/, and /t/) contain prolonged VOTs. As the VOT is increased in incremental steps, perception rapidly changes from a voiced stop consonant to an unvoiced consonant at an interval of 20–40 ms. This abrupt change in consonant identification is an example of categorical speech perception and is a central feature of phonetic discrimination. This study tested the hypothesis that VOT is represented within auditory cortex by transient responses time-locked to consonant and voicing onset. Auditory evoked potentials (AEPs) elicited by stop consonant-vowel (CV) syllables were recorded directly from Heschl's gyrus, the planum temporale, and the superior temporal gyrus in three patients undergoing evaluation for surgical remediation of medically intractable epilepsy. Voiced CV syllables elicited a triphasic sequence of field potentials within Heschl's gyrus. AEPs evoked by unvoiced CV syllables contained additional response components time-locked to voicing onset. Syllables with a VOT of 40, 60, or 80 ms evoked components time-locked to consonant release and voicing onset. In contrast, the syllable with a VOT of 20 ms evoked a markedly diminished response to voicing onset and elicited an AEP very similar in morphology to that evoked by the syllable with a 0-ms VOT. Similar response features were observed in the AEPs evoked by click trains. In this case, there was a marked decrease in amplitude of the transient response to the second click in trains with interpulse intervals of 20–25 ms. Speech-evoked AEPs recorded from the posterior superior temporal gyrus lateral to Heschl's gyrus displayed comparable response features, whereas field potentials recorded from three locations in the planum temporale did not contain components time-locked to voicing onset. This study demonstrates that VOT is at least partially represented in primary and specific secondary auditory cortical fields by synchronized activity time-locked to consonant release and voicing onset. Furthermore, AEPs exhibit features that may facilitate categorical perception of stop consonants, and these response patterns appear to be based on temporal processing limitations within auditory cortex. Demonstrations of similar speech-evoked response patterns in animals support a role for these experimental models in clarifying selected features of speech encoding.
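The central analysis here is epoch averaging time-locked to two events within each syllable: consonant release and voicing onset, separated by the VOT. A minimal sketch of that averaging step is given below; it is an illustration under assumed parameters (sampling rate, window lengths, synthetic data), not the authors' analysis code.

```python
import numpy as np

def averaged_evoked_potential(signal, event_samples, fs, pre_ms=50, post_ms=300):
    """Average single-trial epochs time-locked to a set of event onsets.

    signal        : 1-D array of field-potential samples
    event_samples : sample indices of the time-locking events
                    (e.g., consonant release or voicing onset)
    fs            : sampling rate in Hz
    """
    pre, post = int(pre_ms * fs / 1000), int(post_ms * fs / 1000)
    epochs = [signal[s - pre:s + post] for s in event_samples
              if s - pre >= 0 and s + post <= signal.size]
    return np.mean(epochs, axis=0)  # the averaged evoked potential (AEP)

# Hypothetical recording: one CV syllable per second for 60 s, sampled at 2 kHz.
fs = 2000
signal = np.random.randn(60 * fs)          # stand-in for a Heschl's gyrus trace
release = np.arange(1, 59) * fs            # consonant-release sample indices
voicing = release + int(0.080 * fs)        # voicing onsets for an 80-ms VOT

aep_release = averaged_evoked_potential(signal, release, fs)
aep_voicing = averaged_evoked_potential(signal, voicing, fs)
```

With real recordings, components locked to voicing onset would be expected in aep_voicing for long VOTs (40-80 ms) but markedly diminished at a 20-ms VOT, mirroring the response patterns reported above.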


2003, Vol. 90 (6), pp. 3750-3763
Author(s): John F. Brugge, Igor O. Volkov, P. Charles Garell, Richard A. Reale, Matthew A. Howard

Functional connections between auditory fields on Heschl's gyrus (HG) and the acoustically responsive posterior lateral superior temporal gyrus (field PLST) were studied using electrical stimulation and recording methods in patients undergoing diagnosis and treatment of intractable epilepsy. Averaged auditory (click-train) evoked potentials were recorded from multicontact subdural recording arrays chronically implanted over the lateral surface of the superior temporal gyrus (STG) and from modified depth electrodes inserted into HG. Biphasic electrical pulses (bipolar, constant current, 0.2 ms) were delivered to HG sites while recording from the electrode array over acoustically responsive STG cortex. Stimulation of sites along the mediolateral extent of HG resulted in complex waveforms distributed over response areas on posterolateral STG; these areas overlapped each other and field PLST. For any given HG stimulus site, the morphology of the electrically evoked waveform varied across the STG map. A characteristic waveform was recorded at the site of maximal amplitude of response to stimulation of mesial HG [presumed primary auditory field (AI)]. Latency measurements suggest that the earliest evoked wave resulted from activation of connections within the cortex. Waveforms changed with changes in the rate of electrical HG stimulation or with shifts in the HG stimulus site. The data suggest widespread convergence and divergence of input from HG to posterior STG. Evidence is presented for a reciprocal functional projection from posterolateral STG to HG. The results indicate that in humans there is a processing stream from AI on mesial HG to an auditory association field (PLST) on the lateral surface of the superior temporal gyrus.
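One computational detail implicit in this mapping is locating, for each HG stimulation site, the STG contact with the maximal evoked response and estimating the latency of its largest deflection. The sketch below illustrates only that step, under assumed array shapes and a hypothetical search window; it is not the authors' procedure.

```python
import numpy as np

def maximal_response_site(aeps, fs, window_ms=(5, 50)):
    """Locate the grid contact with the largest electrically evoked response.

    aeps      : array (n_contacts, n_samples) of averaged waveforms evoked by
                stimulating one HG site, with time zero at stimulus onset
    fs        : sampling rate in Hz
    window_ms : post-stimulus window searched for the response
    """
    lo, hi = (int(ms * fs / 1000) for ms in window_ms)
    seg = aeps[:, lo:hi]
    peak_to_peak = seg.max(axis=1) - seg.min(axis=1)
    best = int(np.argmax(peak_to_peak))
    # crude latency estimate: largest absolute deflection at the best contact
    latency_ms = 1000.0 * (lo + int(np.argmax(np.abs(seg[best])))) / fs
    return best, latency_ms

# Hypothetical 64-contact STG grid, 200 ms of post-stimulus data at 2 kHz.
aeps = np.random.randn(64, 400)
print(maximal_response_site(aeps, fs=2000))
```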


NeuroImage, 2016, Vol. 124, pp. 96-106
Author(s): Velia Cardin, Rebecca C. Smittenaar, Eleni Orfanidou, Jerker Rönnberg, Cheryl M. Capek, ...

2003, Vol. 60 (1), p. 69
Author(s): D.R. Cotter, D. Mackay, P. Falkai, C. Beasley, I. Everall

2015, Vol. 25 (03), pp. 1550007
Author(s): Darya Chyzhyk, Manuel Graña, Dost Öngür, Ann K. Shinn

Auditory hallucinations (AH) are a symptom most often associated with schizophrenia, but patients with other neuropsychiatric conditions, and even a small percentage of healthy individuals, may also experience AH. Elucidating the neural mechanisms underlying AH in schizophrenia may offer insight into the pathophysiology of AH more broadly, across multiple neuropsychiatric conditions. In this paper, we address the problem of classifying schizophrenia patients with and without a history of AH, and healthy control (HC) subjects. To this end, we performed feature extraction from resting state functional magnetic resonance imaging (rsfMRI) data and applied machine learning classifiers, testing two kinds of neuroimaging features: (a) functional connectivity (FC) measures computed by lattice auto-associative memories (LAAM), and (b) local activity (LA) measures, including regional homogeneity (ReHo) and fractional amplitude of low frequency fluctuations (fALFF). We show that it is possible to perform classification within each pair of subject groups with high accuracy. Discrimination between patients with and without lifetime AH was highest, while discrimination between schizophrenia patients and HC participants was lowest, suggesting that classification according to the symptom dimension of AH may be more valid than discrimination on the basis of traditional diagnostic categories. FC measures seeded in right Heschl's gyrus (RHG) consistently showed stronger discriminative power than those seeded in left Heschl's gyrus (LHG), a finding that appears to support AH models focusing on right hemisphere abnormalities. The cortical localizations derived from the features with strong classification performance are consistent with proposed AH models and include the left inferior frontal gyrus (IFG), the parahippocampal gyri, the cingulate cortex, and several temporal and prefrontal cortical regions. Overall, the observed findings suggest that computational intelligence approaches can provide robust tools for uncovering subtleties in complex neuroimaging data and have the potential to advance the search for more neuroscience-based criteria for classifying mental illness in psychiatry research.
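As a sketch of the pairwise classification setup described above, the hypothetical example below cross-validates a generic linear SVM on per-subject feature vectors (e.g., vectorized ReHo or fALFF maps, or Heschl's-gyrus-seeded FC values). It is not the authors' pipeline, which relied on LAAM-derived FC features among other methods, and the feature matrix, labels, and parameters are invented for illustration.

```python
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def pairwise_accuracy(features, labels, group_a, group_b, n_splits=5):
    """Cross-validated accuracy for discriminating two subject groups.

    features : array (n_subjects, n_features), one rsfMRI-derived feature
               vector per subject (e.g., ReHo, fALFF, or seed-based FC)
    labels   : array of group codes, e.g., 0 = HC, 1 = AH+, 2 = AH-
    """
    mask = np.isin(labels, [group_a, group_b])
    X, y = features[mask], labels[mask]
    clf = make_pipeline(StandardScaler(), LinearSVC(C=1.0, dual=False))
    cv = StratifiedKFold(n_splits=n_splits, shuffle=True, random_state=0)
    return cross_val_score(clf, X, y, cv=cv).mean()

# Hypothetical data: 60 subjects (20 per group), 500 features per subject.
rng = np.random.default_rng(0)
features = rng.standard_normal((60, 500))
labels = np.repeat([0, 1, 2], 20)                 # HC, AH+, AH-
print(pairwise_accuracy(features, labels, 1, 2))  # AH+ vs AH- discrimination
```

Comparing such pairwise accuracies (AH+ vs AH-, patients vs HC) is the comparison behind the conclusion that classification along the AH symptom dimension can outperform classification by traditional diagnostic category.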


2001, Vol. 86 (6), pp. 2761-2788
Author(s): Yonatan I. Fishman, Igor O. Volkov, M. Daniel Noh, P. Charles Garell, Hans Bakken, ...

Some musical chords sound pleasant, or consonant, while others sound unpleasant, or dissonant. Helmholtz's psychoacoustic theory of consonance and dissonance attributes the perception of dissonance to the sensation of “beats” and “roughness” caused by interactions in the auditory periphery between adjacent partials of complex tones comprising a musical chord. Conversely, consonance is characterized by the relative absence of beats and roughness. Physiological studies in monkeys suggest that roughness may be represented in primary auditory cortex (A1) by oscillatory neuronal ensemble responses phase-locked to the amplitude-modulated temporal envelope of complex sounds. However, it remains unknown whether phase-locked responses also underlie the representation of dissonance in auditory cortex. In the present study, responses evoked by musical chords with varying degrees of consonance and dissonance were recorded in A1 of awake macaques and evaluated using auditory-evoked potential (AEP), multiunit activity (MUA), and current-source density (CSD) techniques. In parallel studies, intracranial AEPs evoked by the same musical chords were recorded directly from the auditory cortex of two human subjects undergoing surgical evaluation for medically intractable epilepsy. Chords were composed of two simultaneous harmonic complex tones. The magnitude of oscillatory phase-locked activity in A1 of the monkey correlates with the perceived dissonance of the musical chords. Responses evoked by dissonant chords, such as minor and major seconds, display oscillations phase-locked to the predicted difference frequencies, whereas responses evoked by consonant chords, such as octaves and perfect fifths, display little or no phase-locked activity. AEPs recorded in Heschl's gyrus display strikingly similar oscillatory patterns to those observed in monkey A1, with dissonant chords eliciting greater phase-locked activity than consonant chords. In contrast to recordings in Heschl's gyrus, AEPs recorded in the planum temporale do not display significant phase-locked activity, suggesting functional differentiation of auditory cortical regions in humans. These findings support the relevance of synchronous phase-locked neural ensemble activity in A1 for the physiological representation of sensory dissonance in humans and highlight the merits of complementary monkey/human studies in the investigation of neural substrates underlying auditory perception.
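The Helmholtz account summarized above can be made concrete with a short numerical check: dissonant intervals place partials of the two tones close together, producing slow difference frequencies (beats) that phase-locked cortical responses can follow, while consonant intervals do not. The sketch below lists those difference frequencies for two hypothetical A4-based chords; the fundamentals, harmonic count, and 80-Hz cutoff are arbitrary assumptions, not taken from the study.

```python
import numpy as np

def slow_beats(f1, f2, n_harmonics=6, max_beat_hz=80.0):
    """Difference frequencies between adjacent partials of a two-tone chord.

    f1, f2      : fundamental frequencies of the two harmonic complex tones
    n_harmonics : number of harmonics per tone
    max_beat_hz : keep only differences slow enough to be heard as beats or
                  roughness (and to appear as phase-locked oscillations)
    """
    partials = np.concatenate([f1 * np.arange(1, n_harmonics + 1),
                               f2 * np.arange(1, n_harmonics + 1)])
    partials.sort()
    diffs = np.diff(partials)
    return diffs[(diffs > 0) & (diffs <= max_beat_hz)]

a4 = 440.0
print(slow_beats(a4, a4 * 2 ** (1 / 12)))  # minor second: beats near 26, 52, 78 Hz
print(slow_beats(a4, a4 * 3 / 2))          # perfect fifth: no slow beats
```

With these assumed parameters, the minor second yields predicted difference frequencies of roughly 26, 52, and 78 Hz, while the perfect fifth yields none below the cutoff, which is the contrast the phase-locked responses described above track.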

